Dataset columns (types and value statistics from the dataset viewer):

- status: string (1 class)
- repo_name: string (31 classes)
- repo_url: string (31 classes)
- issue_id: int64 (1 to 104k)
- title: string (length 4 to 369)
- body: string (length 0 to 254k, nullable)
- issue_url: string (length 37 to 56)
- pull_url: string (length 37 to 54)
- before_fix_sha: string (length 40)
- after_fix_sha: string (length 40)
- report_datetime: timestamp[us, tz=UTC]
- language: string (5 classes)
- commit_datetime: timestamp[us, tz=UTC]
- updated_file: string (length 4 to 188)
- file_content: string (length 0 to 5.12M)

One row follows:
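For reference, a single row with this schema can be modelled as a plain mapping from column names to values. This is a minimal sketch using abbreviated values from the row below, not any particular dataset-library API:

```python
# One dataset row modelled as a plain dict (values taken from the row shown
# below; illustrative only, not a specific dataset-loading API).
row = {
    "status": "closed",
    "repo_name": "ansible/ansible",
    "repo_url": "https://github.com/ansible/ansible",
    "issue_id": 59414,
    "language": "python",
    "updated_file": "lib/ansible/executor/task_executor.py",
    "before_fix_sha": "5a7f579d86b52754d5634fd6719882158c01ec18",
    "after_fix_sha": "1010363c0bebaf4ef3c34ac858d74de5ca01fc7b",
}

# per the schema, both SHA columns hold full 40-character git object ids
assert all(len(row[k]) == 40 for k in ("before_fix_sha", "after_fix_sha"))
```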
status: closed
repo_name: ansible/ansible
repo_url: https://github.com/ansible/ansible
issue_id: 59414
title: connection/shell/become vars are removed when in a loop
body:
##### SUMMARY
When a variable defined in a task's `vars:` directive relates to the connection, shell, or become plugin used by that task, it is no longer defined once the task runs in a loop. The variable is present for the first iteration of the loop, but subsequent iterations remove it, causing all sorts of unexpected behaviour.
This is an unfortunate side effect of https://github.com/ansible/ansible/pull/59024, which simply removes the variables from the task vars.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
task_executor
##### ANSIBLE VERSION
```paste below
devel
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
Run the following playbook
```yaml
---
- hosts: localhost
  gather_facts: no
  tasks:
  - ping:
    vars:
      ansible_remote_tmp: /tmp/test1
    with_items:
    - 1
    - 2

  - debug:
      var: ansible_remote_tmp
    vars:
      ansible_remote_tmp: /tmp/test1
    with_items:
    - 1
    - 2
```
##### EXPECTED RESULTS
Both loop iterations use `/tmp/test1` as the remote temp directory that stores the AnsiballZ payload.
##### ACTUAL RESULTS
Only the first iteration uses `/tmp/test1`; the second falls back to the default because `ansible_remote_tmp` from the task vars is no longer defined.
```paste below
ansible-playbook 2.9.0.dev0
  config file = /home/jborean/dev/ansible-tester/ansible.cfg
  configured module search path = ['/home/jborean/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/jborean/dev/ansible/lib/ansible
  executable location = /home/jborean/dev/ansible/bin/ansible-playbook
  python version = 3.7.4 (default, Jul 10 2019, 15:18:20) [GCC 9.1.1 20190503 (Red Hat 9.1.1-1)]
Using /home/jborean/dev/ansible-tester/ansible.cfg as config file
host_list declined parsing /home/jborean/dev/ansible-tester/inventory.ini as it did not pass its verify_file() method
auto declined parsing /home/jborean/dev/ansible-tester/inventory.ini as it did not pass its verify_file() method
yaml declined parsing /home/jborean/dev/ansible-tester/inventory.ini as it did not pass its verify_file() method
Parsed /home/jborean/dev/ansible-tester/inventory.ini inventory source with ini plugin

PLAYBOOK: test.yml **************************************************************************************
1 plays in test.yml

PLAY [localhost] ****************************************************************************************
META: ran handlers

TASK [ping] *********************************************************************************************
task path: /home/jborean/dev/ansible-tester/test.yml:5
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: jborean
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /tmp/test1/ansible-tmp-1563847217.167148-225396898656963 `" && echo ansible-tmp-1563847217.167148-225396898656963="` echo /tmp/test1/ansible-tmp-1563847217.167148-225396898656963 `" ) && sleep 0'
Using module file /home/jborean/dev/ansible/lib/ansible/modules/system/ping.py
<127.0.0.1> PUT /home/jborean/.ansible/tmp/ansible-local-245768n0yb3g5/tmpbf7ibfe5 TO /tmp/test1/ansible-tmp-1563847217.167148-225396898656963/AnsiballZ_ping.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /tmp/test1/ansible-tmp-1563847217.167148-225396898656963/ /tmp/test1/ansible-tmp-1563847217.167148-225396898656963/AnsiballZ_ping.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/home/jborean/venvs/ansible-py37/bin/python /tmp/test1/ansible-tmp-1563847217.167148-225396898656963/AnsiballZ_ping.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /tmp/test1/ansible-tmp-1563847217.167148-225396898656963/ > /dev/null 2>&1 && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'echo ~jborean && sleep 0'
ok: [localhost] => (item=1) => {
    "ansible_loop_var": "item",
    "changed": false,
    "invocation": {
        "module_args": {
            "data": "pong"
        }
    },
    "item": 1,
    "ping": "pong"
}
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/jborean/.ansible/tmp/ansible-tmp-1563847217.3545752-117775520192534 `" && echo ansible-tmp-1563847217.3545752-117775520192534="` echo /home/jborean/.ansible/tmp/ansible-tmp-1563847217.3545752-117775520192534 `" ) && sleep 0'
Using module file /home/jborean/dev/ansible/lib/ansible/modules/system/ping.py
<127.0.0.1> PUT /home/jborean/.ansible/tmp/ansible-local-245768n0yb3g5/tmpl2ss9stw TO /home/jborean/.ansible/tmp/ansible-tmp-1563847217.3545752-117775520192534/AnsiballZ_ping.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/jborean/.ansible/tmp/ansible-tmp-1563847217.3545752-117775520192534/ /home/jborean/.ansible/tmp/ansible-tmp-1563847217.3545752-117775520192534/AnsiballZ_ping.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/home/jborean/venvs/ansible-py37/bin/python /home/jborean/.ansible/tmp/ansible-tmp-1563847217.3545752-117775520192534/AnsiballZ_ping.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/jborean/.ansible/tmp/ansible-tmp-1563847217.3545752-117775520192534/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => (item=2) => {
    "ansible_loop_var": "item",
    "changed": false,
    "invocation": {
        "module_args": {
            "data": "pong"
        }
    },
    "item": 2,
    "ping": "pong"
}

TASK [debug] ********************************************************************************************
task path: /home/jborean/dev/ansible-tester/test.yml:12
ok: [localhost] => (item=1) => {
    "ansible_loop_var": "item",
    "ansible_remote_tmp": "/tmp/test1",
    "item": 1
}
ok: [localhost] => (item=2) => {
    "ansible_loop_var": "item",
    "ansible_remote_tmp": "VARIABLE IS NOT DEFINED!: 'ansible_remote_tmp' is undefined",
    "item": 2
}
META: ran handlers
META: ran handlers

PLAY RECAP **********************************************************************************************
localhost                  : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
```
This happens because, after the first loop iteration, all task vars that are options for the task's connection, shell, or become plugin are removed.
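The root cause can be illustrated with a minimal, self-contained sketch. This is not Ansible's real implementation; `PLUGIN_VARS` and `run_item` are illustrative stand-ins for the plugin-variable cleanup that `_run_loop` performs on the shared task vars dict:

```python
# Sketch (NOT Ansible's real code) of why deleting plugin-related variables
# from the *shared* task_vars dict breaks later loop iterations.
PLUGIN_VARS = {'ansible_remote_tmp'}  # options consumed by connection/shell/become plugins


def run_item(task_vars, item):
    # the plugin option is read from task_vars while the item executes
    remote_tmp = task_vars.get('ansible_remote_tmp', '~/.ansible/tmp')
    # cleanup mirroring the bug: vars are removed from the shared dict
    # instead of from a per-iteration copy
    for var in PLUGIN_VARS:
        task_vars.pop(var, None)
    return remote_tmp


task_vars = {'ansible_remote_tmp': '/tmp/test1'}
results = [run_item(task_vars, item) for item in (1, 2)]
# first iteration sees /tmp/test1, second falls back to the default
```

Because `task_vars` here is the single shared dict (just as `_run_loop` reuses `self._job_vars`), the deletion in iteration one is visible to iteration two.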
issue_url: https://github.com/ansible/ansible/issues/59414
pull_url: https://github.com/ansible/ansible/pull/59426
before_fix_sha: 5a7f579d86b52754d5634fd6719882158c01ec18
after_fix_sha: 1010363c0bebaf4ef3c34ac858d74de5ca01fc7b
report_datetime: 2019-07-23T02:04:07Z
language: python
commit_datetime: 2019-07-24T09:35:14Z
updated_file: lib/ansible/executor/task_executor.py
file_content:
# (c) 2012-2014, Michael DeHaan <[email protected]>
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import os
import re
import pty
import time
import json
import subprocess
import sys
import termios
import traceback

from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleParserError, AnsibleUndefinedVariable, AnsibleConnectionFailure, AnsibleActionFail, AnsibleActionSkip
from ansible.executor.task_result import TaskResult
from ansible.executor.module_common import get_action_args_with_defaults
from ansible.module_utils.six import iteritems, string_types, binary_type
from ansible.module_utils.six.moves import xrange
from ansible.module_utils._text import to_text, to_native
from ansible.module_utils.connection import write_to_file_descriptor
from ansible.playbook.conditional import Conditional
from ansible.playbook.task import Task
from ansible.plugins.loader import become_loader, cliconf_loader, connection_loader, httpapi_loader, netconf_loader, terminal_loader
from ansible.template import Templar
from ansible.utils.collection_loader import AnsibleCollectionLoader
from ansible.utils.listify import listify_lookup_plugin_terms
from ansible.utils.unsafe_proxy import UnsafeProxy, wrap_var
from ansible.vars.clean import namespace_facts, clean_facts
from ansible.utils.display import Display
from ansible.utils.vars import combine_vars, isidentifier

display = Display()

__all__ = ['TaskExecutor']


def remove_omit(task_args, omit_token):
    '''
    Remove args with a value equal to the ``omit_token`` recursively
    to align with now having suboptions in the argument_spec
    '''

    if not isinstance(task_args, dict):
        return task_args

    new_args = {}
    for i in iteritems(task_args):
        if i[1] == omit_token:
            continue
        elif isinstance(i[1], dict):
            new_args[i[0]] = remove_omit(i[1], omit_token)
        elif isinstance(i[1], list):
            new_args[i[0]] = [remove_omit(v, omit_token) for v in i[1]]
        else:
            new_args[i[0]] = i[1]

    return new_args


class TaskExecutor:

    '''
    This is the main worker class for the executor pipeline, which
    handles loading an action plugin to actually dispatch the task to
    a given host. This class roughly corresponds to the old Runner()
    class.
    '''

    # Modules that we optimize by squashing loop items into a single call to
    # the module
    SQUASH_ACTIONS = frozenset(C.DEFAULT_SQUASH_ACTIONS)

    def __init__(self, host, task, job_vars, play_context, new_stdin, loader, shared_loader_obj, final_q):
        self._host = host
        self._task = task
        self._job_vars = job_vars
        self._play_context = play_context
        self._new_stdin = new_stdin
        self._loader = loader
        self._shared_loader_obj = shared_loader_obj
        self._connection = None
        self._final_q = final_q
        self._loop_eval_error = None

        self._task.squash()

    def run(self):
        '''
        The main executor entrypoint, where we determine if the specified
        task requires looping and either runs the task with self._run_loop()
        or self._execute(). After that, the returned results are parsed and
        returned as a dict.
        '''

        display.debug("in run() - task %s" % self._task._uuid)

        try:
            try:
                items = self._get_loop_items()
            except AnsibleUndefinedVariable as e:
                # save the error raised here for use later
                items = None
                self._loop_eval_error = e

            if items is not None:
                if len(items) > 0:
                    item_results = self._run_loop(items)

                    # create the overall result item
                    res = dict(results=item_results)

                    # loop through the item results, and set the global changed/failed result flags based on any item.
                    for item in item_results:
                        if 'changed' in item and item['changed'] and not res.get('changed'):
                            res['changed'] = True
                        if 'failed' in item and item['failed']:
                            item_ignore = item.pop('_ansible_ignore_errors')
                            if not res.get('failed'):
                                res['failed'] = True
                                res['msg'] = 'One or more items failed'
                                self._task.ignore_errors = item_ignore
                            elif self._task.ignore_errors and not item_ignore:
                                self._task.ignore_errors = item_ignore

                        # ensure to accumulate these
                        for array in ['warnings', 'deprecations']:
                            if array in item and item[array]:
                                if array not in res:
                                    res[array] = []
                                if not isinstance(item[array], list):
                                    item[array] = [item[array]]
                                res[array] = res[array] + item[array]
                                del item[array]

                    if not res.get('failed', False):
                        res['msg'] = 'All items completed'
                else:
                    res = dict(changed=False, skipped=True, skipped_reason='No items in the list', results=[])
            else:
                display.debug("calling self._execute()")
                res = self._execute()
                display.debug("_execute() done")

            # make sure changed is set in the result, if it's not present
            if 'changed' not in res:
                res['changed'] = False

            def _clean_res(res, errors='surrogate_or_strict'):
                if isinstance(res, UnsafeProxy):
                    return res._obj
                elif isinstance(res, binary_type):
                    return to_text(res, errors=errors)
                elif isinstance(res, dict):
                    for k in res:
                        try:
                            res[k] = _clean_res(res[k], errors=errors)
                        except UnicodeError:
                            if k == 'diff':
                                # If this is a diff, substitute a replacement character if the value
                                # is undecodable as utf8. (Fix #21804)
                                display.warning("We were unable to decode all characters in the module return data."
                                                " Replaced some in an effort to return as much as possible")
                                res[k] = _clean_res(res[k], errors='surrogate_then_replace')
                            else:
                                raise
                elif isinstance(res, list):
                    for idx, item in enumerate(res):
                        res[idx] = _clean_res(item, errors=errors)
                return res

            display.debug("dumping result to json")
            res = _clean_res(res)
            display.debug("done dumping result, returning")
            return res
        except AnsibleError as e:
            return dict(failed=True, msg=wrap_var(to_text(e, nonstring='simplerepr')), _ansible_no_log=self._play_context.no_log)
        except Exception as e:
            return dict(failed=True, msg='Unexpected failure during module execution.', exception=to_text(traceback.format_exc()),
                        stdout='', _ansible_no_log=self._play_context.no_log)
        finally:
            try:
                self._connection.close()
            except AttributeError:
                pass
            except Exception as e:
                display.debug(u"error closing connection: %s" % to_text(e))

    def _get_loop_items(self):
        '''
        Loads a lookup plugin to handle the with_* portion of a task (if specified),
        and returns the items result.
        '''

        # save the play context variables to a temporary dictionary,
        # so that we can modify the job vars without doing a full copy
        # and later restore them to avoid modifying things too early
        play_context_vars = dict()
        self._play_context.update_vars(play_context_vars)

        old_vars = dict()
        for k in play_context_vars:
            if k in self._job_vars:
                old_vars[k] = self._job_vars[k]
            self._job_vars[k] = play_context_vars[k]

        # get search path for this task to pass to lookup plugins
        self._job_vars['ansible_search_path'] = self._task.get_search_path()

        # ensure basedir is always in (dwim already searches here but we need to display it)
        if self._loader.get_basedir() not in self._job_vars['ansible_search_path']:
            self._job_vars['ansible_search_path'].append(self._loader.get_basedir())

        templar = Templar(loader=self._loader, shared_loader_obj=self._shared_loader_obj, variables=self._job_vars)
        items = None
        loop_cache = self._job_vars.get('_ansible_loop_cache')
        if loop_cache is not None:
            # _ansible_loop_cache may be set in `get_vars` when calculating `delegate_to`
            # to avoid reprocessing the loop
            items = loop_cache
        elif self._task.loop_with:
            if self._task.loop_with in self._shared_loader_obj.lookup_loader:
                fail = True
                if self._task.loop_with == 'first_found':
                    # first_found loops are special. If the item is undefined then we want to fall through to the next value rather than failing.
                    fail = False

                loop_terms = listify_lookup_plugin_terms(terms=self._task.loop, templar=templar, loader=self._loader, fail_on_undefined=fail,
                                                         convert_bare=False)
                if not fail:
                    loop_terms = [t for t in loop_terms if not templar.is_template(t)]

                # get lookup
                mylookup = self._shared_loader_obj.lookup_loader.get(self._task.loop_with, loader=self._loader, templar=templar)

                # give lookup task 'context' for subdir (mostly needed for first_found)
                for subdir in ['template', 'var', 'file']:  # TODO: move this to constants?
                    if subdir in self._task.action:
                        break
                setattr(mylookup, '_subdir', subdir + 's')

                # run lookup
                items = mylookup.run(terms=loop_terms, variables=self._job_vars, wantlist=True)
            else:
                raise AnsibleError("Unexpected failure in finding the lookup named '%s' in the available lookup plugins" % self._task.loop_with)

        elif self._task.loop is not None:
            items = templar.template(self._task.loop)
            if not isinstance(items, list):
                raise AnsibleError(
                    "Invalid data passed to 'loop', it requires a list, got this instead: %s."
                    " Hint: If you passed a list/dict of just one element,"
                    " try adding wantlist=True to your lookup invocation or use q/query instead of lookup." % items
                )

        # now we restore any old job variables that may have been modified,
        # and delete them if they were in the play context vars but not in
        # the old variables dictionary
        for k in play_context_vars:
            if k in old_vars:
                self._job_vars[k] = old_vars[k]
            else:
                del self._job_vars[k]

        if items:
            for idx, item in enumerate(items):
                if item is not None and not isinstance(item, UnsafeProxy):
                    items[idx] = UnsafeProxy(item)

        return items

    def _run_loop(self, items):
        '''
        Runs the task with the loop items specified and collates the result
        into an array named 'results' which is inserted into the final result
        along with the item for which the loop ran.
        '''

        results = []

        # make copies of the job vars and task so we can add the item to
        # the variables and re-validate the task with the item variable
        # task_vars = self._job_vars.copy()
        task_vars = self._job_vars

        loop_var = 'item'
        index_var = None
        label = None
        loop_pause = 0
        extended = False
        templar = Templar(loader=self._loader, shared_loader_obj=self._shared_loader_obj, variables=self._job_vars)

        # FIXME: move this to the object itself to allow post_validate to take care of templating (loop_control.post_validate)
        if self._task.loop_control:
            loop_var = templar.template(self._task.loop_control.loop_var)
            index_var = templar.template(self._task.loop_control.index_var)
            loop_pause = templar.template(self._task.loop_control.pause)
            extended = templar.template(self._task.loop_control.extended)

            # This may be 'None', so it is templated below after we ensure a value and an item is assigned
            label = self._task.loop_control.label

        # ensure we always have a label
        if label is None:
            label = '{{' + loop_var + '}}'

        if loop_var in task_vars:
            display.warning(u"The loop variable '%s' is already in use. "
                            u"You should set the `loop_var` value in the `loop_control` option for the task"
                            u" to something else to avoid variable collisions and unexpected behavior." % loop_var)

        ran_once = False
        if self._task.loop_with:
            # Only squash with 'with_:' not with the 'loop:', 'magic' squashing can be removed once with_ loops are
            items = self._squash_items(items, loop_var, task_vars)

        no_log = False
        items_len = len(items)
        for item_index, item in enumerate(items):
            task_vars['ansible_loop_var'] = loop_var
            task_vars[loop_var] = item
            if index_var:
                task_vars['ansible_index_var'] = index_var
                task_vars[index_var] = item_index

            if extended:
                task_vars['ansible_loop'] = {
                    'allitems': items,
                    'index': item_index + 1,
                    'index0': item_index,
                    'first': item_index == 0,
                    'last': item_index + 1 == items_len,
                    'length': items_len,
                    'revindex': items_len - item_index,
                    'revindex0': items_len - item_index - 1,
                }
                try:
                    task_vars['ansible_loop']['nextitem'] = items[item_index + 1]
                except IndexError:
                    pass
                if item_index - 1 >= 0:
                    task_vars['ansible_loop']['previtem'] = items[item_index - 1]

            # Update template vars to reflect current loop iteration
            templar.available_variables = task_vars

            # pause between loop iterations
            if loop_pause and ran_once:
                try:
                    time.sleep(float(loop_pause))
                except ValueError as e:
                    raise AnsibleError('Invalid pause value: %s, produced error: %s' % (loop_pause, to_native(e)))
            else:
                ran_once = True

            try:
                tmp_task = self._task.copy(exclude_parent=True, exclude_tasks=True)
                tmp_task._parent = self._task._parent
                tmp_play_context = self._play_context.copy()
            except AnsibleParserError as e:
                results.append(dict(failed=True, msg=to_text(e)))
                continue

            # now we swap the internal task and play context with their copies,
            # execute, and swap them back so we can do the next iteration cleanly
            (self._task, tmp_task) = (tmp_task, self._task)
            (self._play_context, tmp_play_context) = (tmp_play_context, self._play_context)
            res = self._execute(variables=task_vars)
            task_fields = self._task.dump_attrs()
            (self._task, tmp_task) = (tmp_task, self._task)
            (self._play_context, tmp_play_context) = (tmp_play_context, self._play_context)

            # update 'general no_log' based on specific no_log
            no_log = no_log or tmp_task.no_log

            # now update the result with the item info, and append the result
            # to the list of results
            res[loop_var] = item
            res['ansible_loop_var'] = loop_var
            if index_var:
                res[index_var] = item_index
                res['ansible_index_var'] = index_var
            if extended:
                res['ansible_loop'] = task_vars['ansible_loop']

            res['_ansible_item_result'] = True
            res['_ansible_ignore_errors'] = task_fields.get('ignore_errors')

            # gets templated here unlike rest of loop_control fields, depends on loop_var above
            try:
                res['_ansible_item_label'] = templar.template(label, cache=False)
            except AnsibleUndefinedVariable as e:
                res.update({
                    'failed': True,
                    'msg': 'Failed to template loop_control.label: %s' % to_text(e)
                })

            self._final_q.put(
                TaskResult(
                    self._host.name,
                    self._task._uuid,
                    res,
                    task_fields=task_fields,
                ),
                block=False,
            )
            results.append(res)
            del task_vars[loop_var]

            # clear 'connection related' plugin variables for next iteration
            if self._connection:
                clear_plugins = {
                    'connection': self._connection._load_name,
                    'shell': self._connection._shell._load_name
                }
                if self._connection.become:
                    clear_plugins['become'] = self._connection.become._load_name

                for plugin_type, plugin_name in iteritems(clear_plugins):
                    for var in C.config.get_plugin_vars(plugin_type, plugin_name):
                        if var in task_vars:
                            del task_vars[var]

        self._task.no_log = no_log

        return results

    def _squash_items(self, items, loop_var, variables):
        '''
        Squash items down to a comma-separated list for certain modules which support it
        (typically package management modules).
        '''
        name = None
        try:
            # _task.action could contain templatable strings (via action: and
            # local_action:) Template it before comparing. If we don't end up
            # optimizing it here, the templatable string might use template vars
            # that aren't available until later (it could even use vars from the
            # with_items loop) so don't make the templated string permanent yet.
            templar = Templar(loader=self._loader, shared_loader_obj=self._shared_loader_obj, variables=variables)
            task_action = self._task.action
            if templar.is_template(task_action):
                task_action = templar.template(task_action, fail_on_undefined=False)

            if len(items) > 0 and task_action in self.SQUASH_ACTIONS:
                if all(isinstance(o, string_types) for o in items):
                    final_items = []

                    found = None
                    for allowed in ['name', 'pkg', 'package']:
                        name = self._task.args.pop(allowed, None)
                        if name is not None:
                            found = allowed
                            break

                    # This gets the information to check whether the name field
                    # contains a template that we can squash for
                    template_no_item = template_with_item = None
                    if name:
                        if templar.is_template(name):
                            variables[loop_var] = '\0$'
                            template_no_item = templar.template(name, variables, cache=False)
                            variables[loop_var] = '\0@'
                            template_with_item = templar.template(name, variables, cache=False)
                            del variables[loop_var]

                        # Check if the user is doing some operation that doesn't take
                        # name/pkg or the name/pkg field doesn't have any variables
                        # and thus the items can't be squashed
                        if template_no_item != template_with_item:
                            if self._task.loop_with and self._task.loop_with not in ('items', 'list'):
                                value_text = "\"{{ query('%s', %r) }}\"" % (self._task.loop_with, self._task.loop)
                            else:
                                value_text = '%r' % self._task.loop
                            # Without knowing the data structure well, it's easiest to strip python2 unicode
                            # literals after stringifying
                            value_text = re.sub(r"\bu'", "'", value_text)

                            display.deprecated(
                                'Invoking "%s" only once while using a loop via squash_actions is deprecated. '
                                'Instead of using a loop to supply multiple items and specifying `%s: "%s"`, '
                                'please use `%s: %s` and remove the loop' % (self._task.action, found, name, found, value_text),
                                version='2.11'
                            )
                            for item in items:
                                variables[loop_var] = item
                                if self._task.evaluate_conditional(templar, variables):
                                    new_item = templar.template(name, cache=False)
                                    final_items.append(new_item)
                            self._task.args['name'] = final_items
                            # Wrap this in a list so that the calling function loop
                            # executes exactly once
                            return [final_items]
                        else:
                            # Restore the name parameter
                            self._task.args['name'] = name
            # elif:
                # Right now we only optimize single entries. In the future we
                # could optimize more types:
                # * lists can be squashed together
                # * dicts could squash entries that match in all cases except the
                #   name or pkg field.
        except Exception:
            # Squashing is an optimization. If it fails for any reason,
            # simply use the unoptimized list of items.

            # Restore the name parameter
            if name is not None:
                self._task.args['name'] = name
        return items

    def _execute(self, variables=None):
        '''
        The primary workhorse of the executor system, this runs the task
        on the specified host (which may be the delegated_to host) and handles
        the retry/until and block rescue/always execution
        '''

        if variables is None:
            variables = self._job_vars

        templar = Templar(loader=self._loader, shared_loader_obj=self._shared_loader_obj, variables=variables)

        context_validation_error = None
        try:
            # apply the given task's information to the connection info,
            # which may override some fields already set by the play or
            # the options specified on the command line
            self._play_context = self._play_context.set_task_and_variable_override(task=self._task, variables=variables, templar=templar)

            # fields set from the play/task may be based on variables, so we have to
            # do the same kind of post validation step on it here before we use it.
            self._play_context.post_validate(templar=templar)

            # now that the play context is finalized, if the remote_addr is not set
            # default to using the host's address field as the remote address
            if not self._play_context.remote_addr:
                self._play_context.remote_addr = self._host.address

            # We also add "magic" variables back into the variables dict to make sure
            # a certain subset of variables exist.
            self._play_context.update_vars(variables)

            # FIXME: update connection/shell plugin options
        except AnsibleError as e:
            # save the error, which we'll raise later if we don't end up
            # skipping this task during the conditional evaluation step
            context_validation_error = e

        # Evaluate the conditional (if any) for this task, which we do before running
        # the final task post-validation. We do this before the post validation due to
        # the fact that the conditional may specify that the task be skipped due to a
        # variable not being present which would otherwise cause validation to fail
        try:
            if not self._task.evaluate_conditional(templar, variables):
                display.debug("when evaluation is False, skipping this task")
                return dict(changed=False, skipped=True, skip_reason='Conditional result was False', _ansible_no_log=self._play_context.no_log)
        except AnsibleError:
            # loop error takes precedence
            if self._loop_eval_error is not None:
                raise self._loop_eval_error  # pylint: disable=raising-bad-type
            raise

        # Not skipping, if we had loop error raised earlier we need to raise it now to halt the execution of this task
        if self._loop_eval_error is not None:
            raise self._loop_eval_error  # pylint: disable=raising-bad-type

        # if we ran into an error while setting up the PlayContext, raise it now
        if context_validation_error is not None:
            raise context_validation_error  # pylint: disable=raising-bad-type

        # if this task is a TaskInclude, we just return now with a success code so the
        # main thread can expand the task list for the given host
        if self._task.action in ('include', 'include_tasks'):
            include_args = self._task.args.copy()
            include_file = include_args.pop('_raw_params', None)
            if not include_file:
                return dict(failed=True, msg="No include file was specified to the include")

            include_file = templar.template(include_file)
            return dict(include=include_file, include_args=include_args)

        # if this task is a IncludeRole, we just return now with a success code so the main thread can expand the task list for the given host
        elif self._task.action == 'include_role':
            include_args = self._task.args.copy()
            return dict(include_args=include_args)

        # Now we do final validation on the task, which sets all fields to their final values.
        self._task.post_validate(templar=templar)
        if '_variable_params' in self._task.args:
            variable_params = self._task.args.pop('_variable_params')
            if isinstance(variable_params, dict):
                if C.INJECT_FACTS_AS_VARS:
                    display.warning("Using a variable for a task's 'args' is unsafe in some situations "
                                    "(see https://docs.ansible.com/ansible/devel/reference_appendices/faq.html#argsplat-unsafe)")
                variable_params.update(self._task.args)
                self._task.args = variable_params

        # get the connection and the handler for this execution
        if (not self._connection or
                not getattr(self._connection, 'connected', False) or
                self._play_context.remote_addr != self._connection._play_context.remote_addr):
            self._connection = self._get_connection(variables=variables, templar=templar)
        else:
            # if connection is reused, its _play_context is no longer valid and needs
            # to be replaced with the one templated above, in case other data changed
            self._connection._play_context = self._play_context

        self._set_connection_options(variables, templar)

        # get handler
        self._handler = self._get_action_handler(connection=self._connection, templar=templar)

        # Apply default params for action/module, if present
        self._task.args = get_action_args_with_defaults(self._task.action, self._task.args, self._task.module_defaults, templar)

        # And filter out any fields which were set to default(omit), and got the omit token value
        omit_token = variables.get('omit')
        if omit_token is not None:
            self._task.args = remove_omit(self._task.args, omit_token)

        # Read some values from the task, so that we can modify them if need be
        if self._task.until:
            retries = self._task.retries
            if retries is None:
                retries = 3
            elif retries <= 0:
                retries = 1
            else:
                retries += 1
        else:
            retries = 1

        delay = self._task.delay
        if delay < 0:
            delay = 1

        # make a copy of the job vars here, in case we need to update them
        # with the registered variable value later on when testing conditions
        vars_copy = variables.copy()

        display.debug("starting attempt loop")
        result = None
        for attempt in xrange(1, retries + 1):
            display.debug("running the handler")
            try:
                result = self._handler.run(task_vars=variables)
            except AnsibleActionSkip as e:
                return dict(skipped=True, msg=to_text(e))
            except AnsibleActionFail as e:
                return dict(failed=True, msg=to_text(e))
            except AnsibleConnectionFailure as e:
                return dict(unreachable=True, msg=to_text(e))
            display.debug("handler run complete")

            # preserve no log
            result["_ansible_no_log"] = self._play_context.no_log

            # update the local copy of vars with the registered value, if specified,
            # or any facts which may have been generated by the module execution
            if self._task.register:
                if not isidentifier(self._task.register):
                    raise AnsibleError("Invalid variable name in 'register' specified: '%s'" % self._task.register)

                vars_copy[self._task.register] = wrap_var(result)

            if self._task.async_val > 0:
                if self._task.poll > 0 and not result.get('skipped') and not result.get('failed'):
                    result = self._poll_async_result(result=result, templar=templar, task_vars=vars_copy)
                    # FIXME callback 'v2_runner_on_async_poll' here

                # ensure no log is preserved
                result["_ansible_no_log"] = self._play_context.no_log

            # helper methods for use below in evaluating changed/failed_when
            def _evaluate_changed_when_result(result):
                if self._task.changed_when is not None and self._task.changed_when:
                    cond = Conditional(loader=self._loader)
                    cond.when = self._task.changed_when
                    result['changed'] = cond.evaluate_conditional(templar, vars_copy)

            def _evaluate_failed_when_result(result):
                if self._task.failed_when:
                    cond = Conditional(loader=self._loader)
                    cond.when = self._task.failed_when
                    failed_when_result = cond.evaluate_conditional(templar, vars_copy)
                    result['failed_when_result'] = result['failed'] = failed_when_result
                else:
                    failed_when_result = False
                return failed_when_result

            if 'ansible_facts' in result:
                if self._task.action in ('set_fact', 'include_vars'):
                    vars_copy.update(result['ansible_facts'])
                else:
                    # TODO: cleaning of facts should eventually become part of taskresults instead of vars
                    af = wrap_var(result['ansible_facts'])
                    vars_copy.update(namespace_facts(af))
                    if C.INJECT_FACTS_AS_VARS:
                        vars_copy.update(clean_facts(af))

            # set the failed property if it was missing.
            if 'failed' not in result:
                # rc is here for backwards compatibility and modules that use it instead of 'failed'
                if 'rc' in result and result['rc'] not in [0, "0"]:
                    result['failed'] = True
                else:
                    result['failed'] = False

            # Make attempts and retries available early to allow their use in changed/failed_when
            if self._task.until:
                result['attempts'] = attempt

            # set the changed property if it was missing.
            if 'changed' not in result:
                result['changed'] = False

            # re-update the local copy of vars with the registered value, if specified,
            # or any facts which may have been generated by the module execution
            # This gives changed/failed_when access to additional recently modified
            # attributes of result
            if self._task.register:
                vars_copy[self._task.register] = wrap_var(result)

            # if we didn't skip this task, use the helpers to evaluate the changed/
            # failed_when properties
            if 'skipped' not in result:
                _evaluate_changed_when_result(result)
                _evaluate_failed_when_result(result)

            if retries > 1:
                cond = Conditional(loader=self._loader)
                cond.when = self._task.until
                if cond.evaluate_conditional(templar, vars_copy):
                    break
                else:
                    # no conditional check, or it failed, so sleep for the specified time
                    if attempt < retries:
                        result['_ansible_retry'] = True
                        result['retries'] = retries
                        display.debug('Retrying task, attempt %d of %d' % (attempt, retries))
                        self._final_q.put(TaskResult(self._host.name, self._task._uuid, result, task_fields=self._task.dump_attrs()), block=False)
                        time.sleep(delay)
                        self._handler = self._get_action_handler(connection=self._connection, templar=templar)
        else:
            if retries > 1:
                # we ran out of attempts, so mark the result as failed
                result['attempts'] = retries - 1
                result['failed'] = True

        # do the final update of the local variables here, for both registered
        # values and any facts which may have been created
        if self._task.register:
            variables[self._task.register] = wrap_var(result)

        if 'ansible_facts' in result:
            if self._task.action in ('set_fact', 'include_vars'):
                variables.update(result['ansible_facts'])
            else:
                # TODO: cleaning of facts should eventually become part of taskresults instead of vars
                af = wrap_var(result['ansible_facts'])
                variables.update(namespace_facts(af))
                if C.INJECT_FACTS_AS_VARS:
                    variables.update(clean_facts(af))

        # save the notification target in the result, if it was specified, as
        # this task may be running in a loop in which case the notification
        # may be item-specific, ie. "notify: service {{item}}"
        if self._task.notify is not None:
            result['_ansible_notify'] = self._task.notify

        # add the delegated vars to the result, so we can reference them
# on the results side without having to do any further templating
# FIXME: we only want a limited set of variables here, so this is currently
# hardcoded but should be possibly fixed if we want more or if
# there is another source of truth we can use
delegated_vars = variables.get('ansible_delegated_vars', dict()).get(self._task.delegate_to, dict()).copy()
if len(delegated_vars) > 0:
result["_ansible_delegated_vars"] = {'ansible_delegated_host': self._task.delegate_to}
for k in ('ansible_host', ):
result["_ansible_delegated_vars"][k] = delegated_vars.get(k)
# and return
display.debug("attempt loop complete, returning result")
return result
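The until/retries bookkeeping above can be condensed into a small sketch (function and variable names here are invented for illustration, not the real API): attempt the task up to `retries` times, stop as soon as the `until` conditional passes, and mark the result failed once attempts are exhausted.

```python
def run_with_until(run_once, until, retries=3):
    # Minimal sketch of the until/retries loop: attempt up to `retries`
    # times, return early when `until` passes, otherwise mark failed.
    result = {}
    for attempt in range(1, retries + 1):
        result = run_once(attempt)
        result['attempts'] = attempt
        if until(result):
            return result
    result['failed'] = True
    return result

outcome = run_with_until(
    run_once=lambda n: {'rc': 0 if n == 2 else 1},
    until=lambda r: r['rc'] == 0,
)
```

Here the second attempt succeeds, so `outcome` records `attempts: 2` and no failure flag.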
def _poll_async_result(self, result, templar, task_vars=None):
'''
Polls for the specified JID to be complete
'''
if task_vars is None:
task_vars = self._job_vars
async_jid = result.get('ansible_job_id')
if async_jid is None:
return dict(failed=True, msg="No job id was returned by the async task")
# Create a new pseudo-task to run the async_status module, and run
# that (with a sleep for "poll" seconds between each retry) until the
# async time limit is exceeded.
async_task = Task().load(dict(action='async_status jid=%s' % async_jid, environment=self._task.environment))
# FIXME: this is no longer the case, normal takes care of all, see if this can just be generalized
# Because this is an async task, the action handler is async. However,
# we need the 'normal' action handler for the status check, so get it
# now via the action_loader
async_handler = self._shared_loader_obj.action_loader.get(
'async_status',
task=async_task,
connection=self._connection,
play_context=self._play_context,
loader=self._loader,
templar=templar,
shared_loader_obj=self._shared_loader_obj,
)
time_left = self._task.async_val
while time_left > 0:
time.sleep(self._task.poll)
try:
async_result = async_handler.run(task_vars=task_vars)
# We do not bail out of the loop in cases where the failure
# is associated with a parsing error. The async_runner can
# have issues which result in a half-written/unparseable result
# file on disk, which manifests to the user as a timeout happening
# before it's time to timeout.
if (int(async_result.get('finished', 0)) == 1 or
('failed' in async_result and async_result.get('_ansible_parsed', False)) or
'skipped' in async_result):
break
except Exception as e:
# Connections can raise exceptions during polling (eg, network bounce, reboot); these should be non-fatal.
# On an exception, call the connection's reset method if it has one
# (eg, drop/recreate WinRM connection; some reused connections are in a broken state)
display.vvvv("Exception during async poll, retrying... (%s)" % to_text(e))
display.debug("Async poll exception was:\n%s" % to_text(traceback.format_exc()))
try:
async_handler._connection.reset()
except AttributeError:
pass
# Little hack to raise the exception if we've exhausted the timeout period
time_left -= self._task.poll
if time_left <= 0:
raise
else:
time_left -= self._task.poll
if int(async_result.get('finished', 0)) != 1:
if async_result.get('_ansible_parsed'):
return dict(failed=True, msg="async task did not complete within the requested time - %ss" % self._task.async_val)
else:
return dict(failed=True, msg="async task produced unparseable results", async_result=async_result)
else:
return async_result
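The time-budget logic in `_poll_async_result` boils down to the following sketch (a simplified stand-in, not the real handler): sleep `poll` seconds per check and give up once the overall time limit is consumed.

```python
import time

def poll_async(check, poll, time_limit, sleep=time.sleep):
    # Sketch of the polling budget: one check per `poll` seconds until
    # the job reports finished or `time_limit` seconds are exhausted.
    time_left = time_limit
    while time_left > 0:
        sleep(poll)
        status = check()
        if status.get('finished'):
            return status
        time_left -= poll
    return {'failed': True,
            'msg': 'async task did not complete within %ss' % time_limit}

calls = []
def fake_check():
    calls.append(1)
    return {'finished': len(calls) >= 2}

# sleep is stubbed out so the example runs instantly
res = poll_async(fake_check, poll=1, time_limit=5, sleep=lambda s: None)
```

With the stubbed check finishing on its second poll, the loop returns after two checks and never hits the timeout branch.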
def _get_become(self, name):
become = become_loader.get(name)
if not become:
raise AnsibleError("Invalid become method specified, could not find matching plugin: '%s'. "
"Use `ansible-doc -t become -l` to list available plugins." % name)
return become
def _get_connection(self, variables, templar):
'''
Reads the connection property for the host, and returns the
correct connection object from the list of connection plugins
'''
if self._task.delegate_to is not None:
# since we're delegating, we don't want to use interpreter values
# which would have been set for the original target host
for i in list(variables.keys()):
if isinstance(i, string_types) and i.startswith('ansible_') and i.endswith('_interpreter'):
del variables[i]
# now replace the interpreter values with those that may have come
# from the delegated-to host
delegated_vars = variables.get('ansible_delegated_vars', dict()).get(self._task.delegate_to, dict())
if isinstance(delegated_vars, dict):
for i in delegated_vars:
if isinstance(i, string_types) and i.startswith("ansible_") and i.endswith("_interpreter"):
variables[i] = delegated_vars[i]
# load connection
conn_type = self._play_context.connection
connection = self._shared_loader_obj.connection_loader.get(
conn_type,
self._play_context,
self._new_stdin,
task_uuid=self._task._uuid,
ansible_playbook_pid=to_text(os.getppid())
)
if not connection:
raise AnsibleError("the connection plugin '%s' was not found" % conn_type)
# load become plugin if needed
become_plugin = None
if self._play_context.become:
become_plugin = self._get_become(self._play_context.become_method)
if getattr(become_plugin, 'require_tty', False) and not getattr(connection, 'has_tty', False):
raise AnsibleError(
"The '%s' connection does not provide a tty which is required for the selected "
"become plugin: %s." % (conn_type, become_plugin.name)
)
try:
connection.set_become_plugin(become_plugin)
except AttributeError:
# Connection plugin does not support set_become_plugin
pass
# Backwards compat for connection plugins that don't support become plugins
# Just do this unconditionally for now, we could move it inside of the
# AttributeError above later
self._play_context.set_become_plugin(become_plugin)
# FIXME: remove once all plugins pull all data from self._options
self._play_context.set_attributes_from_plugin(connection)
if any(((connection.supports_persistence and C.USE_PERSISTENT_CONNECTIONS), connection.force_persistence)):
self._play_context.timeout = connection.get_option('persistent_command_timeout')
display.vvvv('attempting to start connection', host=self._play_context.remote_addr)
display.vvvv('using connection plugin %s' % connection.transport, host=self._play_context.remote_addr)
options = self._get_persistent_connection_options(connection, variables, templar)
socket_path = start_connection(self._play_context, options)
display.vvvv('local domain socket path is %s' % socket_path, host=self._play_context.remote_addr)
setattr(connection, '_socket_path', socket_path)
return connection
def _get_persistent_connection_options(self, connection, variables, templar):
final_vars = combine_vars(variables, variables.get('ansible_delegated_vars', dict()).get(self._task.delegate_to, dict()))
option_vars = C.config.get_plugin_vars('connection', connection._load_name)
plugin = connection._sub_plugin
if plugin['type'] != 'external':
option_vars.extend(C.config.get_plugin_vars(plugin['type'], plugin['name']))
options = {}
for k in option_vars:
if k in final_vars:
options[k] = templar.template(final_vars[k])
return options
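`_get_persistent_connection_options` is essentially a filter: of all host/task variables, only the ones the plugin declares as options survive, each templated on the way out. A minimal sketch (illustrative names; the real code uses `templar.template`):

```python
def collect_plugin_options(option_vars, variables, template=lambda v: v):
    # Keep only the variables the plugin has declared as options,
    # passing each through the (here: identity) templating step.
    return {k: template(variables[k]) for k in option_vars if k in variables}

opts = collect_plugin_options(
    option_vars=['ansible_host', 'ansible_port'],
    variables={'ansible_host': 'r1', 'ansible_port': 830, 'unrelated': 'x'},
)
```

Note that `unrelated` is dropped: anything the plugin did not declare never reaches it.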
def _set_plugin_options(self, plugin_type, variables, templar, task_keys):
try:
plugin = getattr(self._connection, '_%s' % plugin_type)
except AttributeError:
# Some plugins are assigned to private attrs, ``become`` is not
plugin = getattr(self._connection, plugin_type)
option_vars = C.config.get_plugin_vars(plugin_type, plugin._load_name)
options = {}
for k in option_vars:
if k in variables:
options[k] = templar.template(variables[k])
# TODO move to task method?
plugin.set_options(task_keys=task_keys, var_options=options)
def _set_connection_options(self, variables, templar):
# Keep the pre-delegate values for these keys
PRESERVE_ORIG = ('inventory_hostname',)
# create copy with delegation built in
final_vars = combine_vars(
variables,
variables.get('ansible_delegated_vars', {}).get(self._task.delegate_to, {})
)
# grab list of usable vars for this plugin
option_vars = C.config.get_plugin_vars('connection', self._connection._load_name)
# create dict of 'templated vars'
options = {'_extras': {}}
for k in option_vars:
if k in PRESERVE_ORIG:
options[k] = templar.template(variables[k])
elif k in final_vars:
options[k] = templar.template(final_vars[k])
# add extras if plugin supports them
if getattr(self._connection, 'allow_extras', False):
for k in final_vars:
if k.startswith('ansible_%s_' % self._connection._load_name) and k not in options:
options['_extras'][k] = templar.template(final_vars[k])
task_keys = self._task.dump_attrs()
# set options with 'templated vars' specific to this plugin and dependent ones
self._connection.set_options(task_keys=task_keys, var_options=options)
self._set_plugin_options('shell', final_vars, templar, task_keys)
if self._connection.become is not None:
# FIXME: find alternate route to provide passwords,
# keep out of play objects to avoid accidental disclosure
task_keys['become_pass'] = self._play_context.become_pass
self._set_plugin_options('become', final_vars, templar, task_keys)
# FOR BACKWARDS COMPAT:
for option in ('become_user', 'become_flags', 'become_exe'):
try:
setattr(self._play_context, option, self._connection.become.get_option(option))
except KeyError:
pass # some plugins don't support all base flags
self._play_context.prompt = self._connection.become.prompt
def _get_action_handler(self, connection, templar):
'''
Returns the correct action plugin to handle the requested task action
'''
module_prefix = self._task.action.split('_')[0]
collections = self._task.collections
# let action plugin override module, fallback to 'normal' action plugin otherwise
if self._shared_loader_obj.action_loader.has_plugin(self._task.action, collection_list=collections):
handler_name = self._task.action
# FIXME: is this code path even live anymore? check w/ networking folks; it trips sometimes when it shouldn't
elif all((module_prefix in C.NETWORK_GROUP_MODULES, module_prefix in self._shared_loader_obj.action_loader)):
handler_name = module_prefix
else:
# FUTURE: once we're comfortable with collections impl, preface this action with ansible.builtin so it can't be hijacked
handler_name = 'normal'
collections = None # until then, we don't want the task's collection list to be consulted; use the builtin
handler = self._shared_loader_obj.action_loader.get(
handler_name,
task=self._task,
connection=connection,
play_context=self._play_context,
loader=self._loader,
templar=templar,
shared_loader_obj=self._shared_loader_obj,
collection_list=collections
)
if not handler:
raise AnsibleError("the handler '%s' was not found" % handler_name)
return handler
def start_connection(play_context, variables):
'''
Starts the persistent connection
'''
candidate_paths = [C.ANSIBLE_CONNECTION_PATH or os.path.dirname(sys.argv[0])]
candidate_paths.extend(os.environ['PATH'].split(os.pathsep))
for dirname in candidate_paths:
ansible_connection = os.path.join(dirname, 'ansible-connection')
if os.path.isfile(ansible_connection):
break
else:
raise AnsibleError("Unable to find location of 'ansible-connection'. "
"Please set or check the value of ANSIBLE_CONNECTION_PATH")
env = os.environ.copy()
env.update({
# HACK; most of these paths may change during the controller's lifetime
# (eg, due to late dynamic role includes, multi-playbook execution), without a way
# to invalidate/update, ansible-connection won't always see the same plugins the controller
# can.
'ANSIBLE_BECOME_PLUGINS': become_loader.print_paths(),
'ANSIBLE_CLICONF_PLUGINS': cliconf_loader.print_paths(),
'ANSIBLE_COLLECTIONS_PATHS': os.pathsep.join(AnsibleCollectionLoader().n_collection_paths),
'ANSIBLE_CONNECTION_PLUGINS': connection_loader.print_paths(),
'ANSIBLE_HTTPAPI_PLUGINS': httpapi_loader.print_paths(),
'ANSIBLE_NETCONF_PLUGINS': netconf_loader.print_paths(),
'ANSIBLE_TERMINAL_PLUGINS': terminal_loader.print_paths(),
})
python = sys.executable
master, slave = pty.openpty()
p = subprocess.Popen(
[python, ansible_connection, to_text(os.getppid())],
stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env
)
os.close(slave)
# We need to set the pty into noncanonical mode. This ensures that we
# can receive lines longer than 4095 characters (plus newline) without
# truncating.
old = termios.tcgetattr(master)
new = termios.tcgetattr(master)
new[3] = new[3] & ~termios.ICANON
try:
termios.tcsetattr(master, termios.TCSANOW, new)
write_to_file_descriptor(master, variables)
write_to_file_descriptor(master, play_context.serialize())
(stdout, stderr) = p.communicate()
finally:
termios.tcsetattr(master, termios.TCSANOW, old)
os.close(master)
if p.returncode == 0:
result = json.loads(to_text(stdout, errors='surrogate_then_replace'))
else:
try:
result = json.loads(to_text(stderr, errors='surrogate_then_replace'))
except getattr(json.decoder, 'JSONDecodeError', ValueError):
# JSONDecodeError only available on Python 3.5+
result = {'error': to_text(stderr, errors='surrogate_then_replace')}
if 'messages' in result:
for level, message in result['messages']:
if level == 'log':
display.display(message, log_only=True)
elif level in ('debug', 'v', 'vv', 'vvv', 'vvvv', 'vvvvv', 'vvvvvv'):
getattr(display, level)(message, host=play_context.remote_addr)
else:
if hasattr(display, level):
getattr(display, level)(message)
else:
display.vvvv(message, host=play_context.remote_addr)
if 'error' in result:
if play_context.verbosity > 2:
if result.get('exception'):
msg = "The full traceback is:\n" + result['exception']
display.display(msg, color=C.COLOR_ERROR)
raise AnsibleError(result['error'])
return result['socket_path']
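The result decoding at the end of `start_connection` follows a simple fallback chain; a sketch (names invented for illustration): parse stdout as JSON on success, otherwise try stderr as JSON, otherwise wrap the raw stderr text in an error dict.

```python
import json

def parse_connection_result(returncode, stdout, stderr):
    # Success: the connection process prints its result as JSON on stdout.
    if returncode == 0:
        return json.loads(stdout)
    # Failure: stderr may itself be a JSON error document...
    try:
        return json.loads(stderr)
    except ValueError:  # JSONDecodeError subclasses ValueError on Python 3
        # ...or plain text, which is wrapped as-is.
        return {'error': stderr}

ok = parse_connection_result(0, '{"socket_path": "/tmp/x.sock"}', '')
bad = parse_connection_result(1, '', 'not json at all')
```

This mirrors why the original code catches `JSONDecodeError` via `getattr(json.decoder, 'JSONDecodeError', ValueError)`: the dedicated exception class only exists on Python 3.5+.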
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,414 |
connection/shell/become vars are removed when in a loop
|
##### SUMMARY
When a variable defined in a task's `vars:` directive relates to the connection, shell, or become plugin used by that task, it is no longer defined once the task runs in a loop. The variable is present for the first iteration, but subsequent iterations remove it, causing all sorts of unexpected behaviour.
This is an unfortunate side effect of https://github.com/ansible/ansible/pull/59024, which simply removes these variables from the task vars.
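The failure mode can be reproduced in miniature (a deliberately simplified model, not the actual task executor code): if plugin-option keys are popped from the shared task-vars dict rather than a per-iteration copy, the second iteration no longer sees them.

```python
def run_iteration_buggy(task_vars, option_vars):
    # Mimics the regression: option vars are consumed (popped) from the
    # SHARED dict, so later iterations find them missing.
    return {k: task_vars.pop(k) for k in option_vars if k in task_vars}

shared = {'ansible_remote_tmp': '/tmp/test1', 'item': 1}
first = run_iteration_buggy(shared, ['ansible_remote_tmp'])
second = run_iteration_buggy(shared, ['ansible_remote_tmp'])
```

After the first call, `shared` has lost `ansible_remote_tmp`, so the second call gets an empty option dict — exactly the symptom in the repro playbook below.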
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
task_executor
##### ANSIBLE VERSION
```paste below
devel
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
Run the following playbook
```yaml
---
- hosts: localhost
gather_facts: no
tasks:
- ping:
vars:
ansible_remote_tmp: /tmp/test1
with_items:
- 1
- 2
- debug:
var: ansible_remote_tmp
vars:
ansible_remote_tmp: /tmp/test1
with_items:
- 1
- 2
```
##### EXPECTED RESULTS
Both loop iterations use `/tmp/test1` as the remote temp directory that stores the AnsiballZ payload.
##### ACTUAL RESULTS
Only the first iteration uses `/tmp/test1`; the second falls back to the default because `ansible_remote_tmp` from the task vars is no longer defined.
```paste below
ansible-playbook 2.9.0.dev0
config file = /home/jborean/dev/ansible-tester/ansible.cfg
configured module search path = ['/home/jborean/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/jborean/dev/ansible/lib/ansible
executable location = /home/jborean/dev/ansible/bin/ansible-playbook
python version = 3.7.4 (default, Jul 10 2019, 15:18:20) [GCC 9.1.1 20190503 (Red Hat 9.1.1-1)]
Using /home/jborean/dev/ansible-tester/ansible.cfg as config file
host_list declined parsing /home/jborean/dev/ansible-tester/inventory.ini as it did not pass its verify_file() method
auto declined parsing /home/jborean/dev/ansible-tester/inventory.ini as it did not pass its verify_file() method
yaml declined parsing /home/jborean/dev/ansible-tester/inventory.ini as it did not pass its verify_file() method
Parsed /home/jborean/dev/ansible-tester/inventory.ini inventory source with ini plugin
PLAYBOOK: test.yml **************************************************************************************
1 plays in test.yml
PLAY [localhost] ****************************************************************************************
META: ran handlers
TASK [ping] *********************************************************************************************
task path: /home/jborean/dev/ansible-tester/test.yml:5
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: jborean
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /tmp/test1/ansible-tmp-1563847217.167148-225396898656963 `" && echo ansible-tmp-1563847217.167148-225396898656963="` echo /tmp/test1/ansible-tmp-1563847217.167148-225396898656963 `" ) && sleep 0'
Using module file /home/jborean/dev/ansible/lib/ansible/modules/system/ping.py
<127.0.0.1> PUT /home/jborean/.ansible/tmp/ansible-local-245768n0yb3g5/tmpbf7ibfe5 TO /tmp/test1/ansible-tmp-1563847217.167148-225396898656963/AnsiballZ_ping.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /tmp/test1/ansible-tmp-1563847217.167148-225396898656963/ /tmp/test1/ansible-tmp-1563847217.167148-225396898656963/AnsiballZ_ping.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/home/jborean/venvs/ansible-py37/bin/python /tmp/test1/ansible-tmp-1563847217.167148-225396898656963/AnsiballZ_ping.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /tmp/test1/ansible-tmp-1563847217.167148-225396898656963/ > /dev/null 2>&1 && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'echo ~jborean && sleep 0'
ok: [localhost] => (item=1) => {
"ansible_loop_var": "item",
"changed": false,
"invocation": {
"module_args": {
"data": "pong"
}
},
"item": 1,
"ping": "pong"
}
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/jborean/.ansible/tmp/ansible-tmp-1563847217.3545752-117775520192534 `" && echo ansible-tmp-1563847217.3545752-117775520192534="` echo /home/jborean/.ansible/tmp/ansible-tmp-1563847217.3545752-117775520192534 `" ) && sleep 0'
Using module file /home/jborean/dev/ansible/lib/ansible/modules/system/ping.py
<127.0.0.1> PUT /home/jborean/.ansible/tmp/ansible-local-245768n0yb3g5/tmpl2ss9stw TO /home/jborean/.ansible/tmp/ansible-tmp-1563847217.3545752-117775520192534/AnsiballZ_ping.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/jborean/.ansible/tmp/ansible-tmp-1563847217.3545752-117775520192534/ /home/jborean/.ansible/tmp/ansible-tmp-1563847217.3545752-117775520192534/AnsiballZ_ping.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/home/jborean/venvs/ansible-py37/bin/python /home/jborean/.ansible/tmp/ansible-tmp-1563847217.3545752-117775520192534/AnsiballZ_ping.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/jborean/.ansible/tmp/ansible-tmp-1563847217.3545752-117775520192534/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => (item=2) => {
"ansible_loop_var": "item",
"changed": false,
"invocation": {
"module_args": {
"data": "pong"
}
},
"item": 2,
"ping": "pong"
}
TASK [debug] ********************************************************************************************
task path: /home/jborean/dev/ansible-tester/test.yml:12
ok: [localhost] => (item=1) => {
"ansible_loop_var": "item",
"ansible_remote_tmp": "/tmp/test1",
"item": 1
}
ok: [localhost] => (item=2) => {
"ansible_loop_var": "item",
"ansible_remote_tmp": "VARIABLE IS NOT DEFINED!: 'ansible_remote_tmp' is undefined",
"item": 2
}
META: ran handlers
META: ran handlers
PLAY RECAP **********************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
This is because, after the first loop iteration, all task vars that are options for the task's connection, shell, or become plugin are removed.
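A hedged sketch of the fix direction (the actual patch is in the linked PR; this only shows the shape of it): read the options from a per-iteration copy so the shared task vars survive every pass of the loop.

```python
def run_iteration_fixed(task_vars, option_vars):
    # Work on a copy; the caller's dict is left untouched between iterations.
    local_vars = dict(task_vars)
    return {k: local_vars[k] for k in option_vars if k in local_vars}

shared = {'ansible_remote_tmp': '/tmp/test1'}
first = run_iteration_fixed(shared, ['ansible_remote_tmp'])
second = run_iteration_fixed(shared, ['ansible_remote_tmp'])
```

Every iteration now resolves the same option value, and the shared dict is unchanged afterwards.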
|
https://github.com/ansible/ansible/issues/59414
|
https://github.com/ansible/ansible/pull/59426
|
5a7f579d86b52754d5634fd6719882158c01ec18
|
1010363c0bebaf4ef3c34ac858d74de5ca01fc7b
| 2019-07-23T02:04:07Z |
python
| 2019-07-24T09:35:14Z |
test/integration/targets/loops/tasks/main.yml
|
#
# loop_control/pause
#
- name: Measure time before
shell: date +%s
register: before
- debug:
var: i
with_sequence: count=3
loop_control:
loop_var: i
pause: 2
- name: Measure time after
shell: date +%s
register: after
# since there are 3 rounds with 2 seconds between them, it should last at least 4 seconds
# we do not test the upper bound, since CI can lag significantly
- assert:
that:
- '(after.stdout |int) - (before.stdout|int) >= 4'
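The lower bound asserted above is just arithmetic: `loop_control.pause` is applied between consecutive iterations, so 3 items with a 2-second pause give 2 gaps, i.e. at least 4 seconds. A quick check of that bound:

```python
def min_loop_duration(count, pause):
    # Lower bound on wall time: the pause sits between iterations,
    # so there are (count - 1) gaps.
    return (count - 1) * pause
```

The same formula gives the 1.2-second bound used by the subsecond-pause test below (3 items, 0.6s pause).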
- name: test subsecond pause
block:
- name: Measure time before loop with .5s pause
set_fact:
times: "{{times|default([]) + [ lookup('pipe','date +%s.%3N') ]}}"
with_sequence: count=3
loop_control:
pause: 0.6
- name: ensure lag, since there are 3 rounds and 0.6 seconds between them, it should last at least 1.2 seconds, allowing leeway due to CI lag
assert:
that:
- tdiff|float >= 1.2
- tdiff|int < 3
vars:
tdiff: '{{ times[2]|float - times[0]|float }}'
when:
- ansible_facts['distribution'] not in ("MacOSX", "FreeBSD")
#
# Tests of loop syntax with args
#
- name: Test that with_list works with a list
ping:
data: '{{ item }}'
with_list:
- 'Hello World'
- 'Olá Mundo'
register: results
- name: Assert that we ran the module twice with the correct strings
assert:
that:
- 'results["results"][0]["ping"] == "Hello World"'
- 'results["results"][1]["ping"] == "Olá Mundo"'
- name: Test that with_list works with a list inside a variable
ping:
data: '{{ item }}'
with_list: '{{ phrases }}'
register: results2
- name: Assert that we ran the module twice with the correct strings
assert:
that:
- 'results2["results"][0]["ping"] == "Hello World"'
- 'results2["results"][1]["ping"] == "Olá Mundo"'
- name: Test that loop works with a manual list
ping:
data: '{{ item }}'
loop:
- 'Hello World'
- 'Olá Mundo'
register: results3
- name: Assert that we ran the module twice with the correct strings
assert:
that:
- 'results3["results"][0]["ping"] == "Hello World"'
- 'results3["results"][1]["ping"] == "Olá Mundo"'
- name: Test that loop works with a list in a variable
ping:
data: '{{ item }}'
loop: '{{ phrases }}'
register: results4
- name: Assert that we ran the module twice with the correct strings
assert:
that:
- 'results4["results"][0]["ping"] == "Hello World"'
- 'results4["results"][1]["ping"] == "Olá Mundo"'
- name: Test that loop works with a list via the list lookup
ping:
data: '{{ item }}'
loop: '{{ lookup("list", "Hello World", "Olá Mundo", wantlist=True) }}'
register: results5
- name: Assert that we ran the module twice with the correct strings
assert:
that:
- 'results5["results"][0]["ping"] == "Hello World"'
- 'results5["results"][1]["ping"] == "Olá Mundo"'
- name: Test that loop works with a list in a variable via the list lookup
ping:
data: '{{ item }}'
loop: '{{ lookup("list", wantlist=True, *phrases) }}'
register: results6
- name: Assert that we ran the module twice with the correct strings
assert:
that:
- 'results6["results"][0]["ping"] == "Hello World"'
- 'results6["results"][1]["ping"] == "Olá Mundo"'
- name: Test that loop works with a list via the query lookup
ping:
data: '{{ item }}'
loop: '{{ query("list", "Hello World", "Olá Mundo") }}'
register: results7
- name: Assert that we ran the module twice with the correct strings
assert:
that:
- 'results7["results"][0]["ping"] == "Hello World"'
- 'results7["results"][1]["ping"] == "Olá Mundo"'
- name: Test that loop works with a list in a variable via the query lookup
ping:
data: '{{ item }}'
loop: '{{ q("list", *phrases) }}'
register: results8
- name: Assert that we ran the module twice with the correct strings
assert:
that:
- 'results8["results"][0]["ping"] == "Hello World"'
- 'results8["results"][1]["ping"] == "Olá Mundo"'
- name: Test that loop works with a list and keyword args
ping:
data: '{{ item }}'
loop: '{{ q("file", "data1.txt", "data2.txt", lstrip=True) }}'
register: results9
- name: Assert that we ran the module twice with the correct strings
assert:
that:
- 'results9["results"][0]["ping"] == "Hello World"'
- 'results9["results"][1]["ping"] == "Olá Mundo"'
- name: Test that loop works with a list in variable and keyword args
ping:
data: '{{ item }}'
loop: '{{ q("file", lstrip=True, *filenames) }}'
register: results10
- name: Assert that we ran the module twice with the correct strings
assert:
that:
- 'results10["results"][0]["ping"] == "Hello World"'
- 'results10["results"][1]["ping"] == "Olá Mundo"'
#
# loop_control/index_var
#
- name: check that the index var is created and increments as expected
assert:
that: my_idx == item|int
with_sequence: start=0 count=3
loop_control:
index_var: my_idx
- name: check that value of index var matches position of current item in source list
assert:
that: 'test_var.index(item) == my_idx'
vars:
test_var: ['a', 'b', 'c']
with_items: "{{ test_var }}"
loop_control:
index_var: my_idx
- name: check index var with included tasks file
include_tasks: index_var_tasks.yml
with_sequence: start=0 count=3
loop_control:
index_var: my_idx
# The following test cases are to ensure that we don't have a regression on
# GitHub Issue https://github.com/ansible/ansible/issues/35481
#
# This should execute and not cause a RuntimeError
- debug:
msg: "with_dict passed a list: {{item}}"
with_dict: "{{ a_list }}"
register: with_dict_passed_a_list
ignore_errors: True
- assert:
that:
- with_dict_passed_a_list is failed
- debug:
msg: "with_list passed a dict: {{item}}"
with_list: "{{ a_dict }}"
register: with_list_passed_a_dict
ignore_errors: True
- assert:
that:
- with_list_passed_a_dict is failed
- debug:
var: "item"
loop:
- "{{ ansible_search_path }}"
register: loop_search_path
- assert:
that:
- ansible_search_path == loop_search_path.results.0.item
# https://github.com/ansible/ansible/issues/45189
- name: with_X conditional delegate_to shortcircuit on templating error
debug:
msg: "loop"
when: false
delegate_to: localhost
with_list: "{{ fake_var }}"
register: result
failed_when: result is not skipped
- name: loop conditional delegate_to shortcircuit on templating error
debug:
msg: "loop"
when: false
delegate_to: localhost
loop: "{{ fake_var }}"
register: result
failed_when: result is not skipped
- name: Loop on literal empty list
debug:
loop: []
register: literal_empty_list
failed_when: literal_empty_list is not skipped
# https://github.com/ansible/ansible/issues/47372
- name: Loop unsafe list
debug:
var: item
with_items: "{{ things|list|unique }}"
vars:
things:
- !unsafe foo
- !unsafe bar
- name: extended loop info
assert:
that:
- ansible_loop.nextitem == 'orange'
- ansible_loop.index == 1
- ansible_loop.index0 == 0
- ansible_loop.first
- not ansible_loop.last
- ansible_loop.previtem is undefined
- ansible_loop.allitems == ['apple', 'orange', 'banana']
- ansible_loop.revindex == 3
- ansible_loop.revindex0 == 2
- ansible_loop.length == 3
loop:
- apple
- orange
- banana
loop_control:
extended: true
when: item == 'apple'
- name: extended loop info 2
assert:
that:
- ansible_loop.nextitem == 'banana'
- ansible_loop.index == 2
- ansible_loop.index0 == 1
- not ansible_loop.first
- not ansible_loop.last
- ansible_loop.previtem == 'apple'
- ansible_loop.allitems == ['apple', 'orange', 'banana']
- ansible_loop.revindex == 2
- ansible_loop.revindex0 == 1
- ansible_loop.length == 3
loop:
- apple
- orange
- banana
loop_control:
extended: true
when: item == 'orange'
- name: extended loop info 3
assert:
that:
- ansible_loop.nextitem is undefined
- ansible_loop.index == 3
- ansible_loop.index0 == 2
- not ansible_loop.first
- ansible_loop.last
- ansible_loop.previtem == 'orange'
- ansible_loop.allitems == ['apple', 'orange', 'banana']
- ansible_loop.revindex == 1
- ansible_loop.revindex0 == 0
- ansible_loop.length == 3
loop:
- apple
- orange
- banana
loop_control:
extended: true
when: item == 'banana'
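The `ansible_loop` fields asserted across the three tasks above are pure arithmetic over the item list; a sketch for zero-based position `i` (using `None` where Ansible leaves the field undefined):

```python
def extended_loop_info(items, i):
    # Sketch of ansible_loop's fields for the item at zero-based index i.
    n = len(items)
    return {
        'index': i + 1, 'index0': i,
        'revindex': n - i, 'revindex0': n - i - 1,
        'first': i == 0, 'last': i == n - 1,
        'previtem': items[i - 1] if i > 0 else None,
        'nextitem': items[i + 1] if i < n - 1 else None,
        'length': n, 'allitems': list(items),
    }

info = extended_loop_info(['apple', 'orange', 'banana'], 1)
```

For `orange` (index0 == 1) this reproduces the values checked in "extended loop info 2".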
- name: Validate the loop_var name
assert:
that:
- ansible_loop_var == 'alvin'
loop:
- 1
loop_control:
loop_var: alvin
# https://github.com/ansible/ansible/issues/58820
- name: Test using templated loop_var inside include_tasks
include_tasks: templated_loop_var_tasks.yml
loop:
- value
loop_control:
loop_var: "{{ loop_var_name }}"
vars:
loop_var_name: templated_loop_var_name
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,326 |
junos_ping fail when using netconf connection plugin
|
##### SUMMARY
The junos_ping module fails when using the netconf connection plugin, but works without problems when using network_cli.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
junos_ping module
##### ANSIBLE VERSION
```
ansible 2.8.1
config file = /vagrant/net_automation_cookbook/ch4_junos/ansible.cfg
configured module search path = ['/home/vagrant/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/vagrant/.local/lib/python3.5/site-packages/ansible
executable location = /home/vagrant/.local/bin/ansible
python version = 3.5.2 (default, Nov 12 2018, 13:43:14) [GCC 5.4.0 20160609]
```
##### CONFIGURATION
```
DEFAULT_GATHERING(/vagrant/net_automation_cookbook/ch4_junos/ansible.cfg) = explicit
DEFAULT_HOST_LIST(/vagrant/net_automation_cookbook/ch4_junos/ansible.cfg) = ['/vagrant/net_automation_cookbook/ch4_junos/hosts']
HOST_KEY_CHECKING(/vagrant/net_automation_cookbook/ch4_junos/ansible.cfg) = False
RETRY_FILES_ENABLED(/vagrant/net_automation_cookbook/ch4_junos/ansible.cfg) = False
```
##### OS / ENVIRONMENT
test against Junos 14.1R4.8 and JUNOS 17.1R1.8
##### STEPS TO REPRODUCE
When using the netconf plugin, the junos_ping module fails with the failure output shown below.
```
- name: "Validate Core Reachability"
hosts: junos
vars:
lo_ip:
mxpe01: 10.100.1.1/32
mxpe02: 10.100.1.2/32
tasks:
- name: "Ping Across All Loopback Interfaces"
junos_ping:
dest: "{{ item.value.split('/')[0] }}"
interface: lo0.0
size: 512
with_dict: "{{lo_ip}}"
```
##### EXPECTED RESULTS
When using the network_cli plugin, the same playbook works as expected:
```yaml
- name: "Validate Core Reachability"
  hosts: junos
  vars:
    lo_ip:
      mxpe01: 10.100.1.1/32
      mxpe02: 10.100.1.2/32
  tasks:
    - name: "Ping Across All Loopback Interfaces"
      junos_ping:
        dest: "{{ item.value.split('/')[0] }}"
        interface: lo0.0
        size: 512
      with_dict: "{{lo_ip}}"
      vars:
        ansible_connection: network_cli
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
failed: [mxpe02] (item={'key': 'mxpe02', 'value': '10.100.1.2/32'}) => {"ansible_loop_var": "item", "changed": false, "item": {"key": "mxpe02", "value": "10.100.1.2/32"}, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_junos_ping_payload_ee3oycmt/ansible_junos_ping_payload.zip/ansible/module_utils/network/common/netconf.py\", line 83, in parse_rpc_error\n File \"src/lxml/etree.pyx\", line 3222, in lxml.etree.fromstring\n File \"src/lxml/parser.pxi\", line 1877, in lxml.etree._parseMemoryDocument\n File \"src/lxml/parser.pxi\", line 1765, in lxml.etree._parseDoc\n File \"src/lxml/parser.pxi\", line 1127, in lxml.etree._BaseParser._parseDoc\n File \"src/lxml/parser.pxi\", line 601, in lxml.etree._ParserContext._handleParseResultDoc\n File \"src/lxml/parser.pxi\", line 711, in lxml.etree._handleParseResult\n File \"src/lxml/parser.pxi\", line 640, in lxml.etree._raiseParseError\n File \"<string>\", line 1\nlxml.etree.XMLSyntaxError: Start tag expected, '<' not found, line 1, column 1\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/home/vagrant/.ansible/tmp/ansible-local-6820j09yeqgo/ansible-tmp-1563527319.039324-40487203490951/AnsiballZ_junos_ping.py\", line 114, in <module>\n _ansiballz_main()\n File \"/home/vagrant/.ansible/tmp/ansible-local-6820j09yeqgo/ansible-tmp-1563527319.039324-40487203490951/AnsiballZ_junos_ping.py\", line 106, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/home/vagrant/.ansible/tmp/ansible-local-6820j09yeqgo/ansible-tmp-1563527319.039324-40487203490951/AnsiballZ_junos_ping.py\", line 49, in invoke_module\n imp.load_module('__main__', mod, module, MOD_DESC)\n File \"/usr/lib/python3.5/imp.py\", line 234, in load_module\n return load_source(name, filename, file)\n File \"/usr/lib/python3.5/imp.py\", line 170, in load_source\n module = _exec(spec, sys.modules[name])\n File \"<frozen importlib._bootstrap>\", 
line 626, in _exec\n File \"<frozen importlib._bootstrap_external>\", line 665, in exec_module\n File \"<frozen importlib._bootstrap>\", line 222, in _call_with_frames_removed\n File \"/tmp/ansible_junos_ping_payload_ee3oycmt/__main__.py\", line 231, in <module>\n File \"/tmp/ansible_junos_ping_payload_ee3oycmt/__main__.py\", line 159, in main\n File \"/tmp/ansible_junos_ping_payload_ee3oycmt/ansible_junos_ping_payload.zip/ansible/module_utils/network/common/netconf.py\", line 76, in __rpc__\n File \"/tmp/ansible_junos_ping_payload_ee3oycmt/ansible_junos_ping_payload.zip/ansible/module_utils/network/common/netconf.py\", line 108, in parse_rpc_error\nansible.module_utils.connection.ConnectionError: b\"Start tag expected, '<' not found, line 1, column 1 (<string>, line 1)\"\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible.module_utils.connection.ConnectionError: b"Start tag expected, '<' not found, line 1, column 1 (<string>, line 1)"
```
|
https://github.com/ansible/ansible/issues/59326
|
https://github.com/ansible/ansible/pull/59534
|
f2b0bfd4aaa3c71be4c0fb8f566451936aeca629
|
119f2b873a863d05d38a29599ade2ac09f8c6f01
| 2019-07-19T22:13:43Z |
python
| 2019-07-24T17:41:27Z |
lib/ansible/modules/network/junos/junos_ping.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2019, Ansible by Red Hat, inc
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'network'}
DOCUMENTATION = """
---
module: junos_ping
short_description: Tests reachability using ping from devices running Juniper JUNOS
description:
- Tests reachability using ping from devices running Juniper JUNOS to a remote destination.
- Tested against Junos (17.3R1.10)
- For a general purpose network module, see the M(net_ping) module.
- For Windows targets, use the M(win_ping) module instead.
- For targets running Python, use the M(ping) module instead.
author:
- Nilashish Chakraborty (@NilashishC)
version_added: '2.8'
options:
dest:
description:
- The IP Address or hostname (resolvable by the device) of the remote node.
required: true
count:
description:
- Number of packets to send to check reachability.
type: int
default: 5
source:
description:
- The IP Address to use while sending the ping packet(s).
interface:
description:
- The source interface to use while sending the ping packet(s).
ttl:
description:
- The time-to-live value for the ICMP packet(s).
type: int
size:
description:
- Determines the size (in bytes) of the ping packet(s).
type: int
interval:
description:
- Determines the interval (in seconds) between consecutive pings.
type: int
state:
description:
- Determines if the expected result is success or fail.
choices: [ absent, present ]
default: present
notes:
- For a general purpose network module, see the M(net_ping) module.
- For Windows targets, use the M(win_ping) module instead.
- For targets running Python, use the M(ping) module instead.
extends_documentation_fragment: junos
"""
EXAMPLES = """
- name: Test reachability to 10.10.10.10
junos_ping:
dest: 10.10.10.10
- name: Test reachability to 10.20.20.20 using source and size set
junos_ping:
dest: 10.20.20.20
size: 1024
ttl: 128
- name: Test unreachability to 10.30.30.30 using interval
junos_ping:
dest: 10.30.30.30
interval: 3
state: absent
- name: Test reachability to 10.40.40.40 setting count and interface
junos_ping:
dest: 10.40.40.40
interface: fxp0
count: 20
size: 512
"""
RETURN = """
commands:
description: List of commands sent.
returned: always
type: list
sample: ["ping 10.8.38.44 count 10 source 10.8.38.38 ttl 128"]
packet_loss:
description: Percentage of packets lost.
returned: always
type: str
sample: "0%"
packets_rx:
description: Packets successfully received.
returned: always
type: int
sample: 20
packets_tx:
description: Packets successfully transmitted.
returned: always
type: int
sample: 20
rtt:
description: The round trip time (RTT) stats.
returned: when ping succeeds
type: dict
sample: {"avg": 2, "max": 8, "min": 1, "stddev": 24}
"""
import re
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.network.junos.junos import junos_argument_spec, get_connection
def main():
""" main entry point for module execution
"""
argument_spec = dict(
count=dict(type="int", default=5),
dest=dict(type="str", required=True),
source=dict(),
interface=dict(),
ttl=dict(type='int'),
size=dict(type='int'),
interval=dict(type='int'),
state=dict(type="str", choices=["absent", "present"], default="present"),
)
argument_spec.update(junos_argument_spec)
module = AnsibleModule(argument_spec=argument_spec)
count = module.params["count"]
dest = module.params["dest"]
source = module.params["source"]
size = module.params["size"]
ttl = module.params["ttl"]
interval = module.params["interval"]
interface = module.params['interface']
warnings = list()
results = {'changed': False}
if warnings:
results["warnings"] = warnings
results["commands"] = build_ping(dest, count, size, interval, source, ttl, interface)
conn = get_connection(module)
ping_results = conn.get(results["commands"])
rtt_info, rate_info = None, None
for line in ping_results.split("\n"):
if line.startswith('round-trip'):
rtt_info = line
if line.startswith('%s packets transmitted' % count):
rate_info = line
if rtt_info:
rtt = parse_rtt(rtt_info)
for k, v in rtt.items():
if rtt[k] is not None:
rtt[k] = float(v)
results["rtt"] = rtt
pkt_loss, rx, tx = parse_rate(rate_info)
results["packet_loss"] = str(pkt_loss) + "%"
results["packets_rx"] = int(rx)
results["packets_tx"] = int(tx)
validate_results(module, pkt_loss, results)
module.exit_json(**results)
def build_ping(dest, count, size=None, interval=None, source=None, ttl=None, interface=None):
cmd = "ping {0} count {1}".format(dest, str(count))
if source:
cmd += " source {0}".format(source)
if interface:
cmd += " interface {0}".format(interface)
if ttl:
cmd += " ttl {0}".format(str(ttl))
if size:
cmd += " size {0}".format(str(size))
if interval:
cmd += " interval {0}".format(str(interval))
return cmd
def parse_rate(rate_info):
rate_re = re.compile(
r"(?P<tx>\d*) packets transmitted,(?:\s*)(?P<rx>\d*) packets received,(?:\s*)(?P<pkt_loss>\d*)% packet loss")
rate = rate_re.match(rate_info)
return rate.group("pkt_loss"), rate.group("rx"), rate.group("tx")
def parse_rtt(rtt_info):
rtt_re = re.compile(
r"round-trip (?:.*)=(?:\s*)(?P<min>\d+\.\d+).(?:\d*)/(?P<avg>\d+\.\d+).(?:\d*)/(?P<max>\d*\.\d*).(?:\d*)/(?P<stddev>\d*\.\d*)")
rtt = rtt_re.match(rtt_info)
return rtt.groupdict()
def validate_results(module, loss, results):
state = module.params["state"]
if state == "present" and int(loss) == 100:
module.fail_json(msg="Ping failed unexpectedly", **results)
elif state == "absent" and int(loss) < 100:
module.fail_json(msg="Ping succeeded unexpectedly", **results)
if __name__ == "__main__":
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,326 |
junos_ping fail when using netconf connection plugin
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The junos_ping module fails when using the netconf connection plugin; however, when using network_cli it works without any problem.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
junos_ping module
##### ANSIBLE VERSION
```
ansible 2.8.1
config file = /vagrant/net_automation_cookbook/ch4_junos/ansible.cfg
configured module search path = ['/home/vagrant/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/vagrant/.local/lib/python3.5/site-packages/ansible
executable location = /home/vagrant/.local/bin/ansible
python version = 3.5.2 (default, Nov 12 2018, 13:43:14) [GCC 5.4.0 20160609]
```
##### CONFIGURATION
```
DEFAULT_GATHERING(/vagrant/net_automation_cookbook/ch4_junos/ansible.cfg) = explicit
DEFAULT_HOST_LIST(/vagrant/net_automation_cookbook/ch4_junos/ansible.cfg) = ['/vagrant/net_automation_cookbook/ch4_junos/hosts']
HOST_KEY_CHECKING(/vagrant/net_automation_cookbook/ch4_junos/ansible.cfg) = False
RETRY_FILES_ENABLED(/vagrant/net_automation_cookbook/ch4_junos/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Tested against Junos 14.1R4.8 and JUNOS 17.1R1.8
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
When using the netconf plugin, the junos_ping module fails with the failure output shown below.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: "Validate Core Reachability"
  hosts: junos
  vars:
    lo_ip:
      mxpe01: 10.100.1.1/32
      mxpe02: 10.100.1.2/32
  tasks:
    - name: "Ping Across All Loopback Interfaces"
      junos_ping:
        dest: "{{ item.value.split('/')[0] }}"
        interface: lo0.0
        size: 512
      with_dict: "{{lo_ip}}"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
When using the network_cli plugin, the same playbook works as expected:
```yaml
- name: "Validate Core Reachability"
  hosts: junos
  vars:
    lo_ip:
      mxpe01: 10.100.1.1/32
      mxpe02: 10.100.1.2/32
  tasks:
    - name: "Ping Across All Loopback Interfaces"
      junos_ping:
        dest: "{{ item.value.split('/')[0] }}"
        interface: lo0.0
        size: 512
      with_dict: "{{lo_ip}}"
      vars:
        ansible_connection: network_cli
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
failed: [mxpe02] (item={'key': 'mxpe02', 'value': '10.100.1.2/32'}) => {"ansible_loop_var": "item", "changed": false, "item": {"key": "mxpe02", "value": "10.100.1.2/32"}, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_junos_ping_payload_ee3oycmt/ansible_junos_ping_payload.zip/ansible/module_utils/network/common/netconf.py\", line 83, in parse_rpc_error\n File \"src/lxml/etree.pyx\", line 3222, in lxml.etree.fromstring\n File \"src/lxml/parser.pxi\", line 1877, in lxml.etree._parseMemoryDocument\n File \"src/lxml/parser.pxi\", line 1765, in lxml.etree._parseDoc\n File \"src/lxml/parser.pxi\", line 1127, in lxml.etree._BaseParser._parseDoc\n File \"src/lxml/parser.pxi\", line 601, in lxml.etree._ParserContext._handleParseResultDoc\n File \"src/lxml/parser.pxi\", line 711, in lxml.etree._handleParseResult\n File \"src/lxml/parser.pxi\", line 640, in lxml.etree._raiseParseError\n File \"<string>\", line 1\nlxml.etree.XMLSyntaxError: Start tag expected, '<' not found, line 1, column 1\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/home/vagrant/.ansible/tmp/ansible-local-6820j09yeqgo/ansible-tmp-1563527319.039324-40487203490951/AnsiballZ_junos_ping.py\", line 114, in <module>\n _ansiballz_main()\n File \"/home/vagrant/.ansible/tmp/ansible-local-6820j09yeqgo/ansible-tmp-1563527319.039324-40487203490951/AnsiballZ_junos_ping.py\", line 106, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/home/vagrant/.ansible/tmp/ansible-local-6820j09yeqgo/ansible-tmp-1563527319.039324-40487203490951/AnsiballZ_junos_ping.py\", line 49, in invoke_module\n imp.load_module('__main__', mod, module, MOD_DESC)\n File \"/usr/lib/python3.5/imp.py\", line 234, in load_module\n return load_source(name, filename, file)\n File \"/usr/lib/python3.5/imp.py\", line 170, in load_source\n module = _exec(spec, sys.modules[name])\n File \"<frozen importlib._bootstrap>\", 
line 626, in _exec\n File \"<frozen importlib._bootstrap_external>\", line 665, in exec_module\n File \"<frozen importlib._bootstrap>\", line 222, in _call_with_frames_removed\n File \"/tmp/ansible_junos_ping_payload_ee3oycmt/__main__.py\", line 231, in <module>\n File \"/tmp/ansible_junos_ping_payload_ee3oycmt/__main__.py\", line 159, in main\n File \"/tmp/ansible_junos_ping_payload_ee3oycmt/ansible_junos_ping_payload.zip/ansible/module_utils/network/common/netconf.py\", line 76, in __rpc__\n File \"/tmp/ansible_junos_ping_payload_ee3oycmt/ansible_junos_ping_payload.zip/ansible/module_utils/network/common/netconf.py\", line 108, in parse_rpc_error\nansible.module_utils.connection.ConnectionError: b\"Start tag expected, '<' not found, line 1, column 1 (<string>, line 1)\"\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible.module_utils.connection.ConnectionError: b"Start tag expected, '<' not found, line 1, column 1 (<string>, line 1)"
```
|
https://github.com/ansible/ansible/issues/59326
|
https://github.com/ansible/ansible/pull/59534
|
f2b0bfd4aaa3c71be4c0fb8f566451936aeca629
|
119f2b873a863d05d38a29599ade2ac09f8c6f01
| 2019-07-19T22:13:43Z |
python
| 2019-07-24T17:41:27Z |
lib/ansible/plugins/action/junos.py
|
#
# (c) 2016 Red Hat Inc.
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import sys
import copy
from ansible.module_utils._text import to_text
from ansible.module_utils.connection import Connection
from ansible.module_utils.network.common.utils import load_provider
from ansible.module_utils.network.junos.junos import junos_provider_spec
from ansible.plugins.action.network import ActionModule as ActionNetworkModule
from ansible.utils.display import Display
display = Display()
CLI_SUPPORTED_MODULES = ['junos_netconf', 'junos_command']
class ActionModule(ActionNetworkModule):
def run(self, tmp=None, task_vars=None):
del tmp # tmp no longer has any effect
self._config_module = True if self._task.action == 'junos_config' else False
socket_path = None
if self._play_context.connection == 'local':
provider = load_provider(junos_provider_spec, self._task.args)
pc = copy.deepcopy(self._play_context)
pc.network_os = 'junos'
pc.remote_addr = provider['host'] or self._play_context.remote_addr
if provider['transport'] == 'cli' and self._task.action not in CLI_SUPPORTED_MODULES:
return {'failed': True, 'msg': "Transport type '%s' is not valid for '%s' module. "
"Please see https://docs.ansible.com/ansible/latest/network/user_guide/platform_junos.html"
% (provider['transport'], self._task.action)}
if self._task.action == 'junos_netconf' or (provider['transport'] == 'cli' and self._task.action == 'junos_command'):
pc.connection = 'network_cli'
pc.port = int(provider['port'] or self._play_context.port or 22)
else:
pc.connection = 'netconf'
pc.port = int(provider['port'] or self._play_context.port or 830)
pc.remote_user = provider['username'] or self._play_context.connection_user
pc.password = provider['password'] or self._play_context.password
pc.private_key_file = provider['ssh_keyfile'] or self._play_context.private_key_file
display.vvv('using connection plugin %s (was local)' % pc.connection, pc.remote_addr)
connection = self._shared_loader_obj.connection_loader.get('persistent', pc, sys.stdin)
command_timeout = int(provider['timeout']) if provider['timeout'] else connection.get_option('persistent_command_timeout')
connection.set_options(direct={'persistent_command_timeout': command_timeout})
socket_path = connection.run()
display.vvvv('socket_path: %s' % socket_path, pc.remote_addr)
if not socket_path:
return {'failed': True,
'msg': 'unable to open shell. Please see: ' +
'https://docs.ansible.com/ansible/network_debug_troubleshooting.html#unable-to-open-shell'}
task_vars['ansible_socket'] = socket_path
elif self._play_context.connection in ('netconf', 'network_cli'):
provider = self._task.args.get('provider', {})
if any(provider.values()):
# for legacy reasons provider value is required for junos_facts(optional) and junos_package
# modules as it uses junos_eznc library to connect to remote host
if not (self._task.action == 'junos_facts' or self._task.action == 'junos_package'):
display.warning('provider is unnecessary when using %s and will be ignored' % self._play_context.connection)
del self._task.args['provider']
if (self._play_context.connection == 'network_cli' and self._task.action not in CLI_SUPPORTED_MODULES) or \
(self._play_context.connection == 'netconf' and self._task.action == 'junos_netconf'):
return {'failed': True, 'msg': "Connection type '%s' is not valid for '%s' module. "
"Please see https://docs.ansible.com/ansible/latest/network/user_guide/platform_junos.html"
% (self._play_context.connection, self._task.action)}
if (self._play_context.connection == 'local' and pc.connection == 'network_cli') or self._play_context.connection == 'network_cli':
# make sure we are in the right cli context which should be
# enable mode and not config module
if socket_path is None:
socket_path = self._connection.socket_path
conn = Connection(socket_path)
out = conn.get_prompt()
while to_text(out, errors='surrogate_then_replace').strip().endswith('#'):
display.vvvv('wrong context, sending exit to device', self._play_context.remote_addr)
conn.send_command('exit')
out = conn.get_prompt()
result = super(ActionModule, self).run(task_vars=task_vars)
return result
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,233 |
docker_image: is missing etc_hosts (extra_hosts argument from python-docker)
|
##### SUMMARY
The docker_image module is missing an etc_hosts parameter, even though the corresponding argument is documented and accepted by docker.build() as of https://docker-py.readthedocs.io/en/stable/images.html
The same argument is already accepted when running containers, but the lack of support here prevents us from building images when extra_hosts is needed.
The only reason I propose the name `etc_hosts` is to keep it in sync with the existing implementation in the `docker_container` module, which accepts `etc_hosts` and translates it to `extra_hosts` in the docker Python API.
I would personally prefer to use the same names as the docker API, but that would require changing docker_container too.
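For illustration, here is a minimal sketch of the translation being proposed. The helper name `build_kwargs` and the option wiring are assumptions for this example, not the module's actual code; the `extra_hosts` keyword (a dict mapping hostname to IP) is what docker-py's `images.build()` documents:

```python
# Hypothetical sketch: translate an Ansible-style 'etc_hosts' option into the
# 'extra_hosts' keyword accepted by docker-py's images.build().
def build_kwargs(path, etc_hosts=None):
    params = {"path": path}
    if etc_hosts:
        # docker-py expects a dict mapping hostname -> IP address
        params["extra_hosts"] = dict(etc_hosts)
    return params

# The resulting dict would then be passed as client.images.build(**params).
print(build_kwargs("/src/app", {"internal.registry": "10.0.0.5"}))
```

Keeping the Ansible-facing name `etc_hosts` and renaming only at the API boundary mirrors what docker_container already does.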
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
docker_image
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible-devel
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/59233
|
https://github.com/ansible/ansible/pull/59540
|
30c1d9754dbfe58a27103b809f2a388ac949c316
|
7c6fb57b7d2dedb732fe7d41131c929ccb917a84
| 2019-07-18T11:12:50Z |
python
| 2019-07-26T20:39:21Z |
changelogs/fragments/docker_image_etc_hosts.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,233 |
docker_image: is missing etc_hosts (extra_hosts argument from python-docker)
|
##### SUMMARY
The docker_image module is missing an etc_hosts parameter, even though the corresponding argument is documented and accepted by docker.build() as of https://docker-py.readthedocs.io/en/stable/images.html
The same argument is already accepted when running containers, but the lack of support here prevents us from building images when extra_hosts is needed.
The only reason I propose the name `etc_hosts` is to keep it in sync with the existing implementation in the `docker_container` module, which accepts `etc_hosts` and translates it to `extra_hosts` in the docker Python API.
I would personally prefer to use the same names as the docker API, but that would require changing docker_container too.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
docker_image
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible-devel
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/59233
|
https://github.com/ansible/ansible/pull/59540
|
30c1d9754dbfe58a27103b809f2a388ac949c316
|
7c6fb57b7d2dedb732fe7d41131c929ccb917a84
| 2019-07-18T11:12:50Z |
python
| 2019-07-26T20:39:21Z |
lib/ansible/modules/cloud/docker/docker_image.py
|
#!/usr/bin/python
#
# Copyright 2016 Red Hat | Ansible
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: docker_image
short_description: Manage docker images.
version_added: "1.5"
description:
- Build, load or pull an image, making the image available for creating containers. Also supports tagging an
image into a repository and archiving an image to a .tar file.
- Since Ansible 2.8, it is recommended to explicitly specify the image's source (C(source=build),
C(source=load), C(source=pull) or C(source=local)). This will be required from Ansible 2.12 on.
options:
source:
description:
- "Determines where the module will try to retrieve the image from."
- "Use C(build) to build the image from a C(Dockerfile). I(path) must
be specified when this value is used."
- "Use C(load) to load the image from a C(.tar) file. I(load_path) must
be specified when this value is used."
- "Use C(pull) to pull the image from a registry."
- "Use C(local) to make sure that the image is already available on the local
docker daemon, i.e. do not try to build, pull or load the image."
- "Before Ansible 2.12, the value of this option will be auto-detected
to be backwards compatible, but a warning will be issued if it is not
explicitly specified. From Ansible 2.12 on, auto-detection will be disabled
and this option will be made mandatory."
type: str
choices:
- build
- load
- pull
- local
version_added: "2.8"
build:
description:
- "Specifies options used for building images."
type: dict
suboptions:
cache_from:
description:
- List of image names to consider as cache source.
type: list
dockerfile:
description:
- Use with state C(present) and source C(build) to provide an alternate name for the Dockerfile to use when building an image.
- This can also include a relative path (relative to I(path)).
type: str
http_timeout:
description:
- Timeout for HTTP requests during the image build operation. Provide a positive integer value for the number of
seconds.
type: int
path:
description:
- Use with state 'present' to build an image. Will be the path to a directory containing the context and
Dockerfile for building an image.
type: path
required: yes
pull:
description:
- When building an image downloads any updates to the FROM image in Dockerfile.
- The default is currently C(yes). This will change to C(no) in Ansible 2.12.
type: bool
rm:
description:
- Remove intermediate containers after build.
type: bool
default: yes
network:
description:
- The network to use for C(RUN) build instructions.
type: str
nocache:
description:
- Do not use cache when building an image.
type: bool
default: no
args:
description:
- Provide a dictionary of C(key:value) build arguments that map to Dockerfile ARG directive.
- Docker expects the value to be a string. For convenience any non-string values will be converted to strings.
- Requires Docker API >= 1.21.
type: dict
container_limits:
description:
- A dictionary of limits applied to each container created by the build process.
type: dict
suboptions:
memory:
description:
- Set memory limit for build.
type: int
memswap:
description:
- Total memory (memory + swap), -1 to disable swap.
type: int
cpushares:
description:
- CPU shares (relative weight).
type: int
cpusetcpus:
description:
- CPUs in which to allow execution, e.g., "0-3", "0,1".
type: str
use_config_proxy:
description:
- If set to `yes` and a proxy configuration is specified in the docker client configuration
(by default C($HOME/.docker/config.json)), the corresponding environment variables will
be set in the container being built.
- Needs Docker SDK for Python >= 3.7.0.
type: bool
target:
description:
- When building an image specifies an intermediate build stage by
name as a final stage for the resulting image.
type: str
version_added: "2.9"
version_added: "2.8"
archive_path:
description:
- Use with state C(present) to archive an image to a .tar file.
type: path
version_added: "2.1"
load_path:
description:
- Use with state C(present) to load an image from a .tar file.
- Set I(source) to C(load) if you want to load the image. The option will
be set automatically before Ansible 2.12 if this option is used (except
if I(path) is specified as well, in which case building will take precedence).
From Ansible 2.12 on, you have to set I(source) to C(load).
type: path
version_added: "2.2"
dockerfile:
description:
- Use with state C(present) and source C(build) to provide an alternate name for the Dockerfile to use when building an image.
- This can also include a relative path (relative to I(path)).
- Please use I(build.dockerfile) instead. This option will be removed in Ansible 2.12.
type: str
version_added: "2.0"
force:
description:
- Use with state I(absent) to un-tag and remove all images matching the specified name. Use with state
C(present) to build, load or pull an image when the image already exists. Also use with state C(present)
to force tagging an image.
- Please stop using this option, and use the more specialized force options
I(force_source), I(force_absent) and I(force_tag) instead.
- This option will be removed in Ansible 2.12.
type: bool
version_added: "2.1"
force_source:
description:
- Use with state C(present) to build, load or pull an image (depending on the
value of the I(source) option) when the image already exists.
type: bool
default: false
version_added: "2.8"
force_absent:
description:
- Use with state I(absent) to un-tag and remove all images matching the specified name.
type: bool
default: false
version_added: "2.8"
force_tag:
description:
- Use with state C(present) to force tagging an image.
type: bool
default: false
version_added: "2.8"
http_timeout:
description:
- Timeout for HTTP requests during the image build operation. Provide a positive integer value for the number of
seconds.
- Please use I(build.http_timeout) instead. This option will be removed in Ansible 2.12.
type: int
version_added: "2.1"
name:
description:
- "Image name. Name format will be one of: name, repository/name, registry_server:port/name.
When pushing or pulling an image the name can optionally include the tag by appending ':tag_name'."
- Note that image IDs (hashes) are not supported.
type: str
required: yes
path:
description:
- Use with state 'present' to build an image. Will be the path to a directory containing the context and
Dockerfile for building an image.
- Set I(source) to C(build) if you want to build the image. The option will
be set automatically before Ansible 2.12 if this option is used. From Ansible 2.12
on, you have to set I(source) to C(build).
- Please use I(build.path) instead. This option will be removed in Ansible 2.12.
type: path
aliases:
- build_path
pull:
description:
- When building an image downloads any updates to the FROM image in Dockerfile.
- Please use I(build.pull) instead. This option will be removed in Ansible 2.12.
- The default is currently C(yes). This will change to C(no) in Ansible 2.12.
type: bool
version_added: "2.1"
push:
description:
- Push the image to the registry. Specify the registry as part of the I(name) or I(repository) parameter.
type: bool
default: no
version_added: "2.2"
rm:
description:
- Remove intermediate containers after build.
- Please use I(build.rm) instead. This option will be removed in Ansible 2.12.
type: bool
default: yes
version_added: "2.1"
nocache:
description:
- Do not use cache when building an image.
- Please use I(build.nocache) instead. This option will be removed in Ansible 2.12.
type: bool
default: no
repository:
description:
      - Full path to a repository. Use with state C(present) to tag the image into the repository. Expects
        format I(repository:tag). If no tag is provided, will use the value of the I(tag) parameter or C(latest).
type: str
version_added: "2.1"
state:
description:
- Make assertions about the state of an image.
- When C(absent) an image will be removed. Use the force option to un-tag and remove all images
matching the provided name.
- When C(present) check if an image exists using the provided name and tag. If the image is not found or the
force option is used, the image will either be pulled, built or loaded, depending on the I(source) option.
- By default the image will be pulled from Docker Hub, or the registry specified in the image's name. Note that
this will change in Ansible 2.12, so to make sure that you are pulling, set I(source) to C(pull). To build
the image, provide a I(path) value set to a directory containing a context and Dockerfile, and set I(source)
to C(build). To load an image, specify I(load_path) to provide a path to an archive file. To tag an image to
a repository, provide a I(repository) path. If the name contains a repository path, it will be pushed.
- "NOTE: C(state=build) is DEPRECATED and will be removed in release 2.11. Specifying C(build) will behave the
same as C(present)."
type: str
default: present
choices:
- absent
- present
- build
tag:
description:
      - Used to select an image when pulling. Will be added to the image when pushing, tagging or building. Defaults to
        C(latest).
      - If the I(name) parameter format is I(name:tag), then the tag value from I(name) will take precedence.
type: str
default: latest
buildargs:
description:
- Provide a dictionary of C(key:value) build arguments that map to Dockerfile ARG directive.
- Docker expects the value to be a string. For convenience any non-string values will be converted to strings.
- Requires Docker API >= 1.21.
- Please use I(build.args) instead. This option will be removed in Ansible 2.12.
type: dict
version_added: "2.2"
container_limits:
description:
- A dictionary of limits applied to each container created by the build process.
- Please use I(build.container_limits) instead. This option will be removed in Ansible 2.12.
type: dict
suboptions:
memory:
description:
- Set memory limit for build.
type: int
memswap:
description:
- Total memory (memory + swap), -1 to disable swap.
type: int
cpushares:
description:
- CPU shares (relative weight).
type: int
cpusetcpus:
description:
- CPUs in which to allow execution, e.g., "0-3", "0,1".
type: str
version_added: "2.1"
use_tls:
description:
- "DEPRECATED. Whether to use tls to connect to the docker daemon. Set to
C(encrypt) to use TLS. And set to C(verify) to use TLS and verify that
the server's certificate is valid for the server."
- "NOTE: If you specify this option, it will set the value of the I(tls) or
I(tls_verify) parameters if not set to I(no)."
- Will be removed in Ansible 2.11.
type: str
choices:
- 'no'
- 'encrypt'
- 'verify'
version_added: "2.0"
extends_documentation_fragment:
- docker
- docker.docker_py_1_documentation
requirements:
- "L(Docker SDK for Python,https://docker-py.readthedocs.io/en/stable/) >= 1.8.0 (use L(docker-py,https://pypi.org/project/docker-py/) for Python 2.6)"
- "Docker API >= 1.20"
author:
- Pavel Antonov (@softzilla)
- Chris Houseknecht (@chouseknecht)
'''
EXAMPLES = '''
- name: Pull an image
docker_image:
name: pacur/centos-7
source: pull
- name: Tag and push to docker hub
docker_image:
name: pacur/centos-7:56
repository: dcoppenhagan/myimage:7.56
push: yes
source: local
- name: Tag and push to local registry
docker_image:
# Image will be centos:7
name: centos
# Will be pushed to localhost:5000/centos:7
repository: localhost:5000/centos
tag: 7
push: yes
source: local
- name: Add tag latest to image
docker_image:
name: myimage:7.1.2
repository: myimage:latest
source: local
- name: Remove image
docker_image:
state: absent
name: registry.ansible.com/chouseknecht/sinatra
tag: v1
- name: Build an image and push it to a private repo
docker_image:
build:
path: ./sinatra
name: registry.ansible.com/chouseknecht/sinatra
tag: v1
push: yes
source: build
- name: Archive image
docker_image:
name: registry.ansible.com/chouseknecht/sinatra
tag: v1
archive_path: my_sinatra.tar
source: local
- name: Load image from archive and push to a private registry
docker_image:
name: localhost:5000/myimages/sinatra
tag: v1
push: yes
load_path: my_sinatra.tar
source: load
- name: Build an image with build args
docker_image:
name: myimage
build:
path: /path/to/build/dir
args:
log_volume: /var/log/myapp
listen_port: 8080
source: build
- name: Build image using cache source
docker_image:
name: myimage:latest
build:
path: /path/to/build/dir
# Use as cache source for building myimage
cache_from:
- nginx:latest
- alpine:3.8
source: build
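# A sketch combining several documented build suboptions (dockerfile, nocache,
# pull); the image name, build path and Dockerfile name below are placeholders,
# not values taken from this module's test suite.
- name: Build an image from an alternative Dockerfile without the build cache
  docker_image:
    name: myimage:latest
    build:
      path: /path/to/build/dir
      dockerfile: Dockerfile-alt
      nocache: yes
      pull: no
    source: build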
'''
RETURN = '''
image:
description: Image inspection results for the affected image.
returned: success
type: dict
sample: {}
'''
import os
import re
import traceback
from distutils.version import LooseVersion
from ansible.module_utils.docker.common import (
docker_version,
AnsibleDockerClient,
DockerBaseClass,
is_image_name_id,
is_valid_tag,
RequestException,
)
from ansible.module_utils._text import to_native
if docker_version is not None:
try:
if LooseVersion(docker_version) >= LooseVersion('2.0.0'):
from docker.auth import resolve_repository_name
else:
from docker.auth.auth import resolve_repository_name
from docker.utils.utils import parse_repository_tag
from docker.errors import DockerException
except ImportError:
# missing Docker SDK for Python handled in module_utils.docker.common
pass
class ImageManager(DockerBaseClass):
def __init__(self, client, results):
super(ImageManager, self).__init__()
self.client = client
self.results = results
parameters = self.client.module.params
self.check_mode = self.client.check_mode
self.source = parameters['source']
build = parameters['build'] or dict()
self.archive_path = parameters.get('archive_path')
self.cache_from = build.get('cache_from')
self.container_limits = build.get('container_limits')
self.dockerfile = build.get('dockerfile')
self.force_source = parameters.get('force_source')
self.force_absent = parameters.get('force_absent')
self.force_tag = parameters.get('force_tag')
self.load_path = parameters.get('load_path')
self.name = parameters.get('name')
self.network = build.get('network')
self.nocache = build.get('nocache', False)
self.build_path = build.get('path')
self.pull = build.get('pull')
self.target = build.get('target')
self.repository = parameters.get('repository')
self.rm = build.get('rm', True)
self.state = parameters.get('state')
self.tag = parameters.get('tag')
self.http_timeout = build.get('http_timeout')
self.push = parameters.get('push')
self.buildargs = build.get('args')
self.use_config_proxy = build.get('use_config_proxy')
# If name contains a tag, it takes precedence over tag parameter.
if not is_image_name_id(self.name):
repo, repo_tag = parse_repository_tag(self.name)
if repo_tag:
self.name = repo
self.tag = repo_tag
if self.state == 'present':
self.present()
elif self.state == 'absent':
self.absent()
def fail(self, msg):
self.client.fail(msg)
def present(self):
'''
Handles state = 'present', which includes building, loading or pulling an image,
depending on user provided parameters.
:returns None
'''
image = self.client.find_image(name=self.name, tag=self.tag)
if not image or self.force_source:
if self.source == 'build':
# Build the image
if not os.path.isdir(self.build_path):
self.fail("Requested build path %s could not be found or you do not have access." % self.build_path)
image_name = self.name
if self.tag:
image_name = "%s:%s" % (self.name, self.tag)
self.log("Building image %s" % image_name)
self.results['actions'].append("Built image %s from %s" % (image_name, self.build_path))
self.results['changed'] = True
if not self.check_mode:
self.results['image'] = self.build_image()
elif self.source == 'load':
# Load the image from an archive
if not os.path.isfile(self.load_path):
self.fail("Error loading image %s. Specified path %s does not exist." % (self.name,
self.load_path))
image_name = self.name
if self.tag:
image_name = "%s:%s" % (self.name, self.tag)
self.results['actions'].append("Loaded image %s from %s" % (image_name, self.load_path))
self.results['changed'] = True
if not self.check_mode:
self.results['image'] = self.load_image()
elif self.source == 'pull':
# pull the image
self.results['actions'].append('Pulled image %s:%s' % (self.name, self.tag))
self.results['changed'] = True
if not self.check_mode:
self.results['image'], dummy = self.client.pull_image(self.name, tag=self.tag)
elif self.source == 'local':
if image is None:
name = self.name
if self.tag:
name = "%s:%s" % (self.name, self.tag)
self.client.fail('Cannot find the image %s locally.' % name)
if not self.check_mode and image and image['Id'] == self.results['image']['Id']:
self.results['changed'] = False
if self.archive_path:
self.archive_image(self.name, self.tag)
if self.push and not self.repository:
self.push_image(self.name, self.tag)
elif self.repository:
self.tag_image(self.name, self.tag, self.repository, push=self.push)
def absent(self):
'''
Handles state = 'absent', which removes an image.
:return None
'''
name = self.name
if is_image_name_id(name):
image = self.client.find_image_by_id(name)
else:
image = self.client.find_image(name, self.tag)
if self.tag:
name = "%s:%s" % (self.name, self.tag)
if image:
if not self.check_mode:
try:
self.client.remove_image(name, force=self.force_absent)
except Exception as exc:
self.fail("Error removing image %s - %s" % (name, str(exc)))
self.results['changed'] = True
self.results['actions'].append("Removed image %s" % (name))
self.results['image']['state'] = 'Deleted'
def archive_image(self, name, tag):
'''
Archive an image to a .tar file. Called when archive_path is passed.
        :param name - name of the image. Type: str
        :param tag - tag of the image. Type: str
        :return None
'''
if not tag:
tag = "latest"
image = self.client.find_image(name=name, tag=tag)
if not image:
self.log("archive image: image %s:%s not found" % (name, tag))
return
image_name = "%s:%s" % (name, tag)
self.results['actions'].append('Archived image %s to %s' % (image_name, self.archive_path))
self.results['changed'] = True
if not self.check_mode:
self.log("Getting archive of image %s" % image_name)
try:
image = self.client.get_image(image_name)
except Exception as exc:
self.fail("Error getting image %s - %s" % (image_name, str(exc)))
try:
with open(self.archive_path, 'wb') as fd:
if self.client.docker_py_version >= LooseVersion('3.0.0'):
for chunk in image:
fd.write(chunk)
else:
for chunk in image.stream(2048, decode_content=False):
fd.write(chunk)
except Exception as exc:
self.fail("Error writing image archive %s - %s" % (self.archive_path, str(exc)))
image = self.client.find_image(name=name, tag=tag)
if image:
self.results['image'] = image
def push_image(self, name, tag=None):
'''
If the name of the image contains a repository path, then push the image.
:param name Name of the image to push.
:param tag Use a specific tag.
:return: None
'''
repository = name
if not tag:
repository, tag = parse_repository_tag(name)
registry, repo_name = resolve_repository_name(repository)
self.log("push %s to %s/%s:%s" % (self.name, registry, repo_name, tag))
if registry:
self.results['actions'].append("Pushed image %s to %s/%s:%s" % (self.name, registry, repo_name, tag))
self.results['changed'] = True
if not self.check_mode:
status = None
try:
changed = False
for line in self.client.push(repository, tag=tag, stream=True, decode=True):
self.log(line, pretty_print=True)
if line.get('errorDetail'):
raise Exception(line['errorDetail']['message'])
status = line.get('status')
if status == 'Pushing':
changed = True
self.results['changed'] = changed
except Exception as exc:
if re.search('unauthorized', str(exc)):
if re.search('authentication required', str(exc)):
self.fail("Error pushing image %s/%s:%s - %s. Try logging into %s first." %
(registry, repo_name, tag, str(exc), registry))
else:
self.fail("Error pushing image %s/%s:%s - %s. Does the repository exist?" %
(registry, repo_name, tag, str(exc)))
self.fail("Error pushing image %s: %s" % (repository, str(exc)))
self.results['image'] = self.client.find_image(name=repository, tag=tag)
if not self.results['image']:
self.results['image'] = dict()
self.results['image']['push_status'] = status
def tag_image(self, name, tag, repository, push=False):
'''
Tag an image into a repository.
:param name: name of the image. required.
:param tag: image tag.
:param repository: path to the repository. required.
:param push: bool. push the image once it's tagged.
:return: None
'''
repo, repo_tag = parse_repository_tag(repository)
if not repo_tag:
repo_tag = "latest"
if tag:
repo_tag = tag
image = self.client.find_image(name=repo, tag=repo_tag)
found = 'found' if image else 'not found'
self.log("image %s was %s" % (repo, found))
if not image or self.force_tag:
self.log("tagging %s:%s to %s:%s" % (name, tag, repo, repo_tag))
self.results['changed'] = True
self.results['actions'].append("Tagged image %s:%s to %s:%s" % (name, tag, repo, repo_tag))
if not self.check_mode:
try:
# Finding the image does not always work, especially running a localhost registry. In those
# cases, if we don't set force=True, it errors.
image_name = name
if tag and not re.search(tag, name):
image_name = "%s:%s" % (name, tag)
tag_status = self.client.tag(image_name, repo, tag=repo_tag, force=True)
if not tag_status:
raise Exception("Tag operation failed.")
except Exception as exc:
self.fail("Error: failed to tag image - %s" % str(exc))
self.results['image'] = self.client.find_image(name=repo, tag=repo_tag)
if image and image['Id'] == self.results['image']['Id']:
self.results['changed'] = False
if push:
self.push_image(repo, repo_tag)
def build_image(self):
'''
Build an image
:return: image dict
'''
params = dict(
path=self.build_path,
tag=self.name,
rm=self.rm,
nocache=self.nocache,
timeout=self.http_timeout,
pull=self.pull,
forcerm=self.rm,
dockerfile=self.dockerfile,
decode=True,
)
if self.client.docker_py_version < LooseVersion('3.0.0'):
params['stream'] = True
build_output = []
if self.tag:
params['tag'] = "%s:%s" % (self.name, self.tag)
if self.container_limits:
params['container_limits'] = self.container_limits
if self.buildargs:
for key, value in self.buildargs.items():
self.buildargs[key] = to_native(value)
params['buildargs'] = self.buildargs
if self.cache_from:
params['cache_from'] = self.cache_from
if self.network:
params['network_mode'] = self.network
if self.use_config_proxy:
params['use_config_proxy'] = self.use_config_proxy
# Due to a bug in docker-py, it will crash if
# use_config_proxy is True and buildargs is None
if 'buildargs' not in params:
params['buildargs'] = {}
if self.target:
params['target'] = self.target
for line in self.client.build(**params):
# line = json.loads(line)
self.log(line, pretty_print=True)
if "stream" in line:
build_output.append(line["stream"])
if line.get('error'):
if line.get('errorDetail'):
errorDetail = line.get('errorDetail')
self.fail(
"Error building %s - code: %s, message: %s, logs: %s" % (
self.name,
errorDetail.get('code'),
errorDetail.get('message'),
build_output))
else:
self.fail("Error building %s - message: %s, logs: %s" % (
self.name, line.get('error'), build_output))
return self.client.find_image(name=self.name, tag=self.tag)
def load_image(self):
'''
Load an image from a .tar archive
:return: image dict
'''
try:
self.log("Opening image %s" % self.load_path)
image_tar = open(self.load_path, 'rb')
except Exception as exc:
self.fail("Error opening image %s - %s" % (self.load_path, str(exc)))
try:
self.log("Loading image from %s" % self.load_path)
self.client.load_image(image_tar)
except Exception as exc:
self.fail("Error loading image %s - %s" % (self.name, str(exc)))
try:
image_tar.close()
except Exception as exc:
self.fail("Error closing image %s - %s" % (self.name, str(exc)))
return self.client.find_image(self.name, self.tag)
def main():
argument_spec = dict(
source=dict(type='str', choices=['build', 'load', 'pull', 'local']),
build=dict(type='dict', suboptions=dict(
cache_from=dict(type='list', elements='str'),
container_limits=dict(type='dict', options=dict(
memory=dict(type='int'),
memswap=dict(type='int'),
cpushares=dict(type='int'),
cpusetcpus=dict(type='str'),
)),
dockerfile=dict(type='str'),
http_timeout=dict(type='int'),
network=dict(type='str'),
nocache=dict(type='bool', default=False),
path=dict(type='path', required=True),
pull=dict(type='bool'),
rm=dict(type='bool', default=True),
args=dict(type='dict'),
use_config_proxy=dict(type='bool'),
target=dict(type='str'),
)),
archive_path=dict(type='path'),
container_limits=dict(type='dict', options=dict(
memory=dict(type='int'),
memswap=dict(type='int'),
cpushares=dict(type='int'),
cpusetcpus=dict(type='str'),
        ), removed_in_version='2.12'),
        dockerfile=dict(type='str', removed_in_version='2.12'),
force=dict(type='bool', removed_in_version='2.12'),
force_source=dict(type='bool', default=False),
force_absent=dict(type='bool', default=False),
force_tag=dict(type='bool', default=False),
        http_timeout=dict(type='int', removed_in_version='2.12'),
load_path=dict(type='path'),
name=dict(type='str', required=True),
        nocache=dict(type='bool', default=False, removed_in_version='2.12'),
        path=dict(type='path', aliases=['build_path'], removed_in_version='2.12'),
        pull=dict(type='bool', removed_in_version='2.12'),
push=dict(type='bool', default=False),
repository=dict(type='str'),
        rm=dict(type='bool', default=True, removed_in_version='2.12'),
state=dict(type='str', default='present', choices=['absent', 'present', 'build']),
tag=dict(type='str', default='latest'),
use_tls=dict(type='str', choices=['no', 'encrypt', 'verify'], removed_in_version='2.11'),
        buildargs=dict(type='dict', removed_in_version='2.12'),
)
required_if = [
# ('state', 'present', ['source']), -- enable in Ansible 2.12.
# ('source', 'build', ['build']), -- enable in Ansible 2.12.
('source', 'load', ['load_path']),
]
def detect_build_cache_from(client):
return client.module.params['build'] and client.module.params['build'].get('cache_from') is not None
def detect_build_network(client):
return client.module.params['build'] and client.module.params['build'].get('network') is not None
def detect_build_target(client):
return client.module.params['build'] and client.module.params['build'].get('target') is not None
def detect_use_config_proxy(client):
return client.module.params['build'] and client.module.params['build'].get('use_config_proxy') is not None
option_minimal_versions = dict()
option_minimal_versions["build.cache_from"] = dict(docker_py_version='2.1.0', docker_api_version='1.25', detect_usage=detect_build_cache_from)
option_minimal_versions["build.network"] = dict(docker_py_version='2.4.0', docker_api_version='1.25', detect_usage=detect_build_network)
option_minimal_versions["build.target"] = dict(docker_py_version='2.4.0', detect_usage=detect_build_target)
option_minimal_versions["build.use_config_proxy"] = dict(docker_py_version='3.7.0', detect_usage=detect_use_config_proxy)
client = AnsibleDockerClient(
argument_spec=argument_spec,
required_if=required_if,
supports_check_mode=True,
min_docker_version='1.8.0',
min_docker_api_version='1.20',
option_minimal_versions=option_minimal_versions,
)
if client.module.params['state'] == 'build':
client.module.warn('The "build" state has been deprecated for a long time '
'and will be removed in Ansible 2.11. Please use '
'"present", which has the same meaning as "build".')
client.module.params['state'] = 'present'
if client.module.params['use_tls']:
client.module.warn('The "use_tls" option has been deprecated for a long time '
                           'and will be removed in Ansible 2.11. Please use the '
'"tls" and "tls_verify" options instead.')
if not is_valid_tag(client.module.params['tag'], allow_empty=True):
client.fail('"{0}" is not a valid docker tag!'.format(client.module.params['tag']))
build_options = dict(
container_limits='container_limits',
dockerfile='dockerfile',
http_timeout='http_timeout',
nocache='nocache',
path='path',
pull='pull',
rm='rm',
buildargs='args',
)
for option, build_option in build_options.items():
default_value = None
if option in ('rm', ):
default_value = True
elif option in ('nocache', ):
default_value = False
if client.module.params[option] != default_value:
if client.module.params['build'] is None:
client.module.params['build'] = dict()
if client.module.params['build'].get(build_option, default_value) != default_value:
client.fail('Cannot specify both %s and build.%s!' % (option, build_option))
client.module.params['build'][build_option] = client.module.params[option]
client.module.warn('Please specify build.%s instead of %s. The %s option '
'has been renamed and will be removed in Ansible 2.12.' % (build_option, option, option))
if client.module.params['source'] == 'build':
if (not client.module.params['build'] or not client.module.params['build'].get('path')):
client.fail('If "source" is set to "build", the "build.path" option must be specified.')
if client.module.params['build'].get('pull') is None:
client.module.warn("The default for build.pull is currently 'yes', but will be changed to 'no' in Ansible 2.12. "
"Please set build.pull explicitly to the value you need.")
client.module.params['build']['pull'] = True # TODO: change to False in Ansible 2.12
if client.module.params['state'] == 'present' and client.module.params['source'] is None:
# Autodetection. To be removed in Ansible 2.12.
if (client.module.params['build'] or dict()).get('path'):
client.module.params['source'] = 'build'
elif client.module.params['load_path']:
client.module.params['source'] = 'load'
else:
client.module.params['source'] = 'pull'
client.module.warn('The value of the "source" option was determined to be "%s". '
'Please set the "source" option explicitly. Autodetection will '
'be removed in Ansible 2.12.' % client.module.params['source'])
if client.module.params['force']:
client.module.params['force_source'] = True
client.module.params['force_absent'] = True
client.module.params['force_tag'] = True
client.module.warn('The "force" option will be removed in Ansible 2.12. Please '
'use the "force_source", "force_absent" or "force_tag" option '
'instead, depending on what you want to force.')
try:
results = dict(
changed=False,
actions=[],
image={}
)
ImageManager(client, results)
client.module.exit_json(**results)
except DockerException as e:
client.fail('An unexpected docker error occurred: {0}'.format(e), exception=traceback.format_exc())
except RequestException as e:
client.fail('An unexpected requests error occurred when docker-py tried to talk to the docker daemon: {0}'.format(e), exception=traceback.format_exc())
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,233 |
docker_image: is missing etc_hosts (extra_hosts argument from python-docker)
|
##### SUMMARY
The docker_image module is missing the etc_hosts parameter, even though it is documented and accepted by docker.build(), as per https://docker-py.readthedocs.io/en/stable/images.html
The same argument is accepted by docker_run, but the lack of support here prevents us from building images when extra_hosts is needed.
The only reason I propose `etc_hosts` is to keep it in sync with the existing implementation in the `docker_container` module, which accepts `etc_hosts` and translates it to `extra_hosts` in the docker Python API.
I would personally prefer to use the same names as the docker API, but this would require changing docker_container too.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
docker_image
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible-devel
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
```
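The mapping the issue asks for can be sketched without a docker daemon: the proposed Ansible option name `etc_hosts` would have to be translated to docker-py's `extra_hosts` build parameter. The helper below is hypothetical (not part of the module); the actual client call is left as a comment because it requires a running daemon.

```python
# Sketch of the option translation docker_image would need; 'etc_hosts'
# is the proposed Ansible option, 'extra_hosts' is docker-py's parameter.
def build_kwargs(etc_hosts):
    # Hypothetical helper: base build arguments plus the translated option.
    kwargs = {'path': '.', 'tag': 'myimage:latest'}
    if etc_hosts:
        kwargs['extra_hosts'] = dict(etc_hosts)
    return kwargs

kwargs = build_kwargs({'some-custom-host': '127.0.0.1'})
print(kwargs['extra_hosts'])
# A real build would then be e.g. docker.APIClient().build(**kwargs)
```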
|
https://github.com/ansible/ansible/issues/59233
|
https://github.com/ansible/ansible/pull/59540
|
30c1d9754dbfe58a27103b809f2a388ac949c316
|
7c6fb57b7d2dedb732fe7d41131c929ccb917a84
| 2019-07-18T11:12:50Z |
python
| 2019-07-26T20:39:21Z |
test/integration/targets/docker_image/files/EtcHostsDockerfile
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,233 |
docker_image: is missing etc_hosts (extra_hosts argument from python-docker)
|
##### SUMMARY
The docker_image module is missing the etc_hosts parameter, even though it is documented and accepted by docker.build(), as per https://docker-py.readthedocs.io/en/stable/images.html
The same argument is accepted by docker_run, but the lack of support here prevents us from building images when extra_hosts is needed.
The only reason I propose `etc_hosts` is to keep it in sync with the existing implementation in the `docker_container` module, which accepts `etc_hosts` and translates it to `extra_hosts` in the docker Python API.
I would personally prefer to use the same names as the docker API, but this would require changing docker_container too.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
docker_image
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible-devel
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/59233
|
https://github.com/ansible/ansible/pull/59540
|
30c1d9754dbfe58a27103b809f2a388ac949c316
|
7c6fb57b7d2dedb732fe7d41131c929ccb917a84
| 2019-07-18T11:12:50Z |
python
| 2019-07-26T20:39:21Z |
test/integration/targets/docker_image/tasks/tests/options.yml
|
---
- name: Registering image name
set_fact:
iname: "{{ name_prefix ~ '-options' }}"
iname_1: "{{ name_prefix ~ '-options-1' }}"
- name: Registering image name
set_fact:
inames: "{{ inames + [iname, iname_1] }}"
####################################################################
## build.args ######################################################
####################################################################
- name: buildargs
docker_image:
name: "{{ iname }}"
build:
path: "{{ role_path }}/files"
args:
TEST1: val1
TEST2: val2
TEST3: "True"
pull: no
source: build
register: buildargs_1
- name: buildargs (idempotency)
docker_image:
name: "{{ iname }}"
build:
path: "{{ role_path }}/files"
args:
TEST1: val1
TEST2: val2
TEST3: "True"
pull: no
source: build
register: buildargs_2
- name: cleanup
docker_image:
name: "{{ iname }}"
state: absent
force_absent: yes
- assert:
that:
- buildargs_1 is changed
- buildargs_2 is not changed
when: docker_py_version is version('1.6.0', '>=')
- assert:
that:
- buildargs_1 is failed
- buildargs_2 is failed
when: docker_py_version is version('1.6.0', '<')
####################################################################
## container_limits ################################################
####################################################################
- name: container_limits (Failed due to min memory limit)
docker_image:
name: "{{ iname }}"
build:
path: "{{ role_path }}/files"
container_limits:
memory: 4000
pull: no
source: build
ignore_errors: yes
register: container_limits_1
- name: container_limits
docker_image:
name: "{{ iname }}"
build:
path: "{{ role_path }}/files"
container_limits:
memory: 5000000
memswap: 7000000
pull: no
source: build
register: container_limits_2
- name: cleanup
docker_image:
name: "{{ iname }}"
state: absent
force_absent: yes
- assert:
that:
# It *sometimes* happens that the first task does not fail.
# For now, we work around this by
# a) requiring that if it fails, the message must
# contain 'Minimum memory limit allowed is 4MB', and
# b) requiring that either the first task, or the second
# task is changed, but not both.
- "not container_limits_1 is failed or ('Minimum memory limit allowed is 4MB') in container_limits_1.msg"
- "container_limits_1 is changed or container_limits_2 is changed and not (container_limits_1 is changed and container_limits_2 is changed)"
####################################################################
## dockerfile ######################################################
####################################################################
- name: dockerfile
docker_image:
name: "{{ iname }}"
build:
path: "{{ role_path }}/files"
dockerfile: "MyDockerfile"
pull: no
source: build
register: dockerfile_1
- name: cleanup
docker_image:
name: "{{ iname }}"
state: absent
force_absent: yes
- assert:
that:
- dockerfile_1 is changed
- dockerfile_1['image']['Config']['WorkingDir'] == '/newdata'
####################################################################
## repository ######################################################
####################################################################
- name: Make sure image is not there
docker_image:
name: "{{ registry_address }}/test/{{ iname }}:latest"
state: absent
force_absent: yes
- name: repository
docker_image:
name: "{{ iname }}"
build:
path: "{{ role_path }}/files"
pull: no
repository: "{{ registry_address }}/test/{{ iname }}"
source: build
register: repository_1
- name: repository (idempotent)
docker_image:
name: "{{ iname }}"
build:
path: "{{ role_path }}/files"
pull: no
repository: "{{ registry_address }}/test/{{ iname }}"
source: build
register: repository_2
- assert:
that:
- repository_1 is changed
- repository_2 is not changed
- name: Get facts of image
docker_image_info:
name: "{{ registry_address }}/test/{{ iname }}:latest"
register: facts_1
- name: cleanup
docker_image:
name: "{{ registry_address }}/test/{{ iname }}:latest"
state: absent
force_absent: yes
- assert:
that:
- facts_1.images | length == 1
####################################################################
## force ###########################################################
####################################################################
- name: Build an image
docker_image:
name: "{{ iname }}"
build:
path: "{{ role_path }}/files"
pull: no
source: build
- name: force (changed)
docker_image:
name: "{{ iname }}"
build:
path: "{{ role_path }}/files"
dockerfile: "MyDockerfile"
pull: no
source: build
force_source: yes
register: force_1
- name: force (unchanged)
docker_image:
name: "{{ iname }}"
build:
path: "{{ role_path }}/files"
dockerfile: "MyDockerfile"
pull: no
source: build
force_source: yes
register: force_2
- name: cleanup
docker_image:
name: "{{ iname }}"
state: absent
force_absent: yes
- assert:
that:
- force_1 is changed
- force_2 is not changed
####################################################################
## load path #######################################################
####################################################################
- name: Archive image
docker_image:
name: "hello-world:latest"
archive_path: "{{ output_dir }}/image.tar"
source: pull
register: archive_image
- name: remove image
docker_image:
name: "hello-world:latest"
state: absent
force_absent: yes
- name: load image (changed)
docker_image:
name: "hello-world:latest"
load_path: "{{ output_dir }}/image.tar"
source: load
register: load_image
- name: load image (idempotency)
docker_image:
name: "hello-world:latest"
load_path: "{{ output_dir }}/image.tar"
source: load
register: load_image_1
- assert:
that:
- load_image is changed
- load_image_1 is not changed
- archive_image['image']['Id'] == load_image['image']['Id']
####################################################################
## path ############################################################
####################################################################
- name: Build image
docker_image:
name: "{{ iname }}"
build:
path: "{{ role_path }}/files"
pull: no
source: build
register: path_1
- name: Build image (idempotency)
docker_image:
name: "{{ iname }}"
build:
path: "{{ role_path }}/files"
pull: no
source: build
register: path_2
- name: cleanup
docker_image:
name: "{{ iname }}"
state: absent
force_absent: yes
- assert:
that:
- path_1 is changed
- path_2 is not changed
####################################################################
## target ##########################################################
####################################################################
- name: Build multi-stage image
docker_image:
name: "{{ iname }}"
build:
path: "{{ role_path }}/files"
dockerfile: "StagedDockerfile"
target: first
pull: no
source: build
register: dockerfile_2
- name: cleanup
docker_image:
name: "{{ iname }}"
state: absent
force_absent: yes
- assert:
that:
- dockerfile_2 is changed
- dockerfile_2.image.Config.WorkingDir == '/first'
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,576 |
vmware_deploy_ovf: AttributeError: 'vim.ServiceInstance' object has no attribute 'rootFolder'
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Deploying an OVF fails with an `AttributeError`.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_deploy_ovf
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.9.0.dev0
config file = /home/qiz/workspace/gitlab/newgos_testing/ansible.cfg
configured module search path = [u'/home/qiz/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/qiz/workspace/github/ansible/lib/ansible
executable location = /home/qiz/workspace/github/ansible/bin/ansible
python version = 2.7.12 (default, Dec 4 2017, 14:50:18) [GCC 5.4.0 20160609]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
ANSIBLE_SSH_RETRIES(/home/qiz/workspace/gitlab/newgos_testing/ansible.cfg) = 5
DEFAULT_BECOME(/home/qiz/workspace/gitlab/newgos_testing/ansible.cfg) = True
DEFAULT_BECOME_ASK_PASS(/home/qiz/workspace/gitlab/newgos_testing/ansible.cfg) = False
DEFAULT_BECOME_METHOD(/home/qiz/workspace/gitlab/newgos_testing/ansible.cfg) = sudo
DEFAULT_BECOME_USER(/home/qiz/workspace/gitlab/newgos_testing/ansible.cfg) = root
DEFAULT_CALLBACK_PLUGIN_PATH(/home/qiz/workspace/gitlab/newgos_testing/ansible.cfg) = [u'/home/qiz/workspace/gitlab/newgos_testi
DEFAULT_HOST_LIST(/home/qiz/workspace/gitlab/newgos_testing/ansible.cfg) = [u'/home/qiz/workspace/gitlab/newgos_testing/hosts']
DISPLAY_SKIPPED_HOSTS(/home/qiz/workspace/gitlab/newgos_testing/ansible.cfg) = False
HOST_KEY_CHECKING(/home/qiz/workspace/gitlab/newgos_testing/ansible.cfg) = False
RETRY_FILES_ENABLED(/home/qiz/workspace/gitlab/newgos_testing/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```
- hosts: localhost
vars_files:
- ../vars/test.yml
tasks:
- vmware_deploy_ovf:
hostname: "{{ esxi_hostname }}"
username: "{{ esxi_username }}"
password: "{{ esxi_password }}"
validate_certs: False
datacenter: 'ha-datacenter'
datastore: "{{datastore}}"
networks: "{{ ovf_networks | default({'VM Network': 'VM Network'}) }}"
ovf: "{{ovf_path}}"
name: "{{ovf_vm_name}}"
allow_duplicates: False
disk_provisioning: "thin"
power_on: False
wait_for_ip_address: False
register: ovf_deploy
delegate_to: localhost
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```
The full traceback is:
Traceback (most recent call last):
File "/home/qiz/.ansible/tmp/ansible-tmp-1564076387.99-37546435245957/AnsiballZ_vmware_deploy_ovf.py", line 125, in <module>
_ansiballz_main()
File "/home/qiz/.ansible/tmp/ansible-tmp-1564076387.99-37546435245957/AnsiballZ_vmware_deploy_ovf.py", line 117, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/qiz/.ansible/tmp/ansible-tmp-1564076387.99-37546435245957/AnsiballZ_vmware_deploy_ovf.py", line 54, in invoke_module
imp.load_module('__main__', mod, module, MOD_DESC)
File "/tmp/ansible_vmware_deploy_ovf_payload_MVOaC0/__main__.py", line 704, in <module>
File "/tmp/ansible_vmware_deploy_ovf_payload_MVOaC0/__main__.py", line 696, in main
File "/tmp/ansible_vmware_deploy_ovf_payload_MVOaC0/__main__.py", line 484, in upload
File "/tmp/ansible_vmware_deploy_ovf_payload_MVOaC0/__main__.py", line 389, in get_lease
File "/tmp/ansible_vmware_deploy_ovf_payload_MVOaC0/__main__.py", line 359, in get_objects
File "/tmp/ansible_vmware_deploy_ovf_payload_MVOaC0/ansible_vmware_deploy_ovf_payload.zip/ansible/module_utils/vmware.py", line 186, in find_network_by_name
File "/tmp/ansible_vmware_deploy_ovf_payload_MVOaC0/ansible_vmware_deploy_ovf_payload.zip/ansible/module_utils/vmware.py", line 132, in find_object_by_name
File "/tmp/ansible_vmware_deploy_ovf_payload_MVOaC0/ansible_vmware_deploy_ovf_payload.zip/ansible/module_utils/vmware.py", line 565, in get_all_objs
AttributeError: 'vim.ServiceInstance' object has no attribute 'rootFolder'
fatal: [localhost -> localhost]: FAILED! => {
"changed": false,
"module_stderr": "Traceback (most recent call last):\n File \"/home/qiz/.ansible/tmp/ansible-tmp-1564076387.99-37546435245957/AnsiballZ_vmware_deploy_ovf.py\", line 125, in <module>\n _ansiballz_main()\n File \"/home/qiz/.ansible/tmp/ansible-tmp-1564076387.99-37546435245957/AnsiballZ_vmware_deploy_ovf.py\", line 117, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/home/qiz/.ansible/tmp/ansible-tmp-1564076387.99-37546435245957/AnsiballZ_vmware_deploy_ovf.py\", line 54, in invoke_module\n imp.load_module('__main__', mod, module, MOD_DESC)\n File \"/tmp/ansible_vmware_deploy_ovf_payload_MVOaC0/__main__.py\", line 704, in <module>\n File \"/tmp/ansible_vmware_deploy_ovf_payload_MVOaC0/__main__.py\", line 696, in main\n File \"/tmp/ansible_vmware_deploy_ovf_payload_MVOaC0/__main__.py\", line 484, in upload\n File \"/tmp/ansible_vmware_deploy_ovf_payload_MVOaC0/__main__.py\", line 389, in get_lease\n File \"/tmp/ansible_vmware_deploy_ovf_payload_MVOaC0/__main__.py\", line 359, in get_objects\n File \"/tmp/ansible_vmware_deploy_ovf_payload_MVOaC0/ansible_vmware_deploy_ovf_payload.zip/ansible/module_utils/vmware.py\", line 186, in find_network_by_name\n File \"/tmp/ansible_vmware_deploy_ovf_payload_MVOaC0/ansible_vmware_deploy_ovf_payload.zip/ansible/module_utils/vmware.py\", line 132, in find_object_by_name\n File \"/tmp/ansible_vmware_deploy_ovf_payload_MVOaC0/ansible_vmware_deploy_ovf_payload.zip/ansible/module_utils/vmware.py\", line 565, in get_all_objs\nAttributeError: 'vim.ServiceInstance' object has no attribute 'rootFolder'\n",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
```
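The traceback points at an argument-type mismatch: `get_all_objs` in `module_utils/vmware.py` dereferences `content.rootFolder`, an attribute that exists on `vim.ServiceContent`, while the module's `get_objects` passes the raw `vim.ServiceInstance` (`self.si`), which only exposes the content via `RetrieveContent()`. A minimal stand-in sketch of the failure mode (plain Python, no pyVmomi required — the stub classes below are illustrative only, not the real vim API):

```python
class ServiceContent:
    """Stand-in for vim.ServiceContent, which carries rootFolder."""
    def __init__(self):
        self.rootFolder = "group-d1"


class ServiceInstance:
    """Stand-in for vim.ServiceInstance: no rootFolder attribute;
    the content must be fetched via RetrieveContent()."""
    def RetrieveContent(self):
        return ServiceContent()


def get_all_objs(content):
    # Mirrors the failing line in module_utils/vmware.py:
    # the helper dereferences content.rootFolder directly.
    return content.rootFolder


si = ServiceInstance()

try:
    get_all_objs(si)  # what vmware_deploy_ovf effectively does here
except AttributeError as exc:
    print("fails:", exc)

# Passing the content object instead works as the helper expects.
print("works:", get_all_objs(si.RetrieveContent()))  # -> works: group-d1
```

Passing `si.RetrieveContent()` (or the content object the `PyVmomi` base class already caches) where the helper expects a `ServiceContent` would avoid this `AttributeError`; whether that matches the fix merged in the linked PR is not shown in this record.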
|
https://github.com/ansible/ansible/issues/59576
|
https://github.com/ansible/ansible/pull/59614
|
08d7905be2a45e51ceed315974e25d1008bf4263
|
336be58665e946b4c4c071ced83186fddc128448
| 2019-07-25T09:50:11Z |
python
| 2019-07-29T09:00:02Z |
lib/ansible/modules/cloud/vmware/vmware_deploy_ovf.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2017, Matt Martz <[email protected]>
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = '''
author: 'Matt Martz (@sivel)'
short_description: 'Deploys a VMware virtual machine from an OVF or OVA file'
description:
- 'This module can be used to deploy a VMware VM from an OVF or OVA file'
module: vmware_deploy_ovf
notes: []
options:
allow_duplicates:
default: "yes"
description:
- Whether or not to allow duplicate VM names. ESXi allows duplicates, vCenter may not.
type: bool
datacenter:
default: ha-datacenter
description:
- Datacenter to deploy to.
type: str
cluster:
description:
- Cluster to deploy to.
type: str
datastore:
default: datastore1
description:
- Datastore to deploy to.
- "You can also specify a datastore storage cluster. This was added in version 2.9."
type: str
deployment_option:
description:
- The key of the chosen deployment option.
type: str
disk_provisioning:
choices:
- flat
- eagerZeroedThick
- monolithicSparse
- twoGbMaxExtentSparse
- twoGbMaxExtentFlat
- thin
- sparse
- thick
- seSparse
- monolithicFlat
default: thin
description:
- Disk provisioning type.
type: str
fail_on_spec_warnings:
description:
- Cause the module to treat OVF Import Spec warnings as errors.
default: "no"
type: bool
folder:
description:
- Absolute path of folder to place the virtual machine.
- If not specified, defaults to the value of C(datacenter.vmFolder).
- 'Examples:'
- ' folder: /ha-datacenter/vm'
- ' folder: ha-datacenter/vm'
- ' folder: /datacenter1/vm'
- ' folder: datacenter1/vm'
- ' folder: /datacenter1/vm/folder1'
- ' folder: datacenter1/vm/folder1'
- ' folder: /folder1/datacenter1/vm'
- ' folder: folder1/datacenter1/vm'
- ' folder: /folder1/datacenter1/vm/folder2'
type: str
inject_ovf_env:
description:
- Force the given properties to be inserted into an OVF Environment and injected through VMware Tools.
version_added: "2.8"
type: bool
name:
description:
- Name of the VM to work with.
- Virtual machine names in vCenter are not necessarily unique, which may be problematic.
type: str
networks:
default:
VM Network: VM Network
description:
- 'C(key: value) mapping of OVF network name, to the vCenter network name.'
type: dict
ovf:
description:
- 'Path to OVF or OVA file to deploy.'
aliases:
- ova
power_on:
default: true
description:
- 'Whether or not to power on the virtual machine after creation.'
type: bool
properties:
description:
- The assignment of values to the properties found in the OVF as key value pairs.
type: dict
resource_pool:
default: Resources
description:
- Resource Pool to deploy to.
type: str
wait:
default: true
description:
- 'Wait for the host to power on.'
type: bool
wait_for_ip_address:
default: false
description:
- Wait until vCenter detects an IP address for the VM.
- This requires vmware-tools (vmtoolsd) to properly work after creation.
type: bool
requirements:
- pyvmomi
version_added: "2.7"
extends_documentation_fragment: vmware.documentation
'''
EXAMPLES = r'''
- vmware_deploy_ovf:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
ovf: /path/to/ubuntu-16.04-amd64.ovf
wait_for_ip_address: true
delegate_to: localhost
# Deploys a new VM named 'NewVM' in specific datacenter/cluster, with network mapping taken from variable and using ova template from an absolute path
- vmware_deploy_ovf:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
datacenter: Datacenter1
cluster: Cluster1
datastore: vsandatastore
name: NewVM
networks: "{u'VM Network':u'{{ ProvisioningNetworkLabel }}'}"
validate_certs: no
power_on: no
ovf: /absolute/path/to/template/mytemplate.ova
delegate_to: localhost
'''
RETURN = r'''
instance:
description: metadata about the new virtual machine
returned: always
type: dict
sample: None
'''
import io
import os
import sys
import tarfile
import time
import traceback
import xml.etree.ElementTree as ET
from threading import Thread
from ansible.module_utils._text import to_native
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.six import string_types
from ansible.module_utils.urls import generic_urlparse, open_url, urlparse, urlunparse
from ansible.module_utils.vmware import (find_network_by_name, find_vm_by_name, PyVmomi,
gather_vm_facts, vmware_argument_spec, wait_for_task, wait_for_vm_ip)
try:
from ansible.module_utils.vmware import vim
from pyVmomi import vmodl
except ImportError:
pass
def path_exists(value):
if not isinstance(value, string_types):
value = str(value)
value = os.path.expanduser(os.path.expandvars(value))
if not os.path.exists(value):
raise ValueError('%s is not a valid path' % value)
return value
class ProgressReader(io.FileIO):
def __init__(self, name, mode='r', closefd=True):
self.bytes_read = 0
io.FileIO.__init__(self, name, mode=mode, closefd=closefd)
def read(self, size=10240):
chunk = io.FileIO.read(self, size)
self.bytes_read += len(chunk)
return chunk
class TarFileProgressReader(tarfile.ExFileObject):
def __init__(self, *args):
self.bytes_read = 0
tarfile.ExFileObject.__init__(self, *args)
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
try:
self.close()
except Exception:
pass
def read(self, size=10240):
chunk = tarfile.ExFileObject.read(self, size)
self.bytes_read += len(chunk)
return chunk
class VMDKUploader(Thread):
def __init__(self, vmdk, url, validate_certs=True, tarinfo=None, create=False):
Thread.__init__(self)
self.vmdk = vmdk
if tarinfo:
self.size = tarinfo.size
else:
self.size = os.stat(vmdk).st_size
self.url = url
self.validate_certs = validate_certs
self.tarinfo = tarinfo
self.f = None
self.e = None
self._create = create
@property
def bytes_read(self):
try:
return self.f.bytes_read
except AttributeError:
return 0
def _request_opts(self):
'''
Requests for vmdk files differ from other file types. Build the request options here to handle that.
'''
headers = {
'Content-Length': self.size,
'Content-Type': 'application/octet-stream',
}
if self._create:
# Non-VMDK
method = 'PUT'
headers['Overwrite'] = 't'
else:
# VMDK
method = 'POST'
headers['Content-Type'] = 'application/x-vnd.vmware-streamVmdk'
return {
'method': method,
'headers': headers,
}
def _open_url(self):
open_url(self.url, data=self.f, validate_certs=self.validate_certs, **self._request_opts())
def run(self):
if self.tarinfo:
try:
with TarFileProgressReader(self.vmdk, self.tarinfo) as self.f:
self._open_url()
except Exception:
self.e = sys.exc_info()
else:
try:
with ProgressReader(self.vmdk, 'rb') as self.f:
self._open_url()
except Exception:
self.e = sys.exc_info()
class VMwareDeployOvf(PyVmomi):
def __init__(self, module):
super(VMwareDeployOvf, self).__init__(module)
self.module = module
self.params = module.params
self.datastore = None
self.datacenter = None
self.resource_pool = None
self.network_mappings = []
self.ovf_descriptor = None
self.tar = None
self.lease = None
self.import_spec = None
self.entity = None
def get_objects(self):
self.datacenter = self.find_datacenter_by_name(self.params['datacenter'])
if not self.datacenter:
self.module.fail_json(msg='%(datacenter)s could not be located' % self.params)
self.datastore = None
datastore_cluster_obj = self.find_datastore_cluster_by_name(self.params['datastore'])
if datastore_cluster_obj:
datastore = None
datastore_freespace = 0
for ds in datastore_cluster_obj.childEntity:
if isinstance(ds, vim.Datastore) and ds.summary.freeSpace > datastore_freespace:
# If datastore field is provided, filter destination datastores
if ds.summary.maintenanceMode != 'normal' or not ds.summary.accessible:
continue
datastore = ds
datastore_freespace = ds.summary.freeSpace
if datastore:
self.datastore = datastore
else:
self.datastore = self.find_datastore_by_name(self.params['datastore'])
if not self.datastore:
self.module.fail_json(msg='%(datastore)s could not be located' % self.params)
if self.params['cluster']:
resource_pools = []
cluster = self.find_cluster_by_name(self.params['cluster'], datacenter_name=self.datacenter)
if cluster is None:
self.module.fail_json(msg="Unable to find cluster '%(cluster)s'" % self.params)
self.resource_pool = self.find_resource_pool_by_cluster(self.params['resource_pool'], cluster=cluster)
else:
self.resource_pool = self.find_resource_pool_by_name(self.params['resource_pool'])
if not self.resource_pool:
self.module.fail_json(msg='%(resource_pool)s could not be located' % self.params)
for key, value in self.params['networks'].items():
network = find_network_by_name(self.si, value)
if not network:
self.module.fail_json(msg='%(network)s could not be located' % self.params)
network_mapping = vim.OvfManager.NetworkMapping()
network_mapping.name = key
network_mapping.network = network
self.network_mappings.append(network_mapping)
return self.datastore, self.datacenter, self.resource_pool, self.network_mappings
def get_ovf_descriptor(self):
if tarfile.is_tarfile(self.params['ovf']):
self.tar = tarfile.open(self.params['ovf'])
ovf = None
for candidate in self.tar.getmembers():
dummy, ext = os.path.splitext(candidate.name)
if ext.lower() == '.ovf':
ovf = candidate
break
if not ovf:
self.module.fail_json(msg='Could not locate OVF file in %(ovf)s' % self.params)
self.ovf_descriptor = to_native(self.tar.extractfile(ovf).read())
else:
with open(self.params['ovf']) as f:
self.ovf_descriptor = f.read()
return self.ovf_descriptor
def get_lease(self):
datastore, datacenter, resource_pool, network_mappings = self.get_objects()
params = {
'diskProvisioning': self.params['disk_provisioning'],
}
if self.params['name']:
params['entityName'] = self.params['name']
if network_mappings:
params['networkMapping'] = network_mappings
if self.params['deployment_option']:
params['deploymentOption'] = self.params['deployment_option']
if self.params['properties']:
params['propertyMapping'] = []
for key, value in self.params['properties'].items():
property_mapping = vim.KeyValue()
property_mapping.key = key
property_mapping.value = str(value) if isinstance(value, bool) else value
params['propertyMapping'].append(property_mapping)
if self.params['folder']:
folder = self.si.searchIndex.FindByInventoryPath(self.params['folder'])
if not folder:
self.module.fail_json(msg="Unable to find the specified folder %(folder)s" % self.params)
else:
folder = datacenter.vmFolder
spec_params = vim.OvfManager.CreateImportSpecParams(**params)
ovf_descriptor = self.get_ovf_descriptor()
self.import_spec = self.si.ovfManager.CreateImportSpec(
ovf_descriptor,
resource_pool,
datastore,
spec_params
)
errors = [to_native(e.msg) for e in getattr(self.import_spec, 'error', [])]
if self.params['fail_on_spec_warnings']:
errors.extend(
(to_native(w.msg) for w in getattr(self.import_spec, 'warning', []))
)
if errors:
self.module.fail_json(
msg='Failure validating OVF import spec: %s' % '. '.join(errors)
)
for warning in getattr(self.import_spec, 'warning', []):
self.module.warn('Problem validating OVF import spec: %s' % to_native(warning.msg))
if not self.params['allow_duplicates']:
name = self.import_spec.importSpec.configSpec.name
match = find_vm_by_name(self.si, name, folder=folder)
if match:
self.module.exit_json(instance=gather_vm_facts(self.si, match), changed=False)
if self.module.check_mode:
self.module.exit_json(changed=True, instance={'hw_name': name})
try:
self.lease = resource_pool.ImportVApp(
self.import_spec.importSpec,
folder
)
except vmodl.fault.SystemError as e:
self.module.fail_json(
msg='Failed to start import: %s' % to_native(e.msg)
)
while self.lease.state != vim.HttpNfcLease.State.ready:
time.sleep(0.1)
self.entity = self.lease.info.entity
return self.lease, self.import_spec
def _normalize_url(self, url):
'''
The hostname in URLs from vmware may be ``*``; update it accordingly.
'''
url_parts = generic_urlparse(urlparse(url))
if url_parts.hostname == '*':
if url_parts.port:
url_parts.netloc = '%s:%d' % (self.params['hostname'], url_parts.port)
else:
url_parts.netloc = self.params['hostname']
return urlunparse(url_parts.as_list())
def upload(self):
if self.params['ovf'] is None:
self.module.fail_json(msg="OVF path is required for upload operation.")
ovf_dir = os.path.dirname(self.params['ovf'])
lease, import_spec = self.get_lease()
uploaders = []
for file_item in import_spec.fileItem:
device_upload_url = None
for device_url in lease.info.deviceUrl:
if file_item.deviceId == device_url.importKey:
device_upload_url = self._normalize_url(device_url.url)
break
if not device_upload_url:
lease.HttpNfcLeaseAbort(
vmodl.fault.SystemError(reason='Failed to find deviceUrl for file %s' % file_item.path)
)
self.module.fail_json(
msg='Failed to find deviceUrl for file %s' % file_item.path
)
vmdk_tarinfo = None
if self.tar:
vmdk = self.tar
try:
vmdk_tarinfo = self.tar.getmember(file_item.path)
except KeyError:
lease.HttpNfcLeaseAbort(
vmodl.fault.SystemError(reason='Failed to find VMDK file %s in OVA' % file_item.path)
)
self.module.fail_json(
msg='Failed to find VMDK file %s in OVA' % file_item.path
)
else:
vmdk = os.path.join(ovf_dir, file_item.path)
try:
path_exists(vmdk)
except ValueError:
lease.HttpNfcLeaseAbort(
vmodl.fault.SystemError(reason='Failed to find VMDK file at %s' % vmdk)
)
self.module.fail_json(
msg='Failed to find VMDK file at %s' % vmdk
)
uploaders.append(
VMDKUploader(
vmdk,
device_upload_url,
self.params['validate_certs'],
tarinfo=vmdk_tarinfo,
create=file_item.create
)
)
total_size = sum(u.size for u in uploaders)
total_bytes_read = [0] * len(uploaders)
for i, uploader in enumerate(uploaders):
uploader.start()
while uploader.is_alive():
time.sleep(0.1)
total_bytes_read[i] = uploader.bytes_read
lease.HttpNfcLeaseProgress(int(100.0 * sum(total_bytes_read) / total_size))
if uploader.e:
lease.HttpNfcLeaseAbort(
vmodl.fault.SystemError(reason='%s' % to_native(uploader.e[1]))
)
self.module.fail_json(
msg='%s' % to_native(uploader.e[1]),
exception=''.join(traceback.format_tb(uploader.e[2]))
)
def complete(self):
self.lease.HttpNfcLeaseComplete()
def inject_ovf_env(self):
attrib = {
'xmlns': 'http://schemas.dmtf.org/ovf/environment/1',
'xmlns:xsi': 'http://www.w3.org/2001/XMLSchema-instance',
'xmlns:oe': 'http://schemas.dmtf.org/ovf/environment/1',
'xmlns:ve': 'http://www.vmware.com/schema/ovfenv',
'oe:id': '',
've:esxId': self.entity._moId
}
env = ET.Element('Environment', **attrib)
platform = ET.SubElement(env, 'PlatformSection')
ET.SubElement(platform, 'Kind').text = self.si.about.name
ET.SubElement(platform, 'Version').text = self.si.about.version
ET.SubElement(platform, 'Vendor').text = self.si.about.vendor
ET.SubElement(platform, 'Locale').text = 'US'
prop_section = ET.SubElement(env, 'PropertySection')
for key, value in self.params['properties'].items():
params = {
'oe:key': key,
'oe:value': str(value) if isinstance(value, bool) else value
}
ET.SubElement(prop_section, 'Property', **params)
opt = vim.option.OptionValue()
opt.key = 'guestinfo.ovfEnv'
opt.value = '<?xml version="1.0" encoding="UTF-8"?>' + to_native(ET.tostring(env))
config_spec = vim.vm.ConfigSpec()
config_spec.extraConfig = [opt]
task = self.entity.ReconfigVM_Task(config_spec)
wait_for_task(task)
def deploy(self):
facts = {}
if self.params['inject_ovf_env']:
self.inject_ovf_env()
if self.params['power_on']:
task = self.entity.PowerOn()
if self.params['wait']:
wait_for_task(task)
if self.params['wait_for_ip_address']:
_facts = wait_for_vm_ip(self.si, self.entity)
if not _facts:
self.module.fail_json(msg='Waiting for IP address timed out')
facts.update(_facts)
if not facts:
facts.update(gather_vm_facts(self.si, self.entity))
return facts
def main():
argument_spec = vmware_argument_spec()
argument_spec.update({
'name': {},
'datastore': {
'default': 'datastore1',
},
'datacenter': {
'default': 'ha-datacenter',
},
'cluster': {
'default': None,
},
'deployment_option': {
'default': None,
},
'folder': {
'default': None,
},
'inject_ovf_env': {
'default': False,
'type': 'bool',
},
'resource_pool': {
'default': 'Resources',
},
'networks': {
'default': {
'VM Network': 'VM Network',
},
'type': 'dict',
},
'ovf': {
'type': path_exists,
'aliases': ['ova'],
},
'disk_provisioning': {
'choices': [
'flat',
'eagerZeroedThick',
'monolithicSparse',
'twoGbMaxExtentSparse',
'twoGbMaxExtentFlat',
'thin',
'sparse',
'thick',
'seSparse',
'monolithicFlat'
],
'default': 'thin',
},
'power_on': {
'type': 'bool',
'default': True,
},
'properties': {
'type': 'dict',
},
'wait': {
'type': 'bool',
'default': True,
},
'wait_for_ip_address': {
'type': 'bool',
'default': False,
},
'allow_duplicates': {
'type': 'bool',
'default': True,
},
'fail_on_spec_warnings': {
'type': 'bool',
'default': False,
},
})
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True,
)
deploy_ovf = VMwareDeployOvf(module)
deploy_ovf.upload()
deploy_ovf.complete()
facts = deploy_ovf.deploy()
module.exit_json(instance=facts, changed=True)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,415 |
Python 3.8 win_ping test timeout
|
##### SUMMARY
NOTE: This issue only occurs when using Python 3.8.
The Windows group 1 tests are failing early in the nightly CI run for all Windows versions:
https://app.shippable.com/github/ansible/ansible/runs/129532/11/console
https://app.shippable.com/github/ansible/ansible/runs/129532/12/console
https://app.shippable.com/github/ansible/ansible/runs/129532/13/console
https://app.shippable.com/github/ansible/ansible/runs/129532/14/console
https://app.shippable.com/github/ansible/ansible/runs/129532/15/console
https://app.shippable.com/github/ansible/ansible/runs/129532/16/console
All of the tests time out on executing win_ping. Here's one example:
```
09:41 Run command: ansible -m win_ping -i windows_2008, windows_2008 -e 'ansible_connection=winrm ansible_host=ec2-3-16-152-175.us-east-2.compute.amazonaws.com ansible_user=administrator ansible_password=******************* ansible_port=5986 ansible_winrm_server_cert_validation=ignore'
59:59
59:59 NOTICE: Killed command to avoid an orphaned child process during handling of an unexpected exception.
```
##### ISSUE TYPE
Bug Report
##### COMPONENT NAME
windows
##### ANSIBLE VERSION
devel
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
Shippable
##### STEPS TO REPRODUCE
Run Windows group 1 tests on Shippable with coverage enabled.
##### EXPECTED RESULTS
Tests pass.
##### ACTUAL RESULTS
Tests fail.
|
https://github.com/ansible/ansible/issues/58415
|
https://github.com/ansible/ansible/pull/59733
|
4a7e11ba9afd84e78b9ad8403afe51db372f1c5f
|
ed4a729fd6af17c4962fe41a287222b7858b889a
| 2019-06-26T18:35:24Z |
python
| 2019-07-29T20:05:29Z |
test/runner/requirements/constraints.txt
|
coverage >= 4.2, != 4.3.2 # features in 4.2+ required, avoid known bug in 4.3.2 on python 2.6
cryptography < 2.2 ; python_version < '2.7' # cryptography 2.2 drops support for python 2.6
deepdiff < 4.0.0 ; python_version < '3' # deepdiff 4.0.0 and later require python 3
urllib3 < 1.24 ; python_version < '2.7' # urllib3 1.24 and later require python 2.7 or later
pywinrm >= 0.3.0 # message encryption support
sphinx < 1.6 ; python_version < '2.7' # sphinx 1.6 and later require python 2.7 or later
sphinx < 1.8 ; python_version >= '2.7' # sphinx 1.8 and later are currently incompatible with rstcheck 3.3
pygments >= 2.4.0 # Pygments 2.4.0 includes bugfixes for YAML and YAML+Jinja lexers
wheel < 0.30.0 ; python_version < '2.7' # wheel 0.30.0 and later require python 2.7 or later
yamllint != 1.8.0, < 1.14.0 ; python_version < '2.7' # yamllint 1.8.0 and 1.14.0+ require python 2.7+
pycrypto >= 2.6 # Need features found in 2.6 and greater
ncclient >= 0.5.2 # Need features added in 0.5.2 and greater
idna < 2.6 # requests requires idna < 2.6, but cryptography will cause the latest version to be installed instead
paramiko < 2.4.0 ; python_version < '2.7' # paramiko 2.4.0 drops support for python 2.6
paramiko < 2.5.0 ; python_version >= '2.7' # paramiko 2.5.0 requires cryptography 2.5.0+
pytest < 3.3.0 ; python_version < '2.7' # pytest 3.3.0 drops support for python 2.6
pytest < 5.0.0 ; python_version == '2.7' # pytest 5.0.0 and later will no longer support python 2.7
pytest-forked < 1.0.2 ; python_version < '2.7' # pytest-forked 1.0.2 and later require python 2.7 or later
pytest-forked >= 1.0.2 ; python_version >= '2.7' # pytest-forked before 1.0.2 does not work with pytest 4.2.0+ (which requires python 2.7+)
ntlm-auth >= 1.3.0 # message encryption support using cryptography
requests < 2.20.0 ; python_version < '2.7' # requests 2.20.0 drops support for python 2.6
requests-ntlm >= 1.1.0 # message encryption support
requests-credssp >= 0.1.0 # message encryption support
voluptuous >= 0.11.0 # Schema recursion via Self
openshift >= 0.6.2 # merge_type support
virtualenv < 16.0.0 ; python_version < '2.7' # virtualenv 16.0.0 and later require python 2.7 or later
pyopenssl < 18.0.0 ; python_version < '2.7' # pyOpenSSL 18.0.0 and later require python 2.7 or later
pyfmg == 0.6.1 # newer versions do not pass current unit tests
pyyaml < 5.1 ; python_version < '2.7' # pyyaml 5.1 and later require python 2.7 or later
pycparser < 2.19 ; python_version < '2.7' # pycparser 2.19 and later require python 2.7 or later
mock >= 2.0.0 # needed for features backported from Python 3.6 unittest.mock (assert_called, assert_called_once...)
pytest-mock >= 1.4.0 # needed for mock_use_standalone_module pytest option
xmltodict < 0.12.0 ; python_version < '2.7' # xmltodict 0.12.0 and later require python 2.7 or later
lxml < 4.3.0 ; python_version < '2.7' # lxml 4.3.0 and later require python 2.7 or later
pyvmomi < 6.0.0 ; python_version < '2.7' # pyvmomi 6.0.0 and later require python 2.7 or later
pyone == 1.1.9 # newer versions do not pass current integration tests
botocore >= 1.10.0 # adds support for the following AWS services: secretsmanager, fms, and acm-pca
# freeze pylint and its requirements for consistent test results
astroid == 2.2.5
isort == 4.3.15
lazy-object-proxy == 1.3.1
mccabe == 0.6.1
pylint == 2.3.1
typed-ast == 1.4.0 # 1.4.0 is required to compile on Python 3.8
wrapt == 1.11.1
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,413 |
Python 3.8 unit test timeout with coverage
|
##### SUMMARY
The nightly CI runs with code coverage are timing out for Python 3.8 unit tests:
https://app.shippable.com/github/ansible/ansible/runs/129532/10/console
##### ISSUE TYPE
Bug Report
##### COMPONENT NAME
unit tests
##### ANSIBLE VERSION
devel
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
Shippable
##### STEPS TO REPRODUCE
Run unit tests on Python 3.8 with code coverage enabled on Shippable.
##### EXPECTED RESULTS
Tests pass.
##### ACTUAL RESULTS
Tests time out.
|
https://github.com/ansible/ansible/issues/58413
|
https://github.com/ansible/ansible/pull/59733
|
4a7e11ba9afd84e78b9ad8403afe51db372f1c5f
|
ed4a729fd6af17c4962fe41a287222b7858b889a
| 2019-06-26T18:14:24Z |
python
| 2019-07-29T20:05:29Z |
test/runner/requirements/constraints.txt
|
coverage >= 4.2, != 4.3.2 # features in 4.2+ required, avoid known bug in 4.3.2 on python 2.6
cryptography < 2.2 ; python_version < '2.7' # cryptography 2.2 drops support for python 2.6
deepdiff < 4.0.0 ; python_version < '3' # deepdiff 4.0.0 and later require python 3
urllib3 < 1.24 ; python_version < '2.7' # urllib3 1.24 and later require python 2.7 or later
pywinrm >= 0.3.0 # message encryption support
sphinx < 1.6 ; python_version < '2.7' # sphinx 1.6 and later require python 2.7 or later
sphinx < 1.8 ; python_version >= '2.7' # sphinx 1.8 and later are currently incompatible with rstcheck 3.3
pygments >= 2.4.0 # Pygments 2.4.0 includes bugfixes for YAML and YAML+Jinja lexers
wheel < 0.30.0 ; python_version < '2.7' # wheel 0.30.0 and later require python 2.7 or later
yamllint != 1.8.0, < 1.14.0 ; python_version < '2.7' # yamllint 1.8.0 and 1.14.0+ require python 2.7+
pycrypto >= 2.6 # Need features found in 2.6 and greater
ncclient >= 0.5.2 # Need features added in 0.5.2 and greater
idna < 2.6 # requests requires idna < 2.6, but cryptography will cause the latest version to be installed instead
paramiko < 2.4.0 ; python_version < '2.7' # paramiko 2.4.0 drops support for python 2.6
paramiko < 2.5.0 ; python_version >= '2.7' # paramiko 2.5.0 requires cryptography 2.5.0+
pytest < 3.3.0 ; python_version < '2.7' # pytest 3.3.0 drops support for python 2.6
pytest < 5.0.0 ; python_version == '2.7' # pytest 5.0.0 and later will no longer support python 2.7
pytest-forked < 1.0.2 ; python_version < '2.7' # pytest-forked 1.0.2 and later require python 2.7 or later
pytest-forked >= 1.0.2 ; python_version >= '2.7' # pytest-forked before 1.0.2 does not work with pytest 4.2.0+ (which requires python 2.7+)
ntlm-auth >= 1.3.0 # message encryption support using cryptography
requests < 2.20.0 ; python_version < '2.7' # requests 2.20.0 drops support for python 2.6
requests-ntlm >= 1.1.0 # message encryption support
requests-credssp >= 0.1.0 # message encryption support
voluptuous >= 0.11.0 # Schema recursion via Self
openshift >= 0.6.2 # merge_type support
virtualenv < 16.0.0 ; python_version < '2.7' # virtualenv 16.0.0 and later require python 2.7 or later
pyopenssl < 18.0.0 ; python_version < '2.7' # pyOpenSSL 18.0.0 and later require python 2.7 or later
pyfmg == 0.6.1 # newer versions do not pass current unit tests
pyyaml < 5.1 ; python_version < '2.7' # pyyaml 5.1 and later require python 2.7 or later
pycparser < 2.19 ; python_version < '2.7' # pycparser 2.19 and later require python 2.7 or later
mock >= 2.0.0 # needed for features backported from Python 3.6 unittest.mock (assert_called, assert_called_once...)
pytest-mock >= 1.4.0 # needed for mock_use_standalone_module pytest option
xmltodict < 0.12.0 ; python_version < '2.7' # xmltodict 0.12.0 and later require python 2.7 or later
lxml < 4.3.0 ; python_version < '2.7' # lxml 4.3.0 and later require python 2.7 or later
pyvmomi < 6.0.0 ; python_version < '2.7' # pyvmomi 6.0.0 and later require python 2.7 or later
pyone == 1.1.9 # newer versions do not pass current integration tests
botocore >= 1.10.0 # adds support for the following AWS services: secretsmanager, fms, and acm-pca
# freeze pylint and its requirements for consistent test results
astroid == 2.2.5
isort == 4.3.15
lazy-object-proxy == 1.3.1
mccabe == 0.6.1
pylint == 2.3.1
typed-ast == 1.4.0 # 1.4.0 is required to compile on Python 3.8
wrapt == 1.11.1
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,650 |
Issue with delegate_to and loops
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Filing on behalf of Netapp
I started running some collection tests on the latest branch and found what I believe to be a bug in the devel branch. I also wrote a program to bisect the history and isolate where the issue started; the failure was introduced by commit https://github.com/ansible/ansible/commit/b7868529ee595b07b0cee5729dd103ef8f021600. I have also attached inventory outputs and execution logs. Let me know if there is anything else you need.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
delegate_to
Delegation, Rolling Updates, and Local Actions
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
devel branch
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Netapp e series
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
Recreate playbook:
- hosts: localhost
gather_facts: no
tasks:
- name: Gather facts about remote hosts
    command: "hostname"
    delegate_to: "{{ item }}"
delegate_facts: false # Makes no difference true/false
loop:
- beegfs_metadata1 # couple of Ubuntu 18.04 lts hosts as defined in the inventory
- beegfs_storage1
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
beegfs_metadata1 will succeed but will fail on beegfs_storage1; however if the order is reversed then beegfs_storage1 will succeed and beegfs_metadata1 will fail.
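The order-dependent failure suggests delegated connection info is being resolved once per task rather than once per loop item. A minimal Python sketch of that failure mode (all names here are hypothetical, not Ansible internals) would be:

```python
# Sketch of the suspected bug: if the cache key for delegated host
# resolution ignores the loop item, the first item's resolution is
# reused for every later iteration, so only the first host "works".

def run_loop(items, resolve_host):
    cache = {}
    results = []
    for item in items:
        # Buggy: key omits the loop item, so whichever host comes
        # first in the loop wins for all subsequent iterations.
        key = "task-1"
        if key not in cache:
            cache[key] = resolve_host(item)
        results.append(cache[key])
    return results

hosts = {"beegfs_metadata1": "10.113.1.206", "beegfs_storage1": "10.113.1.207"}
print(run_loop(["beegfs_metadata1", "beegfs_storage1"], hosts.get))
```

This reproduces the reported symptom: the first delegated host succeeds, and reversing the loop order flips which host fails.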
<!--- Paste verbatim command output between quotes -->
```paste below
ansible-playbook 2.9.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/swartzn/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/swartzn/projects/ansible/ansible/lib/ansible
executable location = /home/swartzn/projects/ansible/ansible/bin/ansible-playbook
python version = 3.6.7 (default, Oct 22 2018, 11:32:17) [GCC 8.2.0]
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /home/swartzn/projects/ansible/inventory.yml as it did not pass its verify_file() method
script declined parsing /home/swartzn/projects/ansible/inventory.yml as it did not pass its verify_file() method
Parsed /home/swartzn/projects/ansible/inventory.yml inventory source with yaml plugin
Loading callback plugin default of type stdout, v2.0 from /home/swartzn/projects/ansible/ansible/lib/ansible/plugins/callback/default.py
PLAYBOOK: playbook.yml **************************************************************************************************************************************************************************************************************************************************
Positional arguments: playbook.yml
verbosity: 5
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('/home/swartzn/projects/ansible/inventory.yml',)
forks: 5
1 plays in playbook.yml
PLAY [localhost] ********************************************************************************************************************************************************************************************************************************************************
META: ran handlers
TASK [Gathers facts about remote hosts] *********************************************************************************************************************************************************************************************************************************
task path: /home/swartzn/projects/ansible/playbook.yml:4
<10.113.1.206> ESTABLISH SSH CONNECTION FOR USER: beegfs
<10.113.1.206> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<10.113.1.206> SSH: ansible_password/ansible_ssh_password not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<10.113.1.206> SSH: ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set: (-o)(User="beegfs")
<10.113.1.206> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<10.113.1.206> SSH: PlayContext set ssh_common_args: ()
<10.113.1.206> SSH: PlayContext set ssh_extra_args: ()
<10.113.1.206> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/swartzn/.ansible/cp/3cc9ebacc8)
<10.113.1.206> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="beegfs"' -o ConnectTimeout=10 -o ControlPath=/home/swartzn/.ansible/cp/3cc9ebacc8 10.113.1.206 '/bin/sh -c '"'"'echo ~beegfs && sleep 0'"'"''
<10.113.1.206> (0, b'/home/beegfs\n', b'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 28331\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<10.113.1.206> ESTABLISH SSH CONNECTION FOR USER: beegfs
<10.113.1.206> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<10.113.1.206> SSH: ansible_password/ansible_ssh_password not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<10.113.1.206> SSH: ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set: (-o)(User="beegfs")
<10.113.1.206> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<10.113.1.206> SSH: PlayContext set ssh_common_args: ()
<10.113.1.206> SSH: PlayContext set ssh_extra_args: ()
<10.113.1.206> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/swartzn/.ansible/cp/3cc9ebacc8)
<10.113.1.206> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="beegfs"' -o ConnectTimeout=10 -o ControlPath=/home/swartzn/.ansible/cp/3cc9ebacc8 10.113.1.206 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /home/beegfs/.ansible/tmp/ansible-tmp-1564077394.3568225-151955846213474 `" && echo ansible-tmp-1564077394.3568225-151955846213474="` echo /home/beegfs/.ansible/tmp/ansible-tmp-1564077394.3568225-151955846213474 `" ) && sleep 0'"'"''
<10.113.1.206> (0, b'ansible-tmp-1564077394.3568225-151955846213474=/home/beegfs/.ansible/tmp/ansible-tmp-1564077394.3568225-151955846213474\n', b'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 28331\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
Using module_utils file /home/swartzn/projects/ansible/ansible/lib/ansible/module_utils/common/__init__.py
Using module_utils file /home/swartzn/projects/ansible/ansible/lib/ansible/module_utils/common/collections.py
Using module_utils file /home/swartzn/projects/ansible/ansible/lib/ansible/module_utils/basic.py
Using module_utils file /home/swartzn/projects/ansible/ansible/lib/ansible/module_utils/_text.py
Using module_utils file /home/swartzn/projects/ansible/ansible/lib/ansible/module_utils/six/__init__.py
Using module_utils file /home/swartzn/projects/ansible/ansible/lib/ansible/module_utils/common/_collections_compat.py
Using module_utils file /home/swartzn/projects/ansible/ansible/lib/ansible/module_utils/common/file.py
Using module_utils file /home/swartzn/projects/ansible/ansible/lib/ansible/module_utils/parsing/__init__.py
Using module_utils file /home/swartzn/projects/ansible/ansible/lib/ansible/module_utils/common/process.py
Using module_utils file /home/swartzn/projects/ansible/ansible/lib/ansible/module_utils/common/parameters.py
Using module_utils file /home/swartzn/projects/ansible/ansible/lib/ansible/module_utils/common/validation.py
Using module_utils file /home/swartzn/projects/ansible/ansible/lib/ansible/module_utils/common/_utils.py
Using module_utils file /home/swartzn/projects/ansible/ansible/lib/ansible/module_utils/common/text/__init__.py
Using module_utils file /home/swartzn/projects/ansible/ansible/lib/ansible/module_utils/common/text/converters.py
Using module_utils file /home/swartzn/projects/ansible/ansible/lib/ansible/module_utils/pycompat24.py
Using module_utils file /home/swartzn/projects/ansible/ansible/lib/ansible/module_utils/parsing/convert_bool.py
Using module_utils file /home/swartzn/projects/ansible/ansible/lib/ansible/module_utils/common/sys_info.py
Using module_utils file /home/swartzn/projects/ansible/ansible/lib/ansible/module_utils/common/text/formatters.py
Using module_utils file /home/swartzn/projects/ansible/ansible/lib/ansible/module_utils/common/_json_compat.py
Using module_utils file /home/swartzn/projects/ansible/ansible/lib/ansible/module_utils/distro/__init__.py
Using module_utils file /home/swartzn/projects/ansible/ansible/lib/ansible/module_utils/distro/_distro.py
Using module file /home/swartzn/projects/ansible/ansible/lib/ansible/modules/commands/command.py
<10.113.1.206> PUT /home/swartzn/.ansible/tmp/ansible-local-283415wu2yknz/tmpmzxnh3w3 TO /home/beegfs/.ansible/tmp/ansible-tmp-1564077394.3568225-151955846213474/AnsiballZ_command.py
<10.113.1.206> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<10.113.1.206> SSH: ansible_password/ansible_ssh_password not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<10.113.1.206> SSH: ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set: (-o)(User="beegfs")
<10.113.1.206> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<10.113.1.206> SSH: PlayContext set ssh_common_args: ()
<10.113.1.206> SSH: PlayContext set sftp_extra_args: ()
<10.113.1.206> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/swartzn/.ansible/cp/3cc9ebacc8)
<10.113.1.206> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="beegfs"' -o ConnectTimeout=10 -o ControlPath=/home/swartzn/.ansible/cp/3cc9ebacc8 '[10.113.1.206]'
<10.113.1.206> (0, b'sftp> put /home/swartzn/.ansible/tmp/ansible-local-283415wu2yknz/tmpmzxnh3w3 /home/beegfs/.ansible/tmp/ansible-tmp-1564077394.3568225-151955846213474/AnsiballZ_command.py\n', b'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 28331\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug3: Sent message fd 3 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . 
-> /home/beegfs size 0\r\ndebug3: Looking up /home/swartzn/.ansible/tmp/ansible-local-283415wu2yknz/tmpmzxnh3w3\r\ndebug3: Sent message fd 3 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/home/beegfs/.ansible/tmp/ansible-tmp-1564077394.3568225-151955846213474/AnsiballZ_command.py\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 32768 bytes at 0\r\ndebug3: Sent message SSH2_FXP_WRITE I:5 O:32768 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:6 O:65536 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:7 O:98304 S:8691\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 5 32768 bytes at 32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 6 32768 bytes at 65536\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 7 8691 bytes at 98304\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<10.113.1.206> ESTABLISH SSH CONNECTION FOR USER: beegfs
<10.113.1.206> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<10.113.1.206> SSH: ansible_password/ansible_ssh_password not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<10.113.1.206> SSH: ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set: (-o)(User="beegfs")
<10.113.1.206> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<10.113.1.206> SSH: PlayContext set ssh_common_args: ()
<10.113.1.206> SSH: PlayContext set ssh_extra_args: ()
<10.113.1.206> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/swartzn/.ansible/cp/3cc9ebacc8)
<10.113.1.206> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="beegfs"' -o ConnectTimeout=10 -o ControlPath=/home/swartzn/.ansible/cp/3cc9ebacc8 10.113.1.206 '/bin/sh -c '"'"'chmod u+x /home/beegfs/.ansible/tmp/ansible-tmp-1564077394.3568225-151955846213474/ /home/beegfs/.ansible/tmp/ansible-tmp-1564077394.3568225-151955846213474/AnsiballZ_command.py && sleep 0'"'"''
<10.113.1.206> (0, b'', b'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 28331\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<10.113.1.206> ESTABLISH SSH CONNECTION FOR USER: beegfs
<10.113.1.206> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<10.113.1.206> SSH: ansible_password/ansible_ssh_password not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<10.113.1.206> SSH: ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set: (-o)(User="beegfs")
<10.113.1.206> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<10.113.1.206> SSH: PlayContext set ssh_common_args: ()
<10.113.1.206> SSH: PlayContext set ssh_extra_args: ()
<10.113.1.206> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/swartzn/.ansible/cp/3cc9ebacc8)
<10.113.1.206> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="beegfs"' -o ConnectTimeout=10 -o ControlPath=/home/swartzn/.ansible/cp/3cc9ebacc8 -tt 10.113.1.206 '/bin/sh -c '"'"'/usr/bin/python3 /home/beegfs/.ansible/tmp/ansible-tmp-1564077394.3568225-151955846213474/AnsiballZ_command.py && sleep 0'"'"''
<10.113.1.206> (0, b'\r\n{"cmd": ["hostname"], "stdout": "beegfs-metadata", "stderr": "", "rc": 0, "start": "2019-07-25 17:55:50.009195", "end": "2019-07-25 17:55:50.012072", "delta": "0:00:00.002877", "changed": true, "invocation": {"module_args": {"_raw_params": "hostname", "warn": true, "_uses_shell": false, "stdin_add_newline": true, "strip_empty_ends": true, "argv": null, "chdir": null, "executable": null, "creates": null, "removes": null, "stdin": null}}}\r\n', b'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 28331\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to 10.113.1.206 closed.\r\n')
<10.113.1.206> ESTABLISH SSH CONNECTION FOR USER: beegfs
<10.113.1.206> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<10.113.1.206> SSH: ansible_password/ansible_ssh_password not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<10.113.1.206> SSH: ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set: (-o)(User="beegfs")
<10.113.1.206> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<10.113.1.206> SSH: PlayContext set ssh_common_args: ()
<10.113.1.206> SSH: PlayContext set ssh_extra_args: ()
<10.113.1.206> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/swartzn/.ansible/cp/3cc9ebacc8)
<10.113.1.206> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="beegfs"' -o ConnectTimeout=10 -o ControlPath=/home/swartzn/.ansible/cp/3cc9ebacc8 10.113.1.206 '/bin/sh -c '"'"'rm -f -r /home/beegfs/.ansible/tmp/ansible-tmp-1564077394.3568225-151955846213474/ > /dev/null 2>&1 && sleep 0'"'"''
<10.113.1.206> (0, b'', b'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 28331\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
changed: [localhost -> 10.113.1.206] => (item=beegfs_metadata1) => {
"ansible_loop_var": "item",
"changed": true,
"cmd": [
"hostname"
],
"delta": "0:00:00.002877",
"end": "2019-07-25 17:55:50.012072",
"invocation": {
"module_args": {
"_raw_params": "hostname",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true,
"warn": true
}
},
"item": "beegfs_metadata1",
"rc": 0,
"start": "2019-07-25 17:55:50.009195",
"stderr": "",
"stderr_lines": [],
"stdout": "beegfs-metadata",
"stdout_lines": [
"beegfs-metadata"
]
}
<beegfs_storage1> ESTABLISH SSH CONNECTION FOR USER: None
<beegfs_storage1> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<beegfs_storage1> SSH: ansible_password/ansible_ssh_password not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<beegfs_storage1> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<beegfs_storage1> SSH: PlayContext set ssh_common_args: ()
<beegfs_storage1> SSH: PlayContext set ssh_extra_args: ()
<beegfs_storage1> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/swartzn/.ansible/cp/a9c837a4c1)
<beegfs_storage1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/swartzn/.ansible/cp/a9c837a4c1 beegfs_storage1 '/bin/sh -c '"'"'echo ~ && sleep 0'"'"''
<beegfs_storage1> (255, b'', b'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket "/home/swartzn/.ansible/cp/a9c837a4c1" does not exist\r\ndebug2: resolving "beegfs_storage1" port 22\r\nssh: Could not resolve hostname beegfs_storage1: Name or service not known\r\n')
failed: [localhost] (item=beegfs_storage1) => {
"ansible_loop_var": "item",
"item": "beegfs_storage1",
"msg": "Failed to connect to the host via ssh: OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket \"/home/swartzn/.ansible/cp/a9c837a4c1\" does not exist\r\ndebug2: resolving \"beegfs_storage1\" port 22\r\nssh: Could not resolve hostname beegfs_storage1: Name or service not known",
"unreachable": true
}
fatal: [localhost]: UNREACHABLE! => {
"changed": true,
"msg": "All items completed",
"results": [
{
"ansible_loop_var": "item",
"changed": true,
"cmd": [
"hostname"
],
"delta": "0:00:00.002877",
"end": "2019-07-25 17:55:50.012072",
"failed": false,
"invocation": {
"module_args": {
"_raw_params": "hostname",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true,
"warn": true
}
},
"item": "beegfs_metadata1",
"rc": 0,
"start": "2019-07-25 17:55:50.009195",
"stderr": "",
"stderr_lines": [],
"stdout": "beegfs-metadata",
"stdout_lines": [
"beegfs-metadata"
]
},
{
"ansible_loop_var": "item",
"item": "beegfs_storage1",
"msg": "Failed to connect to the host via ssh: OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket \"/home/swartzn/.ansible/cp/a9c837a4c1\" does not exist\r\ndebug2: resolving \"beegfs_storage1\" port 22\r\nssh: Could not resolve hostname beegfs_storage1: Name or service not known",
"unreachable": true
}
]
}
PLAY RECAP **************************************************************************************************************************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
[beegfs_metadata1.txt](https://github.com/ansible/ansible/files/3436503/beegfs_metadata1.txt)
[beegfs_storage1.txt](https://github.com/ansible/ansible/files/3436505/beegfs_storage1.txt)
```
|
https://github.com/ansible/ansible/issues/59650
|
https://github.com/ansible/ansible/pull/59659
|
127bd67f6e927321dafbd11a492ca8756cf8d89f
|
fd899956b427be38f425a805f0ab653af7fd3300
| 2019-07-26T15:55:29Z |
python
| 2019-07-30T07:46:29Z |
changelogs/fragments/59650-correctly-handler-delegate_to_hostname-loops.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,650 |
Issue with delegate_to and loops
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Filing on behalf of Netapp
I started running some collection tests on the latest branch and found what I believe to be a bug in the devel branch. I also wrote a program to bisect the history and isolate where the issue started; the failure was introduced by commit https://github.com/ansible/ansible/commit/b7868529ee595b07b0cee5729dd103ef8f021600. I have also attached inventory outputs and execution logs. Let me know if there is anything else you need.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
delegate_to
Delegation, Rolling Updates, and Local Actions
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
devel branch
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Netapp e series
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
Recreate playbook:
- hosts: localhost
gather_facts: no
tasks:
- name: Gather facts about remote hosts
    command: "hostname"
    delegate_to: "{{ item }}"
delegate_facts: false # Makes no difference true/false
loop:
- beegfs_metadata1 # couple of Ubuntu 18.04 lts hosts as defined in the inventory
- beegfs_storage1
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The task succeeds for beegfs_metadata1 but fails for beegfs_storage1; if the loop order is reversed, beegfs_storage1 succeeds and beegfs_metadata1 fails instead.
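The log below is consistent with the delegated-host variables only being resolved for the first loop item: the second item connects to the raw inventory name (`<beegfs_storage1> ESTABLISH SSH CONNECTION FOR USER: None`) instead of its `ansible_host`. A simplified sketch of that failure mode, using hypothetical names rather than Ansible's actual internals:

```python
# Hypothetical inventory: each host resolves to an IP via ansible_host.
inventory = {
    "beegfs_metadata1": {"ansible_host": "10.113.1.206", "ansible_user": "beegfs"},
    "beegfs_storage1": {"ansible_host": "10.113.1.207", "ansible_user": "beegfs"},
}


def connect_target(item, delegated_vars):
    """Return the address to SSH to for this loop item.

    Without refreshed delegated vars, we fall back to the raw inventory
    name, which DNS cannot resolve.
    """
    host_vars = delegated_vars.get(item, {})
    return host_vars.get("ansible_host", item)


# Correct behaviour: rebuild the delegated vars for every loop item.
targets_ok = [connect_target(i, {i: inventory[i]}) for i in inventory]

# Buggy behaviour: delegated vars computed once, for the first item only,
# so the second item connects to "beegfs_storage1" literally.
stale = {"beegfs_metadata1": inventory["beegfs_metadata1"]}
targets_bad = [connect_target(i, stale) for i in inventory]
print(targets_ok, targets_bad)
```

This also explains why reversing the loop order flips which host fails: whichever item comes first gets its vars resolved.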
<!--- Paste verbatim command output between quotes -->
```paste below
ansible-playbook 2.9.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/swartzn/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/swartzn/projects/ansible/ansible/lib/ansible
executable location = /home/swartzn/projects/ansible/ansible/bin/ansible-playbook
python version = 3.6.7 (default, Oct 22 2018, 11:32:17) [GCC 8.2.0]
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /home/swartzn/projects/ansible/inventory.yml as it did not pass its verify_file() method
script declined parsing /home/swartzn/projects/ansible/inventory.yml as it did not pass its verify_file() method
Parsed /home/swartzn/projects/ansible/inventory.yml inventory source with yaml plugin
Loading callback plugin default of type stdout, v2.0 from /home/swartzn/projects/ansible/ansible/lib/ansible/plugins/callback/default.py
PLAYBOOK: playbook.yml **************************************************************************************************************************************************************************************************************************************************
Positional arguments: playbook.yml
verbosity: 5
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('/home/swartzn/projects/ansible/inventory.yml',)
forks: 5
1 plays in playbook.yml
PLAY [localhost] ********************************************************************************************************************************************************************************************************************************************************
META: ran handlers
TASK [Gathers facts about remote hosts] *********************************************************************************************************************************************************************************************************************************
task path: /home/swartzn/projects/ansible/playbook.yml:4
<10.113.1.206> ESTABLISH SSH CONNECTION FOR USER: beegfs
<10.113.1.206> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<10.113.1.206> SSH: ansible_password/ansible_ssh_password not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<10.113.1.206> SSH: ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set: (-o)(User="beegfs")
<10.113.1.206> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<10.113.1.206> SSH: PlayContext set ssh_common_args: ()
<10.113.1.206> SSH: PlayContext set ssh_extra_args: ()
<10.113.1.206> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/swartzn/.ansible/cp/3cc9ebacc8)
<10.113.1.206> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="beegfs"' -o ConnectTimeout=10 -o ControlPath=/home/swartzn/.ansible/cp/3cc9ebacc8 10.113.1.206 '/bin/sh -c '"'"'echo ~beegfs && sleep 0'"'"''
<10.113.1.206> (0, b'/home/beegfs\n', b'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 28331\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<10.113.1.206> ESTABLISH SSH CONNECTION FOR USER: beegfs
<10.113.1.206> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<10.113.1.206> SSH: ansible_password/ansible_ssh_password not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<10.113.1.206> SSH: ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set: (-o)(User="beegfs")
<10.113.1.206> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<10.113.1.206> SSH: PlayContext set ssh_common_args: ()
<10.113.1.206> SSH: PlayContext set ssh_extra_args: ()
<10.113.1.206> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/swartzn/.ansible/cp/3cc9ebacc8)
<10.113.1.206> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="beegfs"' -o ConnectTimeout=10 -o ControlPath=/home/swartzn/.ansible/cp/3cc9ebacc8 10.113.1.206 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /home/beegfs/.ansible/tmp/ansible-tmp-1564077394.3568225-151955846213474 `" && echo ansible-tmp-1564077394.3568225-151955846213474="` echo /home/beegfs/.ansible/tmp/ansible-tmp-1564077394.3568225-151955846213474 `" ) && sleep 0'"'"''
<10.113.1.206> (0, b'ansible-tmp-1564077394.3568225-151955846213474=/home/beegfs/.ansible/tmp/ansible-tmp-1564077394.3568225-151955846213474\n', b'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 28331\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
Using module_utils file /home/swartzn/projects/ansible/ansible/lib/ansible/module_utils/common/__init__.py
Using module_utils file /home/swartzn/projects/ansible/ansible/lib/ansible/module_utils/common/collections.py
Using module_utils file /home/swartzn/projects/ansible/ansible/lib/ansible/module_utils/basic.py
Using module_utils file /home/swartzn/projects/ansible/ansible/lib/ansible/module_utils/_text.py
Using module_utils file /home/swartzn/projects/ansible/ansible/lib/ansible/module_utils/six/__init__.py
Using module_utils file /home/swartzn/projects/ansible/ansible/lib/ansible/module_utils/common/_collections_compat.py
Using module_utils file /home/swartzn/projects/ansible/ansible/lib/ansible/module_utils/common/file.py
Using module_utils file /home/swartzn/projects/ansible/ansible/lib/ansible/module_utils/parsing/__init__.py
Using module_utils file /home/swartzn/projects/ansible/ansible/lib/ansible/module_utils/common/process.py
Using module_utils file /home/swartzn/projects/ansible/ansible/lib/ansible/module_utils/common/parameters.py
Using module_utils file /home/swartzn/projects/ansible/ansible/lib/ansible/module_utils/common/validation.py
Using module_utils file /home/swartzn/projects/ansible/ansible/lib/ansible/module_utils/common/_utils.py
Using module_utils file /home/swartzn/projects/ansible/ansible/lib/ansible/module_utils/common/text/__init__.py
Using module_utils file /home/swartzn/projects/ansible/ansible/lib/ansible/module_utils/common/text/converters.py
Using module_utils file /home/swartzn/projects/ansible/ansible/lib/ansible/module_utils/pycompat24.py
Using module_utils file /home/swartzn/projects/ansible/ansible/lib/ansible/module_utils/parsing/convert_bool.py
Using module_utils file /home/swartzn/projects/ansible/ansible/lib/ansible/module_utils/common/sys_info.py
Using module_utils file /home/swartzn/projects/ansible/ansible/lib/ansible/module_utils/common/text/formatters.py
Using module_utils file /home/swartzn/projects/ansible/ansible/lib/ansible/module_utils/common/_json_compat.py
Using module_utils file /home/swartzn/projects/ansible/ansible/lib/ansible/module_utils/distro/__init__.py
Using module_utils file /home/swartzn/projects/ansible/ansible/lib/ansible/module_utils/distro/_distro.py
Using module file /home/swartzn/projects/ansible/ansible/lib/ansible/modules/commands/command.py
<10.113.1.206> PUT /home/swartzn/.ansible/tmp/ansible-local-283415wu2yknz/tmpmzxnh3w3 TO /home/beegfs/.ansible/tmp/ansible-tmp-1564077394.3568225-151955846213474/AnsiballZ_command.py
<10.113.1.206> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<10.113.1.206> SSH: ansible_password/ansible_ssh_password not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<10.113.1.206> SSH: ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set: (-o)(User="beegfs")
<10.113.1.206> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<10.113.1.206> SSH: PlayContext set ssh_common_args: ()
<10.113.1.206> SSH: PlayContext set sftp_extra_args: ()
<10.113.1.206> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/swartzn/.ansible/cp/3cc9ebacc8)
<10.113.1.206> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="beegfs"' -o ConnectTimeout=10 -o ControlPath=/home/swartzn/.ansible/cp/3cc9ebacc8 '[10.113.1.206]'
<10.113.1.206> (0, b'sftp> put /home/swartzn/.ansible/tmp/ansible-local-283415wu2yknz/tmpmzxnh3w3 /home/beegfs/.ansible/tmp/ansible-tmp-1564077394.3568225-151955846213474/AnsiballZ_command.py\n', b'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 28331\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug3: Sent message fd 3 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . 
-> /home/beegfs size 0\r\ndebug3: Looking up /home/swartzn/.ansible/tmp/ansible-local-283415wu2yknz/tmpmzxnh3w3\r\ndebug3: Sent message fd 3 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/home/beegfs/.ansible/tmp/ansible-tmp-1564077394.3568225-151955846213474/AnsiballZ_command.py\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 32768 bytes at 0\r\ndebug3: Sent message SSH2_FXP_WRITE I:5 O:32768 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:6 O:65536 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:7 O:98304 S:8691\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 5 32768 bytes at 32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 6 32768 bytes at 65536\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 7 8691 bytes at 98304\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<10.113.1.206> ESTABLISH SSH CONNECTION FOR USER: beegfs
<10.113.1.206> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<10.113.1.206> SSH: ansible_password/ansible_ssh_password not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<10.113.1.206> SSH: ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set: (-o)(User="beegfs")
<10.113.1.206> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<10.113.1.206> SSH: PlayContext set ssh_common_args: ()
<10.113.1.206> SSH: PlayContext set ssh_extra_args: ()
<10.113.1.206> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/swartzn/.ansible/cp/3cc9ebacc8)
<10.113.1.206> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="beegfs"' -o ConnectTimeout=10 -o ControlPath=/home/swartzn/.ansible/cp/3cc9ebacc8 10.113.1.206 '/bin/sh -c '"'"'chmod u+x /home/beegfs/.ansible/tmp/ansible-tmp-1564077394.3568225-151955846213474/ /home/beegfs/.ansible/tmp/ansible-tmp-1564077394.3568225-151955846213474/AnsiballZ_command.py && sleep 0'"'"''
<10.113.1.206> (0, b'', b'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 28331\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<10.113.1.206> ESTABLISH SSH CONNECTION FOR USER: beegfs
<10.113.1.206> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<10.113.1.206> SSH: ansible_password/ansible_ssh_password not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<10.113.1.206> SSH: ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set: (-o)(User="beegfs")
<10.113.1.206> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<10.113.1.206> SSH: PlayContext set ssh_common_args: ()
<10.113.1.206> SSH: PlayContext set ssh_extra_args: ()
<10.113.1.206> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/swartzn/.ansible/cp/3cc9ebacc8)
<10.113.1.206> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="beegfs"' -o ConnectTimeout=10 -o ControlPath=/home/swartzn/.ansible/cp/3cc9ebacc8 -tt 10.113.1.206 '/bin/sh -c '"'"'/usr/bin/python3 /home/beegfs/.ansible/tmp/ansible-tmp-1564077394.3568225-151955846213474/AnsiballZ_command.py && sleep 0'"'"''
<10.113.1.206> (0, b'\r\n{"cmd": ["hostname"], "stdout": "beegfs-metadata", "stderr": "", "rc": 0, "start": "2019-07-25 17:55:50.009195", "end": "2019-07-25 17:55:50.012072", "delta": "0:00:00.002877", "changed": true, "invocation": {"module_args": {"_raw_params": "hostname", "warn": true, "_uses_shell": false, "stdin_add_newline": true, "strip_empty_ends": true, "argv": null, "chdir": null, "executable": null, "creates": null, "removes": null, "stdin": null}}}\r\n', b'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 28331\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to 10.113.1.206 closed.\r\n')
<10.113.1.206> ESTABLISH SSH CONNECTION FOR USER: beegfs
<10.113.1.206> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<10.113.1.206> SSH: ansible_password/ansible_ssh_password not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<10.113.1.206> SSH: ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set: (-o)(User="beegfs")
<10.113.1.206> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<10.113.1.206> SSH: PlayContext set ssh_common_args: ()
<10.113.1.206> SSH: PlayContext set ssh_extra_args: ()
<10.113.1.206> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/swartzn/.ansible/cp/3cc9ebacc8)
<10.113.1.206> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="beegfs"' -o ConnectTimeout=10 -o ControlPath=/home/swartzn/.ansible/cp/3cc9ebacc8 10.113.1.206 '/bin/sh -c '"'"'rm -f -r /home/beegfs/.ansible/tmp/ansible-tmp-1564077394.3568225-151955846213474/ > /dev/null 2>&1 && sleep 0'"'"''
<10.113.1.206> (0, b'', b'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 28331\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
changed: [localhost -> 10.113.1.206] => (item=beegfs_metadata1) => {
"ansible_loop_var": "item",
"changed": true,
"cmd": [
"hostname"
],
"delta": "0:00:00.002877",
"end": "2019-07-25 17:55:50.012072",
"invocation": {
"module_args": {
"_raw_params": "hostname",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true,
"warn": true
}
},
"item": "beegfs_metadata1",
"rc": 0,
"start": "2019-07-25 17:55:50.009195",
"stderr": "",
"stderr_lines": [],
"stdout": "beegfs-metadata",
"stdout_lines": [
"beegfs-metadata"
]
}
<beegfs_storage1> ESTABLISH SSH CONNECTION FOR USER: None
<beegfs_storage1> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<beegfs_storage1> SSH: ansible_password/ansible_ssh_password not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<beegfs_storage1> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<beegfs_storage1> SSH: PlayContext set ssh_common_args: ()
<beegfs_storage1> SSH: PlayContext set ssh_extra_args: ()
<beegfs_storage1> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/swartzn/.ansible/cp/a9c837a4c1)
<beegfs_storage1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/swartzn/.ansible/cp/a9c837a4c1 beegfs_storage1 '/bin/sh -c '"'"'echo ~ && sleep 0'"'"''
<beegfs_storage1> (255, b'', b'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket "/home/swartzn/.ansible/cp/a9c837a4c1" does not exist\r\ndebug2: resolving "beegfs_storage1" port 22\r\nssh: Could not resolve hostname beegfs_storage1: Name or service not known\r\n')
failed: [localhost] (item=beegfs_storage1) => {
"ansible_loop_var": "item",
"item": "beegfs_storage1",
"msg": "Failed to connect to the host via ssh: OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket \"/home/swartzn/.ansible/cp/a9c837a4c1\" does not exist\r\ndebug2: resolving \"beegfs_storage1\" port 22\r\nssh: Could not resolve hostname beegfs_storage1: Name or service not known",
"unreachable": true
}
fatal: [localhost]: UNREACHABLE! => {
"changed": true,
"msg": "All items completed",
"results": [
{
"ansible_loop_var": "item",
"changed": true,
"cmd": [
"hostname"
],
"delta": "0:00:00.002877",
"end": "2019-07-25 17:55:50.012072",
"failed": false,
"invocation": {
"module_args": {
"_raw_params": "hostname",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true,
"warn": true
}
},
"item": "beegfs_metadata1",
"rc": 0,
"start": "2019-07-25 17:55:50.009195",
"stderr": "",
"stderr_lines": [],
"stdout": "beegfs-metadata",
"stdout_lines": [
"beegfs-metadata"
]
},
{
"ansible_loop_var": "item",
"item": "beegfs_storage1",
"msg": "Failed to connect to the host via ssh: OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket \"/home/swartzn/.ansible/cp/a9c837a4c1\" does not exist\r\ndebug2: resolving \"beegfs_storage1\" port 22\r\nssh: Could not resolve hostname beegfs_storage1: Name or service not known",
"unreachable": true
}
]
}
PLAY RECAP **************************************************************************************************************************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
[beegfs_metadata1.txt](https://github.com/ansible/ansible/files/3436503/beegfs_metadata1.txt)
[beegfs_storage1.txt](https://github.com/ansible/ansible/files/3436505/beegfs_storage1.txt)
```
|
https://github.com/ansible/ansible/issues/59650
|
https://github.com/ansible/ansible/pull/59659
|
127bd67f6e927321dafbd11a492ca8756cf8d89f
|
fd899956b427be38f425a805f0ab653af7fd3300
| 2019-07-26T15:55:29Z |
python
| 2019-07-30T07:46:29Z |
lib/ansible/vars/manager.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import sys
from collections import defaultdict
try:
from hashlib import sha1
except ImportError:
from sha import sha as sha1
from jinja2.exceptions import UndefinedError
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleParserError, AnsibleUndefinedVariable, AnsibleFileNotFound, AnsibleAssertionError, AnsibleTemplateError
from ansible.inventory.host import Host
from ansible.inventory.helpers import sort_groups, get_group_vars
from ansible.module_utils._text import to_bytes, to_text
from ansible.module_utils.common._collections_compat import Mapping, MutableMapping, Sequence
from ansible.module_utils.six import iteritems, text_type, string_types
from ansible.plugins.loader import lookup_loader, vars_loader
from ansible.vars.fact_cache import FactCache
from ansible.template import Templar
from ansible.utils.display import Display
from ansible.utils.listify import listify_lookup_plugin_terms
from ansible.utils.vars import combine_vars, load_extra_vars, load_options_vars
from ansible.utils.unsafe_proxy import wrap_var
from ansible.vars.clean import namespace_facts, clean_facts
display = Display()
def preprocess_vars(a):
'''
Ensures that vars contained in the parameter passed in are
returned as a list of dictionaries, to ensure for instance
that vars loaded from a file conform to an expected state.
'''
if a is None:
return None
elif not isinstance(a, list):
data = [a]
else:
data = a
for item in data:
if not isinstance(item, MutableMapping):
raise AnsibleError("variable files must contain either a dictionary of variables, or a list of dictionaries. Got: %s (%s)" % (a, type(a)))
return data
class VariableManager:
_ALLOWED = frozenset(['plugins_by_group', 'groups_plugins_play', 'groups_plugins_inventory', 'groups_inventory',
'all_plugins_play', 'all_plugins_inventory', 'all_inventory'])
def __init__(self, loader=None, inventory=None, version_info=None):
self._nonpersistent_fact_cache = defaultdict(dict)
self._vars_cache = defaultdict(dict)
self._extra_vars = defaultdict(dict)
self._host_vars_files = defaultdict(dict)
self._group_vars_files = defaultdict(dict)
self._inventory = inventory
self._loader = loader
self._hostvars = None
self._omit_token = '__omit_place_holder__%s' % sha1(os.urandom(64)).hexdigest()
self._options_vars = load_options_vars(version_info)
# If the basedir is specified as the empty string then it results in cwd being used.
# This is not a safe location to load vars from.
basedir = self._options_vars.get('basedir', False)
self.safe_basedir = bool(basedir is False or basedir)
# load extra vars
self._extra_vars = load_extra_vars(loader=self._loader)
# load fact cache
try:
self._fact_cache = FactCache()
except AnsibleError as e:
# bad cache plugin is not fatal error
# fallback to a dict as in memory cache
display.warning(to_text(e))
self._fact_cache = {}
def __getstate__(self):
data = dict(
fact_cache=self._fact_cache,
np_fact_cache=self._nonpersistent_fact_cache,
vars_cache=self._vars_cache,
extra_vars=self._extra_vars,
host_vars_files=self._host_vars_files,
group_vars_files=self._group_vars_files,
omit_token=self._omit_token,
options_vars=self._options_vars,
inventory=self._inventory,
safe_basedir=self.safe_basedir,
)
return data
def __setstate__(self, data):
self._fact_cache = data.get('fact_cache', defaultdict(dict))
self._nonpersistent_fact_cache = data.get('np_fact_cache', defaultdict(dict))
self._vars_cache = data.get('vars_cache', defaultdict(dict))
self._extra_vars = data.get('extra_vars', dict())
self._host_vars_files = data.get('host_vars_files', defaultdict(dict))
self._group_vars_files = data.get('group_vars_files', defaultdict(dict))
self._omit_token = data.get('omit_token', '__omit_place_holder__%s' % sha1(os.urandom(64)).hexdigest())
self._inventory = data.get('inventory', None)
self._options_vars = data.get('options_vars', dict())
self.safe_basedir = data.get('safe_basedir', False)
@property
def extra_vars(self):
return self._extra_vars
def set_inventory(self, inventory):
self._inventory = inventory
def get_vars(self, play=None, host=None, task=None, include_hostvars=True, include_delegate_to=True, use_cache=True,
_hosts=None, _hosts_all=None):
'''
Returns the variables, with optional "context" given via the parameters
for the play, host, and task (which could possibly result in different
sets of variables being returned due to the additional context).
The order of precedence is:
- play->roles->get_default_vars (if there is a play context)
- group_vars_files[host] (if there is a host context)
- host_vars_files[host] (if there is a host context)
- host->get_vars (if there is a host context)
- fact_cache[host] (if there is a host context)
- play vars (if there is a play context)
- play vars_files (if there's no host context, ignore
file names that cannot be templated)
- task->get_vars (if there is a task context)
- vars_cache[host] (if there is a host context)
- extra vars
``_hosts`` and ``_hosts_all`` should be considered private args, with only internal trusted callers relying
on the functionality they provide. These arguments may be removed at a later date without a deprecation
period and without warning.
'''
display.debug("in VariableManager get_vars()")
all_vars = dict()
magic_variables = self._get_magic_variables(
play=play,
host=host,
task=task,
include_hostvars=include_hostvars,
include_delegate_to=include_delegate_to,
_hosts=_hosts,
_hosts_all=_hosts_all,
)
# default for all cases
basedirs = []
if self.safe_basedir: # avoid adhoc/console loading cwd
basedirs = [self._loader.get_basedir()]
if play:
# first we compile any vars specified in defaults/main.yml
# for all roles within the specified play
for role in play.get_roles():
all_vars = combine_vars(all_vars, role.get_default_vars())
if task:
# set basedirs
if C.PLAYBOOK_VARS_ROOT == 'all': # should be default
basedirs = task.get_search_path()
elif C.PLAYBOOK_VARS_ROOT in ('bottom', 'playbook_dir'): # only option in 2.4.0
basedirs = [task.get_search_path()[0]]
elif C.PLAYBOOK_VARS_ROOT != 'top':
# preserves default basedirs, only option pre 2.3
raise AnsibleError('Unknown playbook vars logic: %s' % C.PLAYBOOK_VARS_ROOT)
# if we have a task in this context, and that task has a role, make
# sure it sees its defaults above any other roles, as we previously
# (v1) made sure each task had a copy of its roles default vars
if task._role is not None and (play or task.action == 'include_role'):
all_vars = combine_vars(all_vars, task._role.get_default_vars(dep_chain=task.get_dep_chain()))
if host:
# THE 'all' group and the rest of groups for a host, used below
all_group = self._inventory.groups.get('all')
host_groups = sort_groups([g for g in host.get_groups() if g.name not in ['all']])
def _get_plugin_vars(plugin, path, entities):
data = {}
try:
data = plugin.get_vars(self._loader, path, entities)
except AttributeError:
try:
for entity in entities:
if isinstance(entity, Host):
data.update(plugin.get_host_vars(entity.name))
else:
data.update(plugin.get_group_vars(entity.name))
except AttributeError:
if hasattr(plugin, 'run'):
raise AnsibleError("Cannot use v1 type vars plugin %s from %s" % (plugin._load_name, plugin._original_path))
else:
raise AnsibleError("Invalid vars plugin %s from %s" % (plugin._load_name, plugin._original_path))
return data
# internal functions that actually do the work
def _plugins_inventory(entities):
''' merges all entities by inventory source '''
data = {}
for inventory_dir in self._inventory._sources:
if ',' in inventory_dir and not os.path.exists(inventory_dir): # skip host lists
continue
elif not os.path.isdir(to_bytes(inventory_dir)): # always pass 'inventory directory'
inventory_dir = os.path.dirname(inventory_dir)
for plugin in vars_loader.all():
data = combine_vars(data, _get_plugin_vars(plugin, inventory_dir, entities))
return data
def _plugins_play(entities):
''' merges all entities adjacent to play '''
data = {}
for plugin in vars_loader.all():
for path in basedirs:
data = combine_vars(data, _get_plugin_vars(plugin, path, entities))
return data
# configurable functions that are sortable via config; remember to add to _ALLOWED if expanding this list
def all_inventory():
return all_group.get_vars()
def all_plugins_inventory():
return _plugins_inventory([all_group])
def all_plugins_play():
return _plugins_play([all_group])
def groups_inventory():
''' gets group vars from inventory '''
return get_group_vars(host_groups)
def groups_plugins_inventory():
''' gets plugin sources from inventory for groups '''
return _plugins_inventory(host_groups)
def groups_plugins_play():
''' gets plugin sources from play for groups '''
return _plugins_play(host_groups)
def plugins_by_groups():
    '''
    merges all plugin sources by group;
    use this INSTEAD OF, not in combination with, the other groups_plugins* functions
    '''
    data = {}
    for group in host_groups:
        # use .get() so the first merge for a group starts from an empty dict
        data[group] = combine_vars(data.get(group, {}), _plugins_inventory(group))
        data[group] = combine_vars(data[group], _plugins_play(group))
    return data
# Merge groups as per precedence config
# only allow to call the functions we want exposed
for entry in C.VARIABLE_PRECEDENCE:
if entry in self._ALLOWED:
display.debug('Calling %s to load vars for %s' % (entry, host.name))
all_vars = combine_vars(all_vars, locals()[entry]())
else:
display.warning('Ignoring unknown variable precedence entry: %s' % (entry))
# host vars, from inventory, inventory adjacent and play adjacent via plugins
all_vars = combine_vars(all_vars, host.get_vars())
all_vars = combine_vars(all_vars, _plugins_inventory([host]))
all_vars = combine_vars(all_vars, _plugins_play([host]))
# finally, the facts caches for this host, if it exists
# TODO: cleaning of facts should eventually become part of taskresults instead of vars
try:
facts = wrap_var(self._fact_cache.get(host.name, {}))
all_vars.update(namespace_facts(facts))
# push facts to main namespace
if C.INJECT_FACTS_AS_VARS:
all_vars = combine_vars(all_vars, wrap_var(clean_facts(facts)))
else:
# always 'promote' ansible_local
all_vars = combine_vars(all_vars, wrap_var({'ansible_local': facts.get('ansible_local', {})}))
except KeyError:
pass
if play:
all_vars = combine_vars(all_vars, play.get_vars())
vars_files = play.get_vars_files()
try:
for vars_file_item in vars_files:
# create a set of temporary vars here, which incorporate the extra
# and magic vars so we can properly template the vars_files entries
temp_vars = combine_vars(all_vars, self._extra_vars)
temp_vars = combine_vars(temp_vars, magic_variables)
templar = Templar(loader=self._loader, variables=temp_vars)
# we assume each item in the list is itself a list, as we
# support "conditional includes" for vars_files, which mimics
# the with_first_found mechanism.
vars_file_list = vars_file_item
if not isinstance(vars_file_list, list):
vars_file_list = [vars_file_list]
# now we iterate through the (potential) files, and break out
# as soon as we read one from the list. If none are found, we
# raise an error, which is silently ignored at this point.
try:
for vars_file in vars_file_list:
vars_file = templar.template(vars_file)
if not (isinstance(vars_file, Sequence)):
raise AnsibleError(
"Invalid vars_files entry found: %r\n"
"vars_files entries should be either a string type or "
"a list of string types after template expansion" % vars_file
)
try:
data = preprocess_vars(self._loader.load_from_file(vars_file, unsafe=True))
if data is not None:
for item in data:
all_vars = combine_vars(all_vars, item)
break
except AnsibleFileNotFound:
# we continue on loader failures
continue
except AnsibleParserError:
raise
else:
# if include_delegate_to is set to False, we ignore the missing
# vars file here because we're working on a delegated host
if include_delegate_to:
raise AnsibleFileNotFound("vars file %s was not found" % vars_file_item)
except (UndefinedError, AnsibleUndefinedVariable):
if host is not None and self._fact_cache.get(host.name, dict()).get('module_setup') and task is not None:
raise AnsibleUndefinedVariable("an undefined variable was found when attempting to template the vars_files item '%s'"
% vars_file_item, obj=vars_file_item)
else:
# we do not have a full context here, and the missing variable could be because of that
# so just show a warning and continue
display.vvv("skipping vars_file '%s' due to an undefined variable" % vars_file_item)
continue
display.vvv("Read vars_file '%s'" % vars_file_item)
except TypeError:
raise AnsibleParserError("Error while reading vars files - please supply a list of file names. "
"Got '%s' of type %s" % (vars_files, type(vars_files)))
# By default, we now merge in all vars from all roles in the play,
# unless the user has disabled this via a config option
if not C.DEFAULT_PRIVATE_ROLE_VARS:
for role in play.get_roles():
all_vars = combine_vars(all_vars, role.get_vars(include_params=False))
# next, we merge in the vars from the role, which will specifically
# follow the role dependency chain, and then we merge in the tasks
# vars (which will look at parent blocks/task includes)
if task:
if task._role:
all_vars = combine_vars(all_vars, task._role.get_vars(task.get_dep_chain(), include_params=False))
all_vars = combine_vars(all_vars, task.get_vars())
# next, we merge in the vars cache (include vars) and nonpersistent
# facts cache (set_fact/register), in that order
if host:
# include_vars non-persistent cache
all_vars = combine_vars(all_vars, self._vars_cache.get(host.get_name(), dict()))
# fact non-persistent cache
all_vars = combine_vars(all_vars, self._nonpersistent_fact_cache.get(host.name, dict()))
# next, we merge in role params and task include params
if task:
if task._role:
all_vars = combine_vars(all_vars, task._role.get_role_params(task.get_dep_chain()))
# special case for include tasks, where the include params
# may be specified in the vars field for the task, which should
# have higher precedence than the vars/np facts above
all_vars = combine_vars(all_vars, task.get_include_params())
# extra vars
all_vars = combine_vars(all_vars, self._extra_vars)
# magic variables
all_vars = combine_vars(all_vars, magic_variables)
# special case for the 'environment' magic variable, as someone
# may have set it as a variable and we don't want to stomp on it
if task:
all_vars['environment'] = task.environment
# if we have a task and we're delegating to another host, figure out the
# variables for that host now so we don't have to rely on hostvars later
if task and task.delegate_to is not None and include_delegate_to:
all_vars['ansible_delegated_vars'], all_vars['_ansible_loop_cache'] = self._get_delegated_vars(play, task, all_vars)
# 'vars' magic var
if task or play:
# has to be a copy, otherwise recursive ref
all_vars['vars'] = all_vars.copy()
display.debug("done with get_vars()")
return all_vars
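The long chain of `combine_vars` calls in `get_vars()` can be illustrated with a minimal, self-contained sketch. The one-line merge below is an assumption standing in for `ansible.utils.vars.combine_vars`, which with the default hash behaviour lets later (higher-precedence) sources replace earlier ones:

```python
def combine_vars(a, b):
    # stand-in for ansible.utils.vars.combine_vars with the default
    # 'replace' hash behaviour: keys in b win over keys in a
    result = dict(a)
    result.update(b)
    return result

# precedence layers from lowest to highest, mirroring the order in get_vars()
layers = [
    {"x": "role_default", "y": "role_default"},  # role defaults
    {"x": "inventory"},                          # inventory/group/host vars
    {"x": "play", "z": "play"},                  # play vars
    {"x": "extra"},                              # extra vars (-e) merge last
]

all_vars = {}
for layer in layers:
    all_vars = combine_vars(all_vars, layer)

assert all_vars == {"x": "extra", "y": "role_default", "z": "play"}
```

Because each layer only overrides the keys it defines, `y` survives from role defaults while `x` is won by extra vars, which is exactly why extra vars are merged last above.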
def _get_magic_variables(self, play, host, task, include_hostvars, include_delegate_to,
_hosts=None, _hosts_all=None):
'''
Returns a dictionary of so-called "magic" variables in Ansible,
which are special variables we set internally for use.
'''
variables = {}
variables['playbook_dir'] = os.path.abspath(self._loader.get_basedir())
variables['ansible_playbook_python'] = sys.executable
if play:
# This is a list of all role names of all dependencies for all roles for this play
dependency_role_names = list(set([d._role_name for r in play.roles for d in r.get_all_dependencies()]))
# This is a list of all role names of all roles for this play
play_role_names = [r._role_name for r in play.roles]
# ansible_role_names includes all role names, dependent or directly referenced by the play
variables['ansible_role_names'] = list(set(dependency_role_names + play_role_names))
# ansible_play_role_names includes the names of all roles directly referenced by this play
# roles that are implicitly referenced via dependencies are not listed.
variables['ansible_play_role_names'] = play_role_names
# ansible_dependent_role_names includes the names of all roles that are referenced via dependencies
# dependencies that are also explicitly named as roles are included in this list
variables['ansible_dependent_role_names'] = dependency_role_names
# DEPRECATED: role_names should be deprecated in favor of ansible_role_names or ansible_play_role_names
variables['role_names'] = variables['ansible_play_role_names']
variables['ansible_play_name'] = play.get_name()
if task:
if task._role:
variables['role_name'] = task._role.get_name()
variables['role_path'] = task._role._role_path
variables['role_uuid'] = text_type(task._role._uuid)
if self._inventory is not None:
variables['groups'] = self._inventory.get_groups_dict()
if play:
templar = Templar(loader=self._loader)
if templar.is_template(play.hosts):
pattern = 'all'
else:
pattern = play.hosts or 'all'
# add the list of hosts in the play, as adjusted for limit/filters
if not _hosts_all:
_hosts_all = [h.name for h in self._inventory.get_hosts(pattern=pattern, ignore_restrictions=True)]
if not _hosts:
_hosts = [h.name for h in self._inventory.get_hosts()]
variables['ansible_play_hosts_all'] = _hosts_all[:]
variables['ansible_play_hosts'] = [x for x in variables['ansible_play_hosts_all'] if x not in play._removed_hosts]
variables['ansible_play_batch'] = [x for x in _hosts if x not in play._removed_hosts]
# DEPRECATED: play_hosts should be deprecated in favor of ansible_play_batch,
# however this would take work in the templating engine, so for now we'll add both
variables['play_hosts'] = variables['ansible_play_batch']
# the 'omit' value allows params to be left out if the variable they are based on is undefined
variables['omit'] = self._omit_token
# Set options vars
for option, option_value in iteritems(self._options_vars):
variables[option] = option_value
if self._hostvars is not None and include_hostvars:
variables['hostvars'] = self._hostvars
return variables
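The role-name bookkeeping in `_get_magic_variables` reduces to simple set arithmetic; a small sketch with hypothetical role names (the roles and dependencies below are invented for illustration):

```python
# hypothetical play: two directly referenced roles, whose dependencies
# partially overlap the direct roles
play_role_names = ["web", "db"]
dependency_role_names = list(set(
    dep for deps in (["common"], ["common", "db"]) for dep in deps
))

# ansible_role_names: every role touched by the play, direct or dependency
ansible_role_names = list(set(dependency_role_names + play_role_names))

assert sorted(ansible_role_names) == ["common", "db", "web"]
assert sorted(dependency_role_names) == ["common", "db"]
```

The `set()` union is what lets a role appear both as a direct reference and a dependency without being listed twice in `ansible_role_names`.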
def _get_delegated_vars(self, play, task, existing_variables):
if not hasattr(task, 'loop'):
# This "task" is not a Task, so we need to skip it
return {}, None
# we unfortunately need to template the delegate_to field here,
# as we're fetching vars before post_validate has been called on
# the task that has been passed in
vars_copy = existing_variables.copy()
templar = Templar(loader=self._loader, variables=vars_copy)
items = []
has_loop = True
if task.loop_with is not None:
if task.loop_with in lookup_loader:
try:
loop_terms = listify_lookup_plugin_terms(terms=task.loop, templar=templar,
loader=self._loader, fail_on_undefined=True, convert_bare=False)
items = lookup_loader.get(task.loop_with, loader=self._loader, templar=templar).run(terms=loop_terms, variables=vars_copy)
except AnsibleTemplateError:
# This task will be skipped later due to this, so we just setup
# a dummy array for the later code so it doesn't fail
items = [None]
else:
raise AnsibleError("Failed to find the lookup named '%s' in the available lookup plugins" % task.loop_with)
elif task.loop is not None:
try:
items = templar.template(task.loop)
except AnsibleTemplateError:
# This task will be skipped later due to this, so we just setup
# a dummy array for the later code so it doesn't fail
items = [None]
else:
has_loop = False
items = [None]
delegated_host_vars = dict()
item_var = getattr(task.loop_control, 'loop_var', 'item')
cache_items = False
for item in items:
# update the variables with the item value for templating, in case we need it
if item is not None:
vars_copy[item_var] = item
templar.available_variables = vars_copy
delegated_host_name = templar.template(task.delegate_to, fail_on_undefined=False)
if delegated_host_name != task.delegate_to:
cache_items = True
if delegated_host_name is None:
raise AnsibleError(message="Undefined delegate_to host for task:", obj=task._ds)
if not isinstance(delegated_host_name, string_types):
raise AnsibleError(message="the field 'delegate_to' has an invalid type (%s), and could not be"
" converted to a string type." % type(delegated_host_name),
obj=task._ds)
if delegated_host_name in delegated_host_vars:
# no need to repeat ourselves, as the delegate_to value
# does not appear to be tied to the loop item variable
continue
# a dictionary of variables to use if we have to create a new host below
# we set the default port based on the default transport here, to make sure
# we use the proper default for windows
new_port = C.DEFAULT_REMOTE_PORT
if C.DEFAULT_TRANSPORT == 'winrm':
new_port = 5986
new_delegated_host_vars = dict(
ansible_delegated_host=delegated_host_name,
ansible_host=delegated_host_name, # not redundant as other sources can change ansible_host
ansible_port=new_port,
ansible_user=C.DEFAULT_REMOTE_USER,
ansible_connection=C.DEFAULT_TRANSPORT,
)
# now try to find the delegated-to host in inventory, or failing that,
# create a new host on the fly so we can fetch variables for it
delegated_host = None
if self._inventory is not None:
delegated_host = self._inventory.get_host(delegated_host_name)
# try looking it up based on the address field, and finally
# fall back to creating a host on the fly to use for the var lookup
if delegated_host is None:
if delegated_host_name in C.LOCALHOST:
delegated_host = self._inventory.localhost
else:
for h in self._inventory.get_hosts(ignore_limits=True, ignore_restrictions=True):
# check if the address matches, or if both the delegated_to host
# and the current host are in the list of localhost aliases
if h.address == delegated_host_name:
delegated_host = h
break
else:
delegated_host = Host(name=delegated_host_name)
delegated_host.vars = combine_vars(delegated_host.vars, new_delegated_host_vars)
else:
delegated_host = Host(name=delegated_host_name)
delegated_host.vars = combine_vars(delegated_host.vars, new_delegated_host_vars)
# now we go fetch the vars for the delegated-to host and save them in our
# master dictionary of variables to be used later in the TaskExecutor/PlayContext
delegated_host_vars[delegated_host_name] = self.get_vars(
play=play,
host=delegated_host,
task=task,
include_delegate_to=False,
include_hostvars=False,
)
_ansible_loop_cache = None
if has_loop and cache_items:
# delegate_to templating produced a change, so we will cache the templated items
# in a special private hostvar
# this ensures that delegate_to+loop doesn't produce different results than TaskExecutor
# which may reprocess the loop
_ansible_loop_cache = items
return delegated_host_vars, _ansible_loop_cache
def clear_facts(self, hostname):
'''
Clears the facts for a host
'''
self._fact_cache.pop(hostname, None)
def set_host_facts(self, host, facts):
'''
Sets or updates the given facts for a host in the fact cache.
'''
if not isinstance(facts, Mapping):
raise AnsibleAssertionError("the type of 'facts' to set for host_facts should be a Mapping but is a %s" % type(facts))
try:
host_cache = self._fact_cache[host]
except KeyError:
# We get to set this as new
host_cache = facts
else:
if not isinstance(host_cache, MutableMapping):
raise TypeError('The object retrieved for {0} must be a MutableMapping but was'
' a {1}'.format(host, type(host_cache)))
# Update the existing facts
host_cache.update(facts)
# Save the facts back to the backing store
self._fact_cache[host] = host_cache
def set_nonpersistent_facts(self, host, facts):
'''
Sets or updates the given facts for a host in the fact cache.
'''
if not isinstance(facts, Mapping):
raise AnsibleAssertionError("the type of 'facts' to set for nonpersistent_facts should be a Mapping but is a %s" % type(facts))
try:
self._nonpersistent_fact_cache[host].update(facts)
except KeyError:
self._nonpersistent_fact_cache[host] = facts
def set_host_variable(self, host, varname, value):
'''
Sets a value in the vars_cache for a host.
'''
if host not in self._vars_cache:
self._vars_cache[host] = dict()
if varname in self._vars_cache[host] and isinstance(self._vars_cache[host][varname], MutableMapping) and isinstance(value, MutableMapping):
self._vars_cache[host] = combine_vars(self._vars_cache[host], {varname: value})
else:
self._vars_cache[host][varname] = value
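The merge-or-replace rule in `set_host_variable` can be exercised with a plain-dict sketch (`MutableMapping` is narrowed to `dict` here for brevity, and the cache is passed in explicitly rather than living on an object):

```python
def set_host_variable(cache, host, varname, value):
    # same rule as above: merge when both the cached value and the new
    # value are mappings, otherwise replace outright
    cache.setdefault(host, {})
    old = cache[host].get(varname)
    if isinstance(old, dict) and isinstance(value, dict):
        cache[host][varname] = {**old, **value}
    else:
        cache[host][varname] = value

cache = {}
set_host_variable(cache, "h1", "opts", {"a": 1})
set_host_variable(cache, "h1", "opts", {"b": 2})
assert cache["h1"]["opts"] == {"a": 1, "b": 2}  # mappings merge

set_host_variable(cache, "h1", "opts", "scalar")
assert cache["h1"]["opts"] == "scalar"          # non-mapping replaces
```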
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,695 |
ERROR! Unexpected Exception, this is probably a bug: string index out of range
|
##### SUMMARY
Ansible run fails and I want to retry the failed hosts with the --limit @windows.retry command and it fails with "ERROR! Unexpected Exception, this is probably a bug: string index out of range"
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
limit
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
$ ansible --version
ansible 2.9.0.dev0
config file = /Users/tanner/projects/ansible.git/playbooks.git/celadonsystems.com/ansible.cfg
configured module search path = [u'/Users/tanner/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /Users/tanner/projects/ansible.git/ansible/lib/ansible
executable location = /Users/tanner/projects/ansible.git/ansible/bin/ansible
python version = 2.7.10 (default, Feb 22 2019, 21:55:15) [GCC 4.2.1 Compatible Apple LLVM 10.0.1 (clang-1001.0.37.14)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
$ ansible-config dump --only-changed
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Management host macOS 10.14.6
Managed hosts various releases of Windows
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
Run ansible and have a failure so you get the retry file
```yaml
$ ansible-playbook -i testing ../windows.yml --limit @../windows.retry
```
##### EXPECTED RESULTS
Ansible will run for just the failed hosts in the retry file
##### ACTUAL RESULTS
ERROR! Unexpected Exception, this is probably a bug: string index out of range
<!--- Paste verbatim command output between quotes -->
```paste below
ERROR! Unexpected Exception, this is probably a bug: string index out of range
```
|
https://github.com/ansible/ansible/issues/59695
|
https://github.com/ansible/ansible/pull/59776
|
8a6c7a97ccedf99d5bf4c39e0118ef61501d15ee
|
2ebc4e1e7eda61f133d9425872e2a858bae92ec4
| 2019-07-28T16:34:25Z |
python
| 2019-07-30T17:02:17Z |
changelogs/fragments/limit_file_parsing.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,695 |
ERROR! Unexpected Exception, this is probably a bug: string index out of range
|
##### SUMMARY
Ansible run fails and I want to retry the failed hosts with the --limit @windows.retry command and it fails with "ERROR! Unexpected Exception, this is probably a bug: string index out of range"
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
limit
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
$ ansible --version
ansible 2.9.0.dev0
config file = /Users/tanner/projects/ansible.git/playbooks.git/celadonsystems.com/ansible.cfg
configured module search path = [u'/Users/tanner/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /Users/tanner/projects/ansible.git/ansible/lib/ansible
executable location = /Users/tanner/projects/ansible.git/ansible/bin/ansible
python version = 2.7.10 (default, Feb 22 2019, 21:55:15) [GCC 4.2.1 Compatible Apple LLVM 10.0.1 (clang-1001.0.37.14)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
$ ansible-config dump --only-changed
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Management host macOS 10.14.6
Managed hosts various releases of Windows
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
Run ansible and have a failure so you get the retry file
```yaml
$ ansible-playbook -i testing ../windows.yml --limit @../windows.retry
```
##### EXPECTED RESULTS
Ansible will run for just the failed hosts in the retry file
##### ACTUAL RESULTS
ERROR! Unexpected Exception, this is probably a bug: string index out of range
<!--- Paste verbatim command output between quotes -->
```paste below
ERROR! Unexpected Exception, this is probably a bug: string index out of range
```
|
https://github.com/ansible/ansible/issues/59695
|
https://github.com/ansible/ansible/pull/59776
|
8a6c7a97ccedf99d5bf4c39e0118ef61501d15ee
|
2ebc4e1e7eda61f133d9425872e2a858bae92ec4
| 2019-07-28T16:34:25Z |
python
| 2019-07-30T17:02:17Z |
lib/ansible/inventory/manager.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#############################################
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import fnmatch
import os
import sys
import re
import itertools
import traceback
from operator import attrgetter
from random import shuffle
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleOptionsError, AnsibleParserError
from ansible.inventory.data import InventoryData
from ansible.module_utils.six import string_types
from ansible.module_utils._text import to_bytes, to_text
from ansible.parsing.utils.addresses import parse_address
from ansible.plugins.loader import inventory_loader
from ansible.utils.helpers import deduplicate_list
from ansible.utils.path import unfrackpath
from ansible.utils.display import Display
display = Display()
IGNORED_ALWAYS = [br"^\.", b"^host_vars$", b"^group_vars$", b"^vars_plugins$"]
IGNORED_PATTERNS = [to_bytes(x) for x in C.INVENTORY_IGNORE_PATTERNS]
IGNORED_EXTS = [b'%s$' % to_bytes(re.escape(x)) for x in C.INVENTORY_IGNORE_EXTS]
IGNORED = re.compile(b'|'.join(IGNORED_ALWAYS + IGNORED_PATTERNS + IGNORED_EXTS))
PATTERN_WITH_SUBSCRIPT = re.compile(
r'''^
(.+) # A pattern expression ending with...
\[(?: # A [subscript] expression comprising:
(-?[0-9]+)| # A single positive or negative number
([0-9]+)([:-]) # Or an x:y or x: range.
([0-9]*)
)\]
$
''', re.X
)
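The subscript regex can be exercised directly; this sketch reuses the same pattern to show which capture groups fire for a single index versus a range:

```python
import re

# same pattern as PATTERN_WITH_SUBSCRIPT above
PATTERN_WITH_SUBSCRIPT = re.compile(
    r'''^
    (.+)                 # A pattern expression ending with...
    \[(?:                # A [subscript] expression comprising:
        (-?[0-9]+)|      # A single positive or negative number
        ([0-9]+)([:-])   # Or an x:y or x- range.
        ([0-9]*)
    )\]
    $
    ''', re.X
)

m = PATTERN_WITH_SUBSCRIPT.match("webservers[0]")
assert m.group(1) == "webservers" and m.group(2) == "0"   # single index

m = PATTERN_WITH_SUBSCRIPT.match("webservers[1:3]")
assert (m.group(3), m.group(4), m.group(5)) == ("1", ":", "3")  # range

assert PATTERN_WITH_SUBSCRIPT.match("webservers") is None  # no subscript
```

Group 2 being non-None distinguishes the single-index case from the range case, which is how a caller can tell `web[0]` apart from `web[1:3]`.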
def order_patterns(patterns):
''' takes a list of patterns and reorders them by modifier to apply them consistently '''
# FIXME: this goes away if we apply patterns incrementally or by groups
pattern_regular = []
pattern_intersection = []
pattern_exclude = []
for p in patterns:
    if not p:
        # skip empty patterns, e.g. blank lines read from a --limit @file
        continue
    if p[0] == "!":
        pattern_exclude.append(p)
    elif p[0] == "&":
        pattern_intersection.append(p)
    else:
        pattern_regular.append(p)
# if no regular pattern was given, hence only exclude and/or intersection
# make that magically work
if pattern_regular == []:
pattern_regular = ['all']
# when applying the host selectors, run those without the "&" or "!"
# first, then the &s, then the !s.
return pattern_regular + pattern_intersection + pattern_exclude
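A self-contained sketch of the reordering, using `startswith()` so an empty pattern can never raise the "string index out of range" IndexError described in the issue later in this file:

```python
def order_patterns(patterns):
    # regular patterns first, then '&' intersections, then '!' exclusions;
    # startswith() is safe on empty strings, unlike p[0] indexing
    regular, intersection, exclude = [], [], []
    for p in patterns:
        if p.startswith("!"):
            exclude.append(p)
        elif p.startswith("&"):
            intersection.append(p)
        elif p:
            regular.append(p)
    # with only modifiers given, 'all' supplies the base set to filter
    return (regular or ["all"]) + intersection + exclude

assert order_patterns(["!db", "web", "&staging"]) == ["web", "&staging", "!db"]
assert order_patterns(["!db", ""]) == ["all", "!db"]  # empty patterns ignored
```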
def split_host_pattern(pattern):
"""
Takes a string containing host patterns separated by commas (or a list
thereof) and returns a list of single patterns (which may not contain
commas). Whitespace is ignored.
Also accepts ':' as a separator for backwards compatibility, but it is
not recommended due to the conflict with IPv6 addresses and host ranges.
Example: 'a,b[1], c[2:3] , d' -> ['a', 'b[1]', 'c[2:3]', 'd']
"""
if isinstance(pattern, list):
return list(itertools.chain(*map(split_host_pattern, pattern)))
elif not isinstance(pattern, string_types):
pattern = to_text(pattern, errors='surrogate_or_strict')
# If it's got commas in it, we'll treat it as a straightforward
# comma-separated list of patterns.
if u',' in pattern:
patterns = pattern.split(u',')
# If it doesn't, it could still be a single pattern. This accounts for
# non-separator uses of colons: IPv6 addresses and [x:y] host ranges.
else:
try:
(base, port) = parse_address(pattern, allow_ranges=True)
patterns = [pattern]
except Exception:
# The only other case we accept is a ':'-separated list of patterns.
# This mishandles IPv6 addresses, and is retained only for backwards
# compatibility.
patterns = re.findall(
to_text(r'''(?: # We want to match something comprising:
[^\s:\[\]] # (anything other than whitespace or ':[]'
| # ...or...
\[[^\]]*\] # a single complete bracketed expression)
)+ # occurring once or more
'''), pattern, re.X
)
# drop empty patterns (e.g. blank lines from a --limit @file) so that
# later code never indexes into an empty string
return [p.strip() for p in patterns if p.strip()]
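A simplified sketch of the splitting behaviour (the `parse_address` probe for single patterns is omitted here, so this is an approximation, not the full function):

```python
import itertools
import re

def split_host_pattern(pattern):
    # simplified sketch: comma-separated lists, with a ':'-separated
    # fallback that keeps [x:y] subscripts intact
    if isinstance(pattern, list):
        return list(itertools.chain(*map(split_host_pattern, pattern)))
    if "," in pattern:
        patterns = pattern.split(",")
    else:
        # anything other than whitespace/':[]', or a complete [...] block
        patterns = re.findall(r'(?:[^\s:\[\]]|\[[^\]]*\])+', pattern)
    # dropping empty entries keeps later p[0] indexing safe
    return [p.strip() for p in patterns if p.strip()]

assert split_host_pattern("a,b[1], c[2:3] , d") == ["a", "b[1]", "c[2:3]", "d"]
assert split_host_pattern("a:b[1:2]") == ["a", "b[1:2]"]  # subscript survives
assert split_host_pattern("a,,b") == ["a", "b"]           # no empty patterns
```

The bracketed alternative in the fallback regex is what stops the legacy `:` separator from cutting an `[x:y]` range in half.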
class InventoryManager(object):
''' Creates and manages inventory '''
def __init__(self, loader, sources=None):
# base objects
self._loader = loader
self._inventory = InventoryData()
# a list of host(names) to contain current inquiries to
self._restriction = None
self._subset = None
# caches
self._hosts_patterns_cache = {} # resolved full patterns
self._pattern_cache = {} # resolved individual patterns
# the inventory dirs, files, script paths or lists of hosts
if sources is None:
self._sources = []
elif isinstance(sources, string_types):
self._sources = [sources]
else:
self._sources = sources
# get to work!
self.parse_sources(cache=True)
@property
def localhost(self):
return self._inventory.localhost
@property
def groups(self):
return self._inventory.groups
@property
def hosts(self):
return self._inventory.hosts
def add_host(self, host, group=None, port=None):
return self._inventory.add_host(host, group, port)
def add_group(self, group):
return self._inventory.add_group(group)
def get_groups_dict(self):
return self._inventory.get_groups_dict()
def reconcile_inventory(self):
self.clear_caches()
return self._inventory.reconcile_inventory()
def get_host(self, hostname):
return self._inventory.get_host(hostname)
def _fetch_inventory_plugins(self):
''' sets up loaded inventory plugins for usage '''
display.vvvv('setting up inventory plugins')
plugins = []
for name in C.INVENTORY_ENABLED:
plugin = inventory_loader.get(name)
if plugin:
plugins.append(plugin)
else:
display.warning('Failed to load inventory plugin, skipping %s' % name)
if not plugins:
raise AnsibleError("No inventory plugins available to generate inventory, make sure you have at least one whitelisted.")
return plugins
def parse_sources(self, cache=False):
''' iterate over inventory sources and parse each one to populate it'''
parsed = False
# allow for multiple inventory parsing
for source in self._sources:
if source:
if ',' not in source:
source = unfrackpath(source, follow=False)
parse = self.parse_source(source, cache=cache)
if parse and not parsed:
parsed = True
if parsed:
# do post processing
self._inventory.reconcile_inventory()
else:
if C.INVENTORY_UNPARSED_IS_FAILED:
raise AnsibleError("No inventory was parsed, please check your configuration and options.")
else:
display.warning("No inventory was parsed, only implicit localhost is available")
def parse_source(self, source, cache=False):
''' Generate or update inventory for the source provided '''
parsed = False
display.debug(u'Examining possible inventory source: %s' % source)
# use binary for path functions
b_source = to_bytes(source)
# process directories as a collection of inventories
if os.path.isdir(b_source):
display.debug(u'Searching for inventory files in directory: %s' % source)
for i in sorted(os.listdir(b_source)):
display.debug(u'Considering %s' % i)
# Skip hidden files and stuff we explicitly ignore
if IGNORED.search(i):
continue
# recursively deal with directory entries
fullpath = to_text(os.path.join(b_source, i), errors='surrogate_or_strict')
parsed_this_one = self.parse_source(fullpath, cache=cache)
display.debug(u'parsed %s as %s' % (fullpath, parsed_this_one))
if not parsed:
parsed = parsed_this_one
else:
# left with strings or files, let plugins figure it out
# set so new hosts can use for inventory_file/dir vars
self._inventory.current_source = source
# try source with each plugin
failures = []
for plugin in self._fetch_inventory_plugins():
plugin_name = to_text(getattr(plugin, '_load_name', getattr(plugin, '_original_path', '')))
display.debug(u'Attempting to use plugin %s (%s)' % (plugin_name, plugin._original_path))
# initialize and figure out if plugin wants to attempt parsing this file
try:
plugin_wants = bool(plugin.verify_file(source))
except Exception:
plugin_wants = False
if plugin_wants:
try:
# FIXME in case plugin fails 1/2 way we have partial inventory
plugin.parse(self._inventory, self._loader, source, cache=cache)
try:
plugin.update_cache_if_changed()
except AttributeError:
# some plugins might not implement caching
pass
parsed = True
display.vvv('Parsed %s inventory source with %s plugin' % (source, plugin_name))
break
except AnsibleParserError as e:
display.debug('%s was not parsable by %s' % (source, plugin_name))
tb = ''.join(traceback.format_tb(sys.exc_info()[2]))
failures.append({'src': source, 'plugin': plugin_name, 'exc': e, 'tb': tb})
except Exception as e:
display.debug('%s failed while attempting to parse %s' % (plugin_name, source))
tb = ''.join(traceback.format_tb(sys.exc_info()[2]))
failures.append({'src': source, 'plugin': plugin_name, 'exc': AnsibleError(e), 'tb': tb})
else:
display.vvv("%s declined parsing %s as it did not pass its verify_file() method" % (plugin_name, source))
else:
if not parsed and failures:
# only if no plugin processed files should we show errors.
for fail in failures:
display.warning(u'\n* Failed to parse %s with %s plugin: %s' % (to_text(fail['src']), fail['plugin'], to_text(fail['exc'])))
if 'tb' in fail:
display.vvv(to_text(fail['tb']))
if C.INVENTORY_ANY_UNPARSED_IS_FAILED:
raise AnsibleError(u'Completely failed to parse inventory source %s' % (source))
if not parsed:
if source != '/etc/ansible/hosts' or os.path.exists(source):
# only warn if NOT using the default and if using it, only if the file is present
display.warning("Unable to parse %s as an inventory source" % source)
# clear up, jic
self._inventory.current_source = None
return parsed
def clear_caches(self):
''' clear all caches '''
self._hosts_patterns_cache = {}
self._pattern_cache = {}
# FIXME: flush inventory cache
def refresh_inventory(self):
''' recalculate inventory '''
self.clear_caches()
self._inventory = InventoryData()
self.parse_sources(cache=False)
def _match_list(self, items, pattern_str):
# compile patterns
try:
if not pattern_str[0] == '~':
pattern = re.compile(fnmatch.translate(pattern_str))
else:
pattern = re.compile(pattern_str[1:])
except Exception:
raise AnsibleError('Invalid host list pattern: %s' % pattern_str)
# apply patterns
results = []
for item in items:
if pattern.match(item):
results.append(item)
return results
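
The glob-versus-regex dispatch above can be sketched in isolation (a minimal standalone re-creation, not the method itself): a leading `~` marks the pattern as a raw regular expression, anything else is translated from shell-glob syntax first.

```python
import fnmatch
import re

def match_list(items, pattern_str):
    # A '~' prefix marks the pattern as a raw regular expression;
    # anything else is treated as a shell glob and translated first.
    if pattern_str.startswith('~'):
        pattern = re.compile(pattern_str[1:])
    else:
        pattern = re.compile(fnmatch.translate(pattern_str))
    return [item for item in items if pattern.match(item)]

hosts = ['web1', 'web2', 'db1']
print(match_list(hosts, 'web*'))    # ['web1', 'web2']
print(match_list(hosts, '~db\\d'))  # ['db1']
```

Note that `re.match` only anchors at the start; `fnmatch.translate` supplies the trailing anchor itself, so globs must match the whole name while a raw regex only needs to match a prefix.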
def get_hosts(self, pattern="all", ignore_limits=False, ignore_restrictions=False, order=None):
"""
Takes a pattern or list of patterns and returns a list of matching
inventory host names, taking into account any active restrictions
or applied subsets
"""
hosts = []
# Check if pattern already computed
if isinstance(pattern, list):
pattern_list = pattern[:]
else:
pattern_list = [pattern]
if pattern_list:
if not ignore_limits and self._subset:
pattern_list.extend(self._subset)
if not ignore_restrictions and self._restriction:
pattern_list.extend(self._restriction)
# This is only used as a hash key in the self._hosts_patterns_cache dict
# a tuple is faster than stringifying
pattern_hash = tuple(pattern_list)
if pattern_hash not in self._hosts_patterns_cache:
patterns = split_host_pattern(pattern)
hosts[:] = self._evaluate_patterns(patterns)
# mainly useful for hostvars[host] access
if not ignore_limits and self._subset:
# exclude hosts not in a subset, if defined
subset_uuids = set(s._uuid for s in self._evaluate_patterns(self._subset))
hosts[:] = [h for h in hosts if h._uuid in subset_uuids]
if not ignore_restrictions and self._restriction:
# exclude hosts mentioned in any restriction (ex: failed hosts)
hosts[:] = [h for h in hosts if h.name in self._restriction]
self._hosts_patterns_cache[pattern_hash] = deduplicate_list(hosts)
# sort hosts list if needed (should only happen when called from strategy)
if order in ['sorted', 'reverse_sorted']:
hosts[:] = sorted(self._hosts_patterns_cache[pattern_hash][:], key=attrgetter('name'), reverse=(order == 'reverse_sorted'))
elif order == 'reverse_inventory':
hosts[:] = self._hosts_patterns_cache[pattern_hash][::-1]
else:
hosts[:] = self._hosts_patterns_cache[pattern_hash][:]
if order == 'shuffle':
shuffle(hosts)
elif order not in [None, 'inventory']:
raise AnsibleOptionsError("Invalid 'order' specified for inventory hosts: %s" % order)
return hosts
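
The cache lookup above relies on freezing the mutable pattern list into a tuple; a quick sketch of why:

```python
# Lists are unhashable and cannot serve as dict keys; tuples can, and
# building one is cheaper than stringifying the whole list.
cache = {}
pattern_list = ['webservers', '&staging', '!web2']
pattern_hash = tuple(pattern_list)
if pattern_hash not in cache:
    cache[pattern_hash] = ['web1']  # stand-in for the evaluated host list
print(cache[pattern_hash])  # ['web1']
```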
def _evaluate_patterns(self, patterns):
"""
Takes a list of patterns and returns a list of matching host names,
taking into account any negative and intersection patterns.
"""
patterns = order_patterns(patterns)
hosts = []
for p in patterns:
# avoid resolving a pattern that is a plain host
if p in self._inventory.hosts:
hosts.append(self._inventory.get_host(p))
else:
that = self._match_one_pattern(p)
if p[0] == "!":
that = set(that)
hosts = [h for h in hosts if h not in that]
elif p[0] == "&":
that = set(that)
hosts = [h for h in hosts if h in that]
else:
existing_hosts = set(y.name for y in hosts)
hosts.extend([h for h in that if h.name not in existing_hosts])
return hosts
def _match_one_pattern(self, pattern):
"""
Takes a single pattern and returns a list of matching host names.
Ignores intersection (&) and exclusion (!) specifiers.
The pattern may be:
1. A regex starting with ~, e.g. '~[abc]*'
2. A shell glob pattern with ?/*/[chars]/[!chars], e.g. 'foo*'
3. An ordinary word that matches itself only, e.g. 'foo'
The pattern is matched using the following rules:
1. If it's 'all', it matches all hosts in all groups.
2. Otherwise, for each known group name:
(a) if it matches the group name, the results include all hosts
in the group or any of its children.
(b) otherwise, if it matches any hosts in the group, the results
include the matching hosts.
This means that 'foo*' may match one or more groups (thus including all
hosts therein) but also hosts in other groups.
The built-in groups 'all' and 'ungrouped' are special. No pattern can
match these group names (though 'all' behaves as though it matches, as
described above). The word 'ungrouped' can match a host of that name,
and patterns like 'ungr*' and 'al*' can match either hosts or groups
other than all and ungrouped.
If the pattern matches one or more group names according to these rules,
it may have an optional range suffix to select a subset of the results.
This is allowed only if the pattern is not a regex, i.e. '~foo[1]' does
not work (the [1] is interpreted as part of the regex), but 'foo*[1]'
would work if 'foo*' matched the name of one or more groups.
Duplicate matches are always eliminated from the results.
"""
if pattern[0] in ("&", "!"):
pattern = pattern[1:]
if pattern not in self._pattern_cache:
(expr, slice) = self._split_subscript(pattern)
hosts = self._enumerate_matches(expr)
try:
hosts = self._apply_subscript(hosts, slice)
except IndexError:
raise AnsibleError("No hosts matched the subscripted pattern '%s'" % pattern)
self._pattern_cache[pattern] = hosts
return self._pattern_cache[pattern]
def _split_subscript(self, pattern):
"""
Takes a pattern, checks if it has a subscript, and returns the pattern
without the subscript and a (start,end) tuple representing the given
subscript (or None if there is no subscript).
Validates that the subscript is in the right syntax, but doesn't make
sure the actual indices make sense in context.
"""
# Do not parse regexes for enumeration info
if pattern[0] == '~':
return (pattern, None)
# We want a pattern followed by an integer or range subscript.
# (We can't be more restrictive about the expression because the
# fnmatch semantics permit [\[:\]] to occur.)
subscript = None
m = PATTERN_WITH_SUBSCRIPT.match(pattern)
if m:
(pattern, idx, start, sep, end) = m.groups()
if idx:
subscript = (int(idx), None)
else:
if not end:
end = -1
subscript = (int(start), int(end))
if sep == '-':
display.warning("Use [x:y] inclusive subscripts instead of [x-y] which has been removed")
return (pattern, subscript)
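
`PATTERN_WITH_SUBSCRIPT` is a module-level regex that is not shown in this excerpt; the following is a simplified re-creation of its assumed shape, enough to exercise the branching above (the deprecation warning for the `[x-y]` form is omitted):

```python
import re

# Simplified stand-in for PATTERN_WITH_SUBSCRIPT: a pattern followed by
# [n], [x:y], [x:] or the deprecated [x-y].
SUBSCRIPT = re.compile(r'^(.+)\[(?:(-?\d+)|(-?\d+)([:-])(-?\d+)?)\]$')

def split_subscript(pattern):
    # Regexes are never parsed for subscripts.
    if pattern.startswith('~'):
        return (pattern, None)
    m = SUBSCRIPT.match(pattern)
    if not m:
        return (pattern, None)
    pattern, idx, start, sep, end = m.groups()
    if idx:
        return (pattern, (int(idx), None))
    # A missing end ("[2:]") means "through the last element".
    return (pattern, (int(start), int(end) if end else -1))

print(split_subscript('webservers[0]'))    # ('webservers', (0, None))
print(split_subscript('webservers[2:5]'))  # ('webservers', (2, 5))
print(split_subscript('webservers[2:]'))   # ('webservers', (2, -1))
```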
def _apply_subscript(self, hosts, subscript):
"""
Takes a list of hosts and a (start,end) tuple and returns the subset of
hosts based on the subscript (which may be None to return all hosts).
"""
if not hosts or not subscript:
return hosts
(start, end) = subscript
if end:
if end == -1:
end = len(hosts) - 1
return hosts[start:end + 1]
else:
return [hosts[start]]
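
The inclusive-slice semantics are easy to mis-read, so here is a standalone sketch mirroring the method body:

```python
def apply_subscript(hosts, subscript):
    # Subscripts are inclusive: (1, 3) keeps hosts[1] through hosts[3].
    # An end of -1 means "through the last host"; a None end selects
    # exactly one host.
    if not hosts or not subscript:
        return hosts
    (start, end) = subscript
    if end:
        if end == -1:
            end = len(hosts) - 1
        return hosts[start:end + 1]
    return [hosts[start]]

hosts = ['h0', 'h1', 'h2', 'h3', 'h4']
print(apply_subscript(hosts, (1, 3)))    # ['h1', 'h2', 'h3']
print(apply_subscript(hosts, (2, -1)))   # ['h2', 'h3', 'h4']
print(apply_subscript(hosts, (0, None))) # ['h0']
```

Note the falsy `if end:` check means an explicit end of 0 is treated like a single-index subscript, a quirk inherited from the original.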
def _enumerate_matches(self, pattern):
"""
Returns a list of host names matching the given pattern according to the
rules explained above in _match_one_pattern.
"""
results = []
# check if pattern matches group
matching_groups = self._match_list(self._inventory.groups, pattern)
if matching_groups:
for groupname in matching_groups:
results.extend(self._inventory.groups[groupname].get_hosts())
# check hosts if no groups matched or it is a regex/glob pattern
if not matching_groups or pattern[0] == '~' or any(special in pattern for special in ('.', '?', '*', '[')):
# pattern might match host
matching_hosts = self._match_list(self._inventory.hosts, pattern)
if matching_hosts:
for hostname in matching_hosts:
results.append(self._inventory.hosts[hostname])
if not results and pattern in C.LOCALHOST:
# get_host autocreates implicit when needed
implicit = self._inventory.get_host(pattern)
if implicit:
results.append(implicit)
# Display warning if specified host pattern did not match any groups or hosts
if not results and not matching_groups and pattern != 'all':
msg = "Could not match supplied host pattern, ignoring: %s" % pattern
display.debug(msg)
if C.HOST_PATTERN_MISMATCH == 'warning':
display.warning(msg)
elif C.HOST_PATTERN_MISMATCH == 'error':
raise AnsibleError(msg)
# no need to write 'ignore' state
return results
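
The groups-first, hosts-second precedence can be sketched with plain dicts (an illustrative stand-in using hypothetical `groups`/`hosts` structures, not the real inventory objects):

```python
import fnmatch

# Hypothetical stand-ins for the inventory's group and host tables.
groups = {'webservers': ['web1', 'web2'], 'dbservers': ['db1']}
hosts = ['web1', 'web2', 'db1']

def enumerate_matches(pattern):
    results = []
    # Group names are tried first ...
    matching_groups = fnmatch.filter(groups, pattern)
    for g in matching_groups:
        results.extend(groups[g])
    # ... and host names are only consulted for wildcard patterns or
    # when no group matched.
    if not matching_groups or any(c in pattern for c in '.?*['):
        results.extend(h for h in fnmatch.filter(hosts, pattern)
                       if h not in results)
    return results

print(enumerate_matches('webservers'))  # ['web1', 'web2']
print(enumerate_matches('web*'))        # ['web1', 'web2']
print(enumerate_matches('db1'))         # ['db1']
```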
def list_hosts(self, pattern="all"):
""" return a list of hostnames for a pattern """
# FIXME: cache?
result = [h for h in self.get_hosts(pattern)]
# allow implicit localhost if pattern matches and no other results
if len(result) == 0 and pattern in C.LOCALHOST:
result = [pattern]
return result
def list_groups(self):
# FIXME: cache?
return sorted(self._inventory.groups.keys(), key=lambda x: x)
def restrict_to_hosts(self, restriction):
"""
Restrict list operations to the hosts given in restriction. This is used
to batch serial operations in main playbook code, don't use this for other
reasons.
"""
if restriction is None:
return
elif not isinstance(restriction, list):
restriction = [restriction]
self._restriction = set(to_text(h.name) for h in restriction)
def subset(self, subset_pattern):
"""
Limits inventory results to a subset of inventory that matches a given
pattern, such as to select a given geographic or numeric slice amongst
a previous 'hosts' selection that only selects roles, or vice versa.
Corresponds to --limit parameter to ansible-playbook
"""
if subset_pattern is None:
self._subset = None
else:
subset_patterns = split_host_pattern(subset_pattern)
results = []
# allow Unix style @filename data
for x in subset_patterns:
if x[0] == "@":
# skip blank lines: a @file typically ends with a newline, and an
# empty pattern would later crash on pattern[0] indexing
with open(x[1:]) as fd:
results.extend([to_text(l.strip()) for l in fd.read().split("\n") if l.strip()])
else:
results.append(to_text(x))
self._subset = results
def remove_restriction(self):
""" Do not restrict list operations """
self._restriction = None
def clear_pattern_cache(self):
self._pattern_cache = {}
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,695 |
ERROR! Unexpected Exception, this is probably a bug: string index out of range
|
##### SUMMARY
An Ansible run fails; when I retry the failed hosts with the --limit @windows.retry option, it fails with "ERROR! Unexpected Exception, this is probably a bug: string index out of range"
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
limit
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
$ ansible --version
ansible 2.9.0.dev0
config file = /Users/tanner/projects/ansible.git/playbooks.git/celadonsystems.com/ansible.cfg
configured module search path = [u'/Users/tanner/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /Users/tanner/projects/ansible.git/ansible/lib/ansible
executable location = /Users/tanner/projects/ansible.git/ansible/bin/ansible
python version = 2.7.10 (default, Feb 22 2019, 21:55:15) [GCC 4.2.1 Compatible Apple LLVM 10.0.1 (clang-1001.0.37.14)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
$ ansible-config dump --only-changed
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Management host macOS 10.14.6
Managed hosts various releases of Windows
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
Run ansible and have a failure so you get the retry file
```yaml
$ ansible-playbook -i testing ../windows.yml --limit @../windows.retry
```
##### EXPECTED RESULTS
Ansible will run for just the failed hosts in the retry file
##### ACTUAL RESULTS
ERROR! Unexpected Exception, this is probably a bug: string index out of range
<!--- Paste verbatim command output between quotes -->
```paste below
ERROR! Unexpected Exception, this is probably a bug: string index out of range
```
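
A hypothetical reconstruction of the failure (an assumption based on the traceback message, not verified against the actual code path): retry files end with a newline, so a naive split produces a trailing empty pattern, and indexing its first character raises IndexError.

```python
# Hypothetical reconstruction: splitting a retry file on "\n" keeps the
# trailing empty string, which later breaks pattern[0] indexing.
def patterns_from_retry_file(text):
    return [line.strip() for line in text.split("\n")]

patterns = patterns_from_retry_file("host1\nhost2\n")
print(patterns)  # ['host1', 'host2', ''] -- the trailing '' is the culprit

def classify(pattern):
    # Stand-in for the exclusion/intersection checks done on patterns.
    return 'exclude' if pattern[0] == '!' else 'include'

try:
    [classify(p) for p in patterns]
except IndexError as e:
    print(e)  # string index out of range
```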
|
https://github.com/ansible/ansible/issues/59695
|
https://github.com/ansible/ansible/pull/59776
|
8a6c7a97ccedf99d5bf4c39e0118ef61501d15ee
|
2ebc4e1e7eda61f133d9425872e2a858bae92ec4
| 2019-07-28T16:34:25Z |
python
| 2019-07-30T17:02:17Z |
test/integration/targets/inventory/runme.sh
|
#!/usr/bin/env bash
set -x
# https://github.com/ansible/ansible/issues/52152
# Ensure that non-matching limit causes failure with rc 1
ansible-playbook -i ../../inventory --limit foo playbook.yml
if [ "$?" != "1" ]; then
echo "Non-matching limit should cause failure"
exit 1
fi
ansible-playbook -i ../../inventory "$@" strategy.yml
ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS=always ansible-playbook -i ../../inventory "$@" strategy.yml
ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS=never ansible-playbook -i ../../inventory "$@" strategy.yml
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,504 |
win_template and win_copy with content should support encoding
|
##### SUMMARY
Mentioned modules should allow to select encoding for a file (defaulting to utf8 or utf16)
I have a requirement to generate simple files with ASCII encoding. I can't do that with either win_copy or win_template.
win_copy currently creates the file as UTF-16.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
win_copy
win_template
##### ADDITIONAL INFORMATION
Tested on WinSrv 2016, PS-5.1
```yaml
- name: Bootstraping
hosts: all
tasks:
- name: Deploy file
win_copy:
content: 'text'
dest: 'C:\some\path.txt'
```
|
https://github.com/ansible/ansible/issues/59504
|
https://github.com/ansible/ansible/pull/59701
|
4c1f52c6c0003cfeb271342a579a1f6887c8eb9c
|
652bfc7e19f3bab086c1d0389c8d87933f261d54
| 2019-07-24T10:00:21Z |
python
| 2019-07-30T22:05:24Z |
lib/ansible/modules/files/template.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# This is a virtual module that is entirely implemented as an action plugin and runs on the controller
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['stableinterface'],
'supported_by': 'core'}
DOCUMENTATION = r'''
---
module: template
version_added: historical
short_description: Template a file out to a remote server
description:
- Templates are processed by the L(Jinja2 templating language,http://jinja.pocoo.org/docs/).
- Documentation on the template formatting can be found in the
L(Template Designer Documentation,http://jinja.pocoo.org/docs/templates/).
- Additional variables listed below can be used in templates.
- C(ansible_managed) (configurable via the C(defaults) section of C(ansible.cfg)) contains a string which can be used to
describe the template name, host, modification time of the template file and the owner uid.
- C(template_host) contains the node name of the template's machine.
- C(template_uid) is the numeric user id of the owner.
- C(template_path) is the path of the template.
- C(template_fullpath) is the absolute path of the template.
- C(template_destpath) is the path of the template on the remote system (added in 2.8).
- C(template_run_date) is the date that the template was rendered.
options:
src:
description:
- Path of a Jinja2 formatted template on the Ansible controller.
- This can be a relative or an absolute path.
type: path
required: yes
dest:
description:
- Location to render the template to on the remote machine.
type: path
required: yes
backup:
description:
- Determine whether a backup should be created.
- When set to C(yes), create a backup file including the timestamp information
so you can get the original file back if you somehow clobbered it incorrectly.
type: bool
default: no
newline_sequence:
description:
- Specify the newline sequence to use for templating files.
type: str
choices: [ '\n', '\r', '\r\n' ]
default: '\n'
version_added: '2.4'
block_start_string:
description:
- The string marking the beginning of a block.
type: str
default: '{%'
version_added: '2.4'
block_end_string:
description:
- The string marking the end of a block.
type: str
default: '%}'
version_added: '2.4'
variable_start_string:
description:
- The string marking the beginning of a print statement.
type: str
default: '{{'
version_added: '2.4'
variable_end_string:
description:
- The string marking the end of a print statement.
type: str
default: '}}'
version_added: '2.4'
trim_blocks:
description:
- Determine when newlines should be removed from blocks.
- When set to C(yes) the first newline after a block is removed (block, not variable tag!).
type: bool
default: yes
version_added: '2.4'
lstrip_blocks:
description:
- Determine when leading spaces and tabs should be stripped.
- When set to C(yes) leading spaces and tabs are stripped from the start of a line to a block.
- This functionality requires Jinja 2.7 or newer.
type: bool
default: no
version_added: '2.6'
force:
description:
- Determine when the file is being transferred if the destination already exists.
- When set to C(yes), replace the remote file when contents are different than the source.
- When set to C(no), the file will only be transferred if the destination does not exist.
type: bool
default: yes
follow:
description:
- Determine whether symbolic links should be followed.
- When set to C(yes) symbolic links will be followed, if they exist.
- When set to C(no) symbolic links will not be followed.
- Previous to Ansible 2.4, this was hardcoded as C(yes).
type: bool
default: no
version_added: '2.4'
output_encoding:
description:
- Overrides the encoding used to write the template file defined by C(dest).
- It defaults to C(utf-8), but any encoding supported by python can be used.
- The source template file must always be encoded using C(utf-8), for homogeneity.
type: str
default: utf-8
version_added: '2.7'
notes:
- Including a string that uses a date in the template will result in the template being marked 'changed' each time.
- Since Ansible 0.9, templates are loaded with C(trim_blocks=True).
- >
Also, you can override jinja2 settings by adding a special header to template file.
i.e. C(#jinja2:variable_start_string:'[%', variable_end_string:'%]', trim_blocks: False)
which changes the variable interpolation markers to C([% var %]) instead of C({{ var }}).
This is the best way to prevent evaluation of things that look like, but should not be Jinja2.
- Using raw/endraw in Jinja2 will not work as you expect because templates in Ansible are recursively
evaluated.
- You can use the M(copy) module with the C(content:) option if you prefer the template inline,
as part of the playbook.
- For Windows you can use M(win_template) which uses '\\r\\n' as C(newline_sequence) by default.
seealso:
- module: copy
- module: win_copy
- module: win_template
author:
- Ansible Core Team
- Michael DeHaan
extends_documentation_fragment:
- files
- validate
'''
EXAMPLES = r'''
- name: Template a file to /etc/files.conf
template:
src: /mytemplates/foo.j2
dest: /etc/file.conf
owner: bin
group: wheel
mode: '0644'
- name: Template a file, using symbolic modes (equivalent to 0644)
template:
src: /mytemplates/foo.j2
dest: /etc/file.conf
owner: bin
group: wheel
mode: u=rw,g=r,o=r
- name: Copy a version of named.conf that is dependent on the OS. setype obtained by doing ls -Z /etc/named.conf on original file
template:
src: named.conf_{{ ansible_os_family}}.j2
dest: /etc/named.conf
group: named
setype: named_conf_t
mode: 0640
- name: Create a DOS-style text file from a template
template:
src: config.ini.j2
dest: /share/windows/config.ini
newline_sequence: '\r\n'
- name: Copy a new sudoers file into place, after passing validation with visudo
template:
src: /mine/sudoers
dest: /etc/sudoers
validate: /usr/sbin/visudo -cf %s
- name: Update sshd configuration safely, avoid locking yourself out
template:
src: etc/ssh/sshd_config.j2
dest: /etc/ssh/sshd_config
owner: root
group: root
mode: '0600'
validate: /usr/sbin/sshd -t -f %s
backup: yes
'''
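
A minimal sketch of what `output_encoding` implies (an assumption about the behaviour, not the actual action-plugin implementation): rendering happens entirely in text space, and only the final write to `dest` re-encodes the result.

```python
import os
import tempfile

def write_rendered(rendered_text, dest, output_encoding='utf-8'):
    # Encode the already-rendered unicode text with the requested codec
    # and write the raw bytes; an encoding that cannot represent the
    # text raises UnicodeEncodeError rather than writing mojibake.
    with open(dest, 'wb') as f:
        f.write(rendered_text.encode(output_encoding))

path = os.path.join(tempfile.mkdtemp(), 'file.conf')
write_rendered(u'name=Ansible\n', path, output_encoding='ascii')
with open(path, 'rb') as f:
    print(f.read())  # b'name=Ansible\n'
```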
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,504 |
win_template and win_copy with content should support encoding
|
##### SUMMARY
Mentioned modules should allow to select encoding for a file (defaulting to utf8 or utf16)
I have a requirement to generate simple files with ASCII encoding. I can't do that with either win_copy or win_template.
win_copy currently creates the file as UTF-16.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
win_copy
win_template
##### ADDITIONAL INFORMATION
Tested on WinSrv 2016, PS-5.1
```yaml
- name: Bootstraping
hosts: all
tasks:
- name: Deploy file
win_copy:
content: 'text'
dest: 'C:\some\path.txt'
```
|
https://github.com/ansible/ansible/issues/59504
|
https://github.com/ansible/ansible/pull/59701
|
4c1f52c6c0003cfeb271342a579a1f6887c8eb9c
|
652bfc7e19f3bab086c1d0389c8d87933f261d54
| 2019-07-24T10:00:21Z |
python
| 2019-07-30T22:05:24Z |
lib/ansible/modules/windows/win_template.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# this is a virtual module that is entirely implemented server side
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['stableinterface'],
'supported_by': 'core'}
DOCUMENTATION = r'''
---
module: win_template
version_added: "1.9.2"
short_description: Templates a file out to a remote server
description:
- Templates are processed by the Jinja2 templating language
(U(http://jinja.pocoo.org/docs/)) - documentation on the template
formatting can be found in the Template Designer Documentation
(U(http://jinja.pocoo.org/docs/templates/)).
- "Additional variables can be used in templates: C(ansible_managed)
(configurable via the C(defaults) section of C(ansible.cfg)) contains a string
which can be used to describe the template name, host, modification time of the
template file and the owner uid."
- "C(template_host) contains the node name of the template's machine."
- "C(template_uid) is the numeric user id of the owner."
- "C(template_path) is the path of the template."
- "C(template_fullpath) is the absolute path of the template."
- "C(template_destpath) is the path of the template on the remote system (added in 2.8)."
- "C(template_run_date) is the date that the template was rendered."
- "Note that including a string that uses a date in the template will result in the template being marked 'changed' each time."
- For other platforms you can use M(template) which uses '\n' as C(newline_sequence).
options:
src:
description:
- Path of a Jinja2 formatted template on the local server. This can be a relative or absolute path.
type: path
required: yes
dest:
description:
- Location to render the template to on the remote machine.
type: path
required: yes
backup:
description:
- Determine whether a backup should be created.
- When set to C(yes), create a backup file including the timestamp information
so you can get the original file back if you somehow clobbered it incorrectly.
type: bool
default: no
version_added: '2.8'
newline_sequence:
description:
- Specify the newline sequence to use for templating files.
type: str
choices: [ '\n', '\r', '\r\n' ]
default: '\r\n'
version_added: '2.4'
block_start_string:
description:
- The string marking the beginning of a block.
type: str
default: '{%'
version_added: '2.4'
block_end_string:
description:
- The string marking the end of a block.
type: str
default: '%}'
version_added: '2.4'
variable_start_string:
description:
- The string marking the beginning of a print statement.
type: str
default: '{{'
version_added: '2.4'
variable_end_string:
description:
- The string marking the end of a print statement.
type: str
default: '}}'
version_added: '2.4'
trim_blocks:
description:
- If this is set to C(yes) the first newline after a block is removed (block, not variable tag!).
type: bool
default: no
version_added: '2.4'
force:
description:
- If C(yes), will replace the remote file when contents are different
from the source.
- If C(no), the file will only be transferred if the destination does
not exist.
type: bool
default: yes
version_added: '2.4'
notes:
- Templates are loaded with C(trim_blocks=yes).
- Beware fetching files from windows machines when creating templates
because certain tools, such as Powershell ISE, and regedit's export facility
add a Byte Order Mark as the first character of the file, which can cause tracebacks.
- To find Byte Order Marks in files, use C(Format-Hex <file> -Count 16) on Windows, and use C(od -a -t x1 -N 16 <file>) on Linux.
- "Also, you can override jinja2 settings by adding a special header to template file.
i.e. C(#jinja2:variable_start_string:'[%', variable_end_string:'%]', trim_blocks: no)
which changes the variable interpolation markers to [% var %] instead of {{ var }}.
This is the best way to prevent evaluation of things that look like, but should not be Jinja2.
raw/endraw in Jinja2 will not work as you expect because templates in Ansible are recursively evaluated."
- You can use the M(win_copy) module with the C(content:) option if you prefer the template inline,
as part of the playbook.
seealso:
- module: template
- module: win_copy
author:
- Jon Hawkesworth (@jhawkesworth)
'''
EXAMPLES = r'''
- name: Create a file from a Jinja2 template
win_template:
src: /mytemplates/file.conf.j2
dest: C:\Temp\file.conf
- name: Create a Unix-style file from a Jinja2 template
win_template:
src: unix/config.conf.j2
dest: C:\share\unix\config.conf
newline_sequence: '\n'
backup: yes
'''
RETURN = r'''
backup_file:
description: Name of the backup file that was created.
returned: if backup=yes
type: str
sample: C:\Path\To\File.txt.11540.20150212-220915.bak
'''
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,504 |
win_template and win_copy with content should support encoding
|
##### SUMMARY
Mentioned modules should allow to select encoding for a file (defaulting to utf8 or utf16)
I have a requirement to generate simple files with ASCII encoding. I can't do that with either win_copy or win_template.
win_copy currently creates the file as UTF-16.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
win_copy
win_template
##### ADDITIONAL INFORMATION
Tested on WinSrv 2016, PS-5.1
```yaml
- name: Bootstraping
hosts: all
tasks:
- name: Deploy file
win_copy:
content: 'text'
dest: 'C:\some\path.txt'
```
|
https://github.com/ansible/ansible/issues/59504
|
https://github.com/ansible/ansible/pull/59701
|
4c1f52c6c0003cfeb271342a579a1f6887c8eb9c
|
652bfc7e19f3bab086c1d0389c8d87933f261d54
| 2019-07-24T10:00:21Z |
python
| 2019-07-30T22:05:24Z |
lib/ansible/plugins/doc_fragments/template_common.py
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,504 |
win_template and win_copy with content should support encoding
|
##### SUMMARY
Mentioned modules should allow to select encoding for a file (defaulting to utf8 or utf16)
I have a requirement to generate simple files with ASCII encoding. I can't do that with either win_copy or win_template.
win_copy currently creates the file as UTF-16.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
win_copy
win_template
##### ADDITIONAL INFORMATION
Tested on WinSrv 2016, PS-5.1
```yaml
- name: Bootstraping
hosts: all
tasks:
- name: Deploy file
win_copy:
content: 'text'
dest: 'C:\some\path.txt'
```
|
https://github.com/ansible/ansible/issues/59504
|
https://github.com/ansible/ansible/pull/59701
|
4c1f52c6c0003cfeb271342a579a1f6887c8eb9c
|
652bfc7e19f3bab086c1d0389c8d87933f261d54
| 2019-07-24T10:00:21Z |
python
| 2019-07-30T22:05:24Z |
test/integration/targets/win_template/tasks/main.yml
|
# test code for the template module
# (c) 2014, Michael DeHaan <[email protected]>
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# DOS TEMPLATE
- name: fill in a basic template (DOS)
win_template:
src: foo.j2
dest: '{{ win_output_dir }}/foo.dos.templated'
register: template_result
- name: verify that the file was marked as changed (DOS)
assert:
that:
- 'template_result is changed'
- name: fill in a basic template again (DOS)
win_template:
src: foo.j2
dest: '{{ win_output_dir }}/foo.dos.templated'
register: template_result2
- name: verify that the template was not changed (DOS)
assert:
that:
- 'template_result2 is not changed'
# VERIFY DOS CONTENTS
- name: copy known good into place (DOS)
win_copy:
src: foo.dos.txt
dest: '{{ win_output_dir }}\\foo.dos.txt'
- name: compare templated file to known good (DOS)
raw: fc.exe {{ win_output_dir }}\\foo.dos.templated {{ win_output_dir }}\\foo.dos.txt
register: diff_result
- debug:
var: diff_result
- name: verify templated file matches known good (DOS)
assert:
that:
- '"FC: no differences encountered" in diff_result.stdout'
- "diff_result.rc == 0"
# UNIX TEMPLATE
- name: fill in a basic template (Unix)
win_template:
src: foo.j2
dest: '{{ win_output_dir }}/foo.unix.templated'
newline_sequence: '\n'
register: template_result
- name: verify that the file was marked as changed (Unix)
assert:
that:
- 'template_result is changed'
- name: fill in a basic template again (Unix)
win_template:
src: foo.j2
dest: '{{ win_output_dir }}/foo.unix.templated'
newline_sequence: '\n'
register: template_result2
- name: verify that the template was not changed (Unix)
assert:
that:
- 'template_result2 is not changed'
# VERIFY UNIX CONTENTS
- name: copy known good into place (Unix)
win_copy:
src: foo.unix.txt
dest: '{{ win_output_dir }}\\foo.unix.txt'
- name: compare templated file to known good (Unix)
raw: fc.exe {{ win_output_dir }}\\foo.unix.templated {{ win_output_dir }}\\foo.unix.txt
register: diff_result
- debug:
var: diff_result
- name: verify templated file matches known good (Unix)
assert:
that:
- '"FC: no differences encountered" in diff_result.stdout'
# VERIFY DOS CONTENTS
- name: copy known good into place (DOS)
win_copy:
src: foo.dos.txt
dest: '{{ win_output_dir }}\\foo.dos.txt'
- name: compare templated file to known good (DOS)
raw: fc.exe {{ win_output_dir }}\\foo.dos.templated {{ win_output_dir }}\\foo.dos.txt
register: diff_result
- debug:
var: diff_result
- name: verify templated file matches known good (DOS)
assert:
that:
- '"FC: no differences encountered" in diff_result.stdout'
# TEST BACKUP
- name: test backup (check_mode)
win_template:
src: foo.j2
dest: '{{ win_output_dir }}/foo.unix.templated'
backup: yes
register: cm_backup_result
check_mode: yes
- name: verify that a backup_file was returned
assert:
that:
- cm_backup_result is changed
- cm_backup_result.backup_file is not none
- name: test backup (normal mode)
win_template:
src: foo.j2
dest: '{{ win_output_dir }}/foo.unix.templated'
backup: yes
register: nm_backup_result
- name: check backup_file
win_stat:
path: '{{ nm_backup_result.backup_file }}'
register: backup_file
- name: verify that a backup_file was returned
assert:
that:
- nm_backup_result is changed
- backup_file.stat.exists == true
- name: create template dest directory
win_file:
path: '{{win_output_dir}}\directory'
state: directory
- name: template src file to directory with backslash (check mode)
win_template:
src: foo.j2
dest: '{{win_output_dir}}\directory\'
check_mode: yes
register: template_to_dir_backslash_check
- name: get result of template src file to directory with backslash (check_mode)
win_stat:
path: '{{win_output_dir}}\directory\foo.j2'
register: template_to_dir_backslash_result_check
- name: assert template src file to directory with backslash (check mode)
assert:
that:
- template_to_dir_backslash_check is changed
- not template_to_dir_backslash_result_check.stat.exists
- name: template src file to directory with backslash
win_template:
src: foo.j2
dest: '{{win_output_dir}}\directory\'
register: template_to_dir_backslash
- name: get result of template src file to directory with backslash
win_stat:
path: '{{win_output_dir}}\directory\foo.j2'
register: template_to_dir_backslash_result
- name: assert template src file to directory with backslash
assert:
that:
- template_to_dir_backslash is changed
- template_to_dir_backslash_result.stat.exists
- template_to_dir_backslash_result.stat.checksum == 'ed4f166b2937875ecad39c06648551f5af0b56d3'
- name: template src file to directory with backslash (idempotent)
win_template:
src: foo.j2
dest: '{{win_output_dir}}\directory\'
register: template_to_dir_backslash_again
- name: assert template src file to directory with backslash (idempotent)
assert:
that:
- not template_to_dir_backslash_again is changed
- name: template src file to directory (check mode)
win_template:
src: another_foo.j2
dest: '{{win_output_dir}}\directory'
check_mode: yes
register: template_to_dir_check
- name: get result of template src file to directory (check_mode)
win_stat:
path: '{{win_output_dir}}\directory\another_foo.j2'
register: template_to_dir_result_check
- name: assert template src file to directory (check mode)
assert:
that:
- template_to_dir_check is changed
- not template_to_dir_result_check.stat.exists
- name: template src file to directory
win_template:
src: another_foo.j2
dest: '{{win_output_dir}}\directory'
register: template_to_dir
- name: get result of template src file to directory
win_stat:
path: '{{win_output_dir}}\directory\another_foo.j2'
register: template_to_dir_result
- name: assert template src file to directory
assert:
that:
- template_to_dir is changed
- template_to_dir_result.stat.exists
- template_to_dir_result.stat.checksum == 'b10b6f27290d554a77da2457b2ccd7d6de86b920'
- name: template src file to directory (idempotent)
win_template:
src: another_foo.j2
dest: '{{win_output_dir}}\directory'
register: template_to_dir_again
- name: assert template src file to directory (idempotent)
assert:
that:
- not template_to_dir_again is changed
# VERIFY MODE
# can't set file mode on windows so commenting this test out
#- name: set file mode
# win_file: path={{win_output_dir}}/foo.templated mode=0644
# register: file_result
#- name: ensure file mode did not change
# assert:
# that:
# - "file_result.changed != True"
# commenting out all the following tests as expanduser and file modes are not Windows concepts.
# VERIFY dest as a directory does not break file attributes
# Note: expanduser is needed to go down the particular codepath that was broken before
#- name: setup directory for test
# win_file: state=directory dest={{win_output_dir | expanduser}}/template-dir mode=0755 owner=nobody group=root
#- name: set file mode when the destination is a directory
# win_template: src=foo.j2 dest={{win_output_dir | expanduser}}/template-dir/ mode=0600 owner=root group=root
#- name: set file mode when the destination is a directory
# win_template: src=foo.j2 dest={{win_output_dir | expanduser}}/template-dir/ mode=0600 owner=root group=root
# register: file_result
#
#- name: check that the file has the correct attributes
# win_stat: path={{win_output_dir | expanduser}}/template-dir/foo.j2
# register: file_attrs
#
#- assert:
# that:
# - "file_attrs.stat.uid == 0"
# - "file_attrs.stat.pw_name == 'root'"
# - "file_attrs.stat.mode == '0600'"
#
#- name: check that the containing directory did not change attributes
# win_stat: path={{win_output_dir | expanduser}}/template-dir/
# register: dir_attrs
#
#- assert:
# that:
# - "dir_attrs.stat.uid != 0"
# - "dir_attrs.stat.pw_name == 'nobody'"
# - "dir_attrs.stat.mode == '0755'"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,504 |
win_template and win_copy with content should support encoding
|
##### SUMMARY
The mentioned modules should allow selecting an encoding for the file (defaulting to UTF-8 or UTF-16).
I have a requirement to generate simple files with ASCII encoding. I can't do that with either win_copy or win_template.
win_copy currently creates the file as UTF-16.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
win_copy
win_template
##### ADDITIONAL INFORMATION
Tested on WinSrv 2016, PS-5.1
```yaml
- name: Bootstrapping
hosts: all
tasks:
- name: Deploy file
win_copy:
content: 'text'
dest: 'C:\some\path.txt'
```
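
The impact of the missing option can be sketched in plain Python (not Ansible module code): the same text produces different bytes per encoding, and UTF-16 prepends a byte order mark, which is what trips up consumers expecting ASCII.

```python
# Illustration of how an `encoding` option changes the bytes written to disk.
text = "text"

utf16 = text.encode("utf-16")       # BOM + 2 bytes per character
utf8 = text.encode("utf-8")         # no BOM, 1 byte per ASCII character
ascii_bytes = text.encode("ascii")  # identical to UTF-8 for pure-ASCII input

print(utf16[:2])            # the UTF-16 LE byte order mark b'\xff\xfe'
print(utf8 == ascii_bytes)  # True for pure-ASCII content
```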
|
https://github.com/ansible/ansible/issues/59504
|
https://github.com/ansible/ansible/pull/59701
|
4c1f52c6c0003cfeb271342a579a1f6887c8eb9c
|
652bfc7e19f3bab086c1d0389c8d87933f261d54
| 2019-07-24T10:00:21Z |
python
| 2019-07-30T22:05:24Z |
test/integration/targets/win_template/templates/foo.utf-8.j2
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,667 |
win_dsc / ScheduledTask start time not working in 2.8
|
##### SUMMARY
Calling win_dsc with the ScheduledTask resource from https://github.com/PowerShell/ComputerManagementDsc/wiki/ScheduledTask
worked in 2.7 but fails in 2.8 - Tested in 2.8.2 and 2.8.3.
Possibly related to the exec_wrapper. The AnsiballZ_win_dsc.ps1 LOOKS correct...
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
win_dsc
##### ANSIBLE VERSION
```paste below
ansible 2.8.3
config file = None
configured module search path = [u'/home/mundym/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Oct 30 2018, 23:45:53) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
```
##### CONFIGURATION
```paste below
```
##### OS / ENVIRONMENT
CentOS 7 for Ansible, target OS is Windows Server 2016
##### STEPS TO REPRODUCE
Try to create a scheduled task with minimal options but definitely include start time
```yaml
- set_fact:
scheduled_tasks:
- task_name: "Task"
task_path: "{{task_path}}"
action_executable: "powershell.exe"
action_arguments: "scriptname"
action_workingpath: "dir"
schedule_type: Daily
days_interval: 1
start_time: "06:00PM"
executeas_credential_username: "User"
executeas_credential_password: "Password"
start_when_available: False
execution_timelimit: "72:00:00"
logon_type: Password
state: Present
- win_psmodule:
name: ComputerManagementDsc
- win_dsc:
resource_name: ScheduledTask
TaskName : "{{ item.task_name }}"
TaskPath : "{{ task_path }}"
Description : "{{ item.description | default(omit) }}"
ActionExecutable : "{{ item.action_executable }}"
ActionArguments : "{{ item.action_arguments | default(omit) }}"
ActionWorkingPath : "{{ item.action_workingpath | default(omit) }}"
ScheduleType : "{{ item.schedule_type}}"
DaysInterval : "{{ item.days_interval | default(omit) }}"
WeeksInterval : "{{ item.weeks_interval | default(omit) }}"
RepeatInterval : "{{ item.repeat_interval | default(omit) }}"
StartTime : "{{ item.start_time | default(omit) }}"
DaysOfWeek : "{{ item.days_of_week | default(omit) }}"
ExecuteAsCredential_username: "{{ item.executeas_credential_username | default(omit) }}"
ExecuteAsCredential_password: "{{ item.executeas_credential_password | default(omit) }}"
LogonType : "{{ item.logon_type | default(omit) }}"
RandomDelay : "{{ item.random_delay | default(omit) }}"
RepetitionDuration : "{{ item.repetition_duration | default('Indefinitely') }}"
DisallowDemandStart : "{{ item.disallow_demand_start | default(False) }}"
DisallowHardTerminate : "{{ item.disallow_hard_terminate | default(False) }}"
Compatibility : "{{ item.compatibility | default('Vista') }}"
Hidden : "{{ item.hidden | default(False) }}"
RunOnlyIfIdle : "{{ item.run_only_if_idle | default(omit) }}"
IdleDuration : "{{ item.idle_duration | default(omit) }}"
IdleWaitTimeout : "{{ item.idle_wait_timeout | default(omit) }}"
RestartOnIdle : "{{ item.restart_on_idle | default(omit) }}"
DontStopOnIdleEnd : "{{ item.dont_stop_on_idle_end | default(omit) }}"
DisallowStartOnRemoteAppSession: "{{ item.disallow_start_on_remote_app_session | default(False) }}"
ExecutionTimeLimit : "{{ item.execution_time_limit | default(omit) }}"
RestartCount : "{{ item.restart_count | default(omit) }}"
RestartInterval : "{{ item.restart_interval | default(omit) }}"
multipleInstances : "{{ item.multiple_instances | default('IgnoreNew') }}"
Priority : "{{ item.priority| default('7') }}"
RunLevel : "{{ item.runlevel | default('Limited') }}"
Enable : true
Ensure : Present
loop: "{{scheduled_tasks}}"
```
##### EXPECTED RESULTS
Task will be created
##### ACTUAL RESULTS
```paste below
"msg": "argument for StartTime is of type System.String and we were unable to convert to delegate: Exception calling \"ParseExact\" with \"4\" argument(s): \"String was not recognized as a valid DateTime.\""
```
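
The failure mode can be sketched in Python (illustrative only; the module itself uses .NET's `ParseExact`): a strict ISO 8601 parse rejects a value like `06:00PM`, while a format string matching the actual input parses it fine.

```python
from datetime import datetime

value = "06:00PM"

# A strict ISO-8601-style parse (analogous to the module's "o" format) fails:
try:
    datetime.strptime(value, "%Y-%m-%dT%H:%M:%S.%f")
    iso_ok = True
except ValueError:
    iso_ok = False

# A format matching the actual input succeeds:
parsed = datetime.strptime(value, "%I:%M%p")

print(iso_ok)       # False
print(parsed.hour)  # 18
```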
|
https://github.com/ansible/ansible/issues/59667
|
https://github.com/ansible/ansible/pull/59703
|
196347ff326e0fdd1d0c72adbc4aff42362b15aa
|
04ec47bdf16112435a1278149146bdb81de2b979
| 2019-07-26T23:48:54Z |
python
| 2019-07-30T22:45:37Z |
changelogs/fragments/win_dsc-datetime.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,667 |
win_dsc / ScheduledTask start time not working in 2.8
|
|
https://github.com/ansible/ansible/issues/59667
|
https://github.com/ansible/ansible/pull/59703
|
196347ff326e0fdd1d0c72adbc4aff42362b15aa
|
04ec47bdf16112435a1278149146bdb81de2b979
| 2019-07-26T23:48:54Z |
python
| 2019-07-30T22:45:37Z |
lib/ansible/modules/windows/win_dsc.ps1
|
#!powershell
# Copyright: (c) 2015, Trond Hindenes <[email protected]>, and others
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
#AnsibleRequires -CSharpUtil Ansible.Basic
#Requires -Version 5
Function ConvertTo-ArgSpecType {
<#
.SYNOPSIS
Converts the DSC parameter type to the arg spec type required for Ansible.
#>
param(
[Parameter(Mandatory=$true)][String]$CimType
)
$arg_type = switch($CimType) {
Boolean { "bool" }
Char16 { [Func[[Object], [Char]]]{ [System.Char]::Parse($args[0].ToString()) } }
    DateTime { [Func[[Object], [DateTime]]]{
        # "o" == ISO 8601 format. Fall back to a general invariant-culture
        # parse so non-ISO values like "06:00PM" are still accepted.
        try {
            [System.DateTime]::ParseExact($args[0].ToString(), "o", [CultureInfo]::InvariantCulture,
                [System.Globalization.DateTimeStyles]::None)
        } catch {
            [System.DateTime]::Parse($args[0].ToString(), [CultureInfo]::InvariantCulture)
        }
    }}
Instance { "dict" }
Real32 { "float" }
Real64 { [Func[[Object], [Double]]]{ [System.Double]::Parse($args[0].ToString()) } }
Reference { "dict" }
SInt16 { [Func[[Object], [Int16]]]{ [System.Int16]::Parse($args[0].ToString()) } }
SInt32 { "int" }
SInt64 { [Func[[Object], [Int64]]]{ [System.Int64]::Parse($args[0].ToString()) } }
SInt8 { [Func[[Object], [SByte]]]{ [System.SByte]::Parse($args[0].ToString()) } }
String { "str" }
UInt16 { [Func[[Object], [UInt16]]]{ [System.UInt16]::Parse($args[0].ToString()) } }
UInt32 { [Func[[Object], [UInt32]]]{ [System.UInt32]::Parse($args[0].ToString()) } }
UInt64 { [Func[[Object], [UInt64]]]{ [System.UInt64]::Parse($args[0].ToString()) } }
UInt8 { [Func[[Object], [Byte]]]{ [System.Byte]::Parse($args[0].ToString()) } }
Unknown { "raw" }
default { "raw" }
}
return $arg_type
}
Function Get-DscCimClassProperties {
<#
.SYNOPSIS
Gets a list of CimProperties of a CIM class. It filters out any magic or
read-only properties that we don't need to know about.
#>
param([Parameter(Mandatory=$true)][String]$ClassName)
$resource = Get-CimClass -ClassName $ClassName -Namespace root\Microsoft\Windows\DesiredStateConfiguration
# Filter out any magic properties that are used internally on an OMI_BaseResource
# https://github.com/PowerShell/PowerShell/blob/master/src/System.Management.Automation/DscSupport/CimDSCParser.cs#L1203
$magic_properties = @("ResourceId", "SourceInfo", "ModuleName", "ModuleVersion", "ConfigurationName")
$properties = $resource.CimClassProperties | Where-Object {
($resource.CimSuperClassName -ne "OMI_BaseResource" -or $_.Name -notin $magic_properties) -and
-not $_.Flags.HasFlag([Microsoft.Management.Infrastructure.CimFlags]::ReadOnly)
}
return ,$properties
}
Function Add-PropertyOption {
<#
.SYNOPSIS
Adds the spec for the property type to the existing module specification.
#>
param(
[Parameter(Mandatory=$true)][Hashtable]$Spec,
[Parameter(Mandatory=$true)]
[Microsoft.Management.Infrastructure.CimPropertyDeclaration]$Property
)
$option = @{
required = $false
}
$property_name = $Property.Name
$property_type = $Property.CimType.ToString()
if ($Property.Flags.HasFlag([Microsoft.Management.Infrastructure.CimFlags]::Key) -or
$Property.Flags.HasFlag([Microsoft.Management.Infrastructure.CimFlags]::Required)) {
$option.required = $true
}
if ($null -ne $Property.Qualifiers['Values']) {
$option.choices = [System.Collections.Generic.List`1[Object]]$Property.Qualifiers['Values'].Value
}
if ($property_name -eq "Name") {
# For backwards compatibility we support specifying the Name DSC property as item_name
$option.aliases = @("item_name")
} elseif ($property_name -ceq "key") {
# There seems to be a bug in the CIM property parsing when the property name is 'Key'. The CIM instance will
# think the name is 'key' when the MOF actually defines it as 'Key'. We set the proper casing so the module arg
# validator won't fire a case sensitive warning
$property_name = "Key"
}
if ($Property.ReferenceClassName -eq "MSFT_Credential") {
# Special handling for the MSFT_Credential type (PSCredential), we handle this with having 2 options that
# have the suffix _username and _password.
$option_spec_pass = @{
type = "str"
required = $option.required
no_log = $true
}
$Spec.options."$($property_name)_password" = $option_spec_pass
$Spec.required_together.Add(@("$($property_name)_username", "$($property_name)_password")) > $null
$property_name = "$($property_name)_username"
$option.type = "str"
} elseif ($Property.ReferenceClassName -eq "MSFT_KeyValuePair") {
$option.type = "dict"
} elseif ($property_type.EndsWith("Array")) {
$option.type = "list"
$option.elements = ConvertTo-ArgSpecType -CimType $property_type.Substring(0, $property_type.Length - 5)
} else {
$option.type = ConvertTo-ArgSpecType -CimType $property_type
}
if (($option.type -eq "dict" -or ($option.type -eq "list" -and $option.elements -eq "dict")) -and
$Property.ReferenceClassName -ne "MSFT_KeyValuePair") {
# Get the sub spec if the type is a Instance (CimInstance/dict)
$sub_option_spec = Get-OptionSpec -ClassName $Property.ReferenceClassName
$option += $sub_option_spec
}
$Spec.options.$property_name = $option
}
Function Get-OptionSpec {
<#
.SYNOPSIS
Generates the spec used in AnsibleModule for a CIM MOF resource name.
.NOTES
This won't be able to retrieve the default values for an option as that is not defined in the MOF for a resource.
Default values are still preserved in the DSC engine if we don't pass in the property at all, we just can't report
on what they are automatically.
#>
param(
[Parameter(Mandatory=$true)][String]$ClassName
)
$spec = @{
options = @{}
required_together = [System.Collections.ArrayList]@()
}
$properties = Get-DscCimClassProperties -ClassName $ClassName
foreach ($property in $properties) {
Add-PropertyOption -Spec $spec -Property $property
}
return $spec
}
Function ConvertTo-CimInstance {
<#
.SYNOPSIS
Converts a dict to a CimInstance of the specified Class. Also provides a
better error message if this fails that contains the option name that failed.
#>
param(
[Parameter(Mandatory=$true)][String]$Name,
[Parameter(Mandatory=$true)][String]$ClassName,
[Parameter(Mandatory=$true)][System.Collections.IDictionary]$Value,
[Parameter(Mandatory=$true)][Ansible.Basic.AnsibleModule]$Module,
[Switch]$Recurse
)
$properties = @{}
foreach ($value_info in $Value.GetEnumerator()) {
# Need to remove all null values from existing dict so the conversion works
if ($null -eq $value_info.Value) {
continue
}
$properties.($value_info.Key) = $value_info.Value
}
if ($Recurse) {
# We want to validate and convert any values to what's required by DSC
$properties = ConvertTo-DscProperty -ClassName $ClassName -Params $properties -Module $Module
}
try {
return (New-CimInstance -ClassName $ClassName -Property $properties -ClientOnly)
} catch {
# New-CimInstance raises a poor error message, make sure we mention what option it is for
$Module.FailJson("Failed to cast dict value for option '$Name' to a CimInstance: $($_.Exception.Message)", $_)
}
}
Function ConvertTo-DscProperty {
<#
.SYNOPSIS
Converts the input module parameters that have been validated and casted
into the types expected by the DSC engine. This is mostly done to deal with
types like PSCredential and Dictionaries.
#>
param(
[Parameter(Mandatory=$true)][String]$ClassName,
[Parameter(Mandatory=$true)][System.Collections.IDictionary]$Params,
[Parameter(Mandatory=$true)][Ansible.Basic.AnsibleModule]$Module
)
$properties = Get-DscCimClassProperties -ClassName $ClassName
$dsc_properties = @{}
foreach ($property in $properties) {
$property_name = $property.Name
$property_type = $property.CimType.ToString()
if ($property.ReferenceClassName -eq "MSFT_Credential") {
$username = $Params."$($property_name)_username"
$password = $Params."$($property_name)_password"
# No user set == No option set in playbook, skip this property
if ($null -eq $username) {
continue
}
$sec_password = ConvertTo-SecureString -String $password -AsPlainText -Force
$value = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $username, $sec_password
} else {
$value = $Params.$property_name
# The actual value wasn't set, skip adding this property
if ($null -eq $value) {
continue
}
if ($property.ReferenceClassName -eq "MSFT_KeyValuePair") {
$key_value_pairs = [System.Collections.Generic.List`1[CimInstance]]@()
foreach ($value_info in $value.GetEnumerator()) {
$kvp = @{Key = $value_info.Key; Value = $value_info.Value.ToString()}
$cim_instance = ConvertTo-CimInstance -Name $property_name -ClassName MSFT_KeyValuePair `
-Value $kvp -Module $Module
$key_value_pairs.Add($cim_instance) > $null
}
$value = $key_value_pairs.ToArray()
} elseif ($null -ne $property.ReferenceClassName) {
# Convert the dict to a CimInstance (or list of CimInstances)
$convert_args = @{
ClassName = $property.ReferenceClassName
Module = $Module
Name = $property_name
Recurse = $true
}
if ($property_type.EndsWith("Array")) {
$value = [System.Collections.Generic.List`1[CimInstance]]@()
foreach ($raw in $Params.$property_name.GetEnumerator()) {
$cim_instance = ConvertTo-CimInstance -Value $raw @convert_args
$value.Add($cim_instance) > $null
}
$value = $value.ToArray() # Need to make sure we are dealing with an Array not a List
} else {
$value = ConvertTo-CimInstance -Value $value @convert_args
}
}
}
$dsc_properties.$property_name = $value
}
return $dsc_properties
}
Function Invoke-DscMethod {
<#
.SYNOPSIS
Invokes the DSC Resource Method specified in another PS pipeline. This is
done so we can retrieve the Verbose stream and return it back to the user
for further debugging.
#>
param(
[Parameter(Mandatory=$true)][Ansible.Basic.AnsibleModule]$Module,
[Parameter(Mandatory=$true)][String]$Method,
[Parameter(Mandatory=$true)][Hashtable]$Arguments
)
# Invoke the DSC resource in a separate runspace so we can capture the Verbose output
$ps = [PowerShell]::Create()
$ps.AddCommand("Invoke-DscResource").AddParameter("Method", $Method) > $null
$ps.AddParameters($Arguments) > $null
$result = $ps.Invoke()
# Pass the warnings through to the AnsibleModule return result
foreach ($warning in $ps.Streams.Warning) {
$Module.Warn($warning.Message)
}
# If running at a high enough verbosity, add the verbose output to the AnsibleModule return result
if ($Module.Verbosity -ge 3) {
$verbose_logs = [System.Collections.Generic.List`1[String]]@()
foreach ($verbosity in $ps.Streams.Verbose) {
$verbose_logs.Add($verbosity.Message) > $null
}
$Module.Result."verbose_$($Method.ToLower())" = $verbose_logs
}
if ($ps.HadErrors) {
# Cannot pass in the ErrorRecord as it's a RemotingErrorRecord and doesn't contain the ScriptStackTrace
# or other info that would be useful
$Module.FailJson("Failed to invoke DSC $Method method: $($ps.Streams.Error[0].Exception.Message)")
}
return $result
}
# win_dsc is unique in that it builds the arg spec based on DSC Resource input. To get this info
# we need to read the resource_name and module_version value which is done outside of Ansible.Basic
if ($args.Length -gt 0) {
$params = Get-Content -Path $args[0] | ConvertFrom-Json
} else {
$params = $complex_args
}
if (-not $params.ContainsKey("resource_name")) {
$res = @{
msg = "missing required argument: resource_name"
failed = $true
}
Write-Output -InputObject (ConvertTo-Json -Compress -InputObject $res)
exit 1
}
$resource_name = $params.resource_name
if ($params.ContainsKey("module_version")) {
$module_version = $params.module_version
} else {
$module_version = "latest"
}
$module_versions = (Get-DscResource -Name $resource_name -ErrorAction SilentlyContinue | Sort-Object -Property Version)
$resource = $null
if ($module_version -eq "latest" -and $null -ne $module_versions) {
$resource = $module_versions[-1]
} elseif ($module_version -ne "latest") {
$resource = $module_versions | Where-Object { $_.Version -eq $module_version }
}
if (-not $resource) {
if ($module_version -eq "latest") {
$msg = "Resource '$resource_name' not found."
} else {
$msg = "Resource '$resource_name' with version '$module_version' not found."
$msg += " Versions installed: '$($module_versions.Version -join "', '")'."
}
Write-Output -InputObject (ConvertTo-Json -Compress -InputObject @{ failed = $true; msg = $msg })
exit 1
}
# Build the base args for the DSC Invocation based on the resource selected
$dsc_args = @{
Name = $resource.Name
}
# Binary resources are not working very well with that approach - need to guesstimate module name/version
$module_version = $null
if ($resource.Module) {
$dsc_args.ModuleName = @{
ModuleName = $resource.Module.Name
ModuleVersion = $resource.Module.Version
}
$module_version = $resource.Module.Version.ToString()
} else {
$dsc_args.ModuleName = "PSDesiredStateConfiguration"
}
# To ensure the class registered with CIM is the one based on our version, we want to run the Get method so the DSC
# engine updates the metadata property. We don't care about any errors here
try {
Invoke-DscResource -Method Get -Property @{Fake="Fake"} @dsc_args > $null
} catch {}
# Dynamically build the option spec based on the resource_name specified and create the module object
$spec = Get-OptionSpec -ClassName $resource.ResourceType
$spec.supports_check_mode = $true
$spec.options.module_version = @{ type = "str"; default = "latest" }
$spec.options.resource_name = @{ type = "str"; required = $true }
$module = [Ansible.Basic.AnsibleModule]::Create($args, $spec)
$module.Result.reboot_required = $false
$module.Result.module_version = $module_version
# Build the DSC invocation arguments and invoke the resource
$dsc_args.Property = ConvertTo-DscProperty -ClassName $resource.ResourceType -Module $module -Params $Module.Params
$dsc_args.Verbose = $true
$test_result = Invoke-DscMethod -Module $module -Method Test -Arguments $dsc_args
if ($test_result.InDesiredState -ne $true) {
if (-not $module.CheckMode) {
$result = Invoke-DscMethod -Module $module -Method Set -Arguments $dsc_args
$module.Result.reboot_required = $result.RebootRequired
}
$module.Result.changed = $true
}
$module.ExitJson()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,667 |
win_dsc / ScheduledTask start time not working in 2.8
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Calling win_dsc with the ScheduledTask resource from https://github.com/PowerShell/ComputerManagementDsc/wiki/ScheduledTask
worked in 2.7 but fails in 2.8 - Tested in 2.8.2 and 2.8.3.
Possibly related to the exec_wrapper. The AnsiballZ_win_dsc.ps1 LOOKS correct...
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
win_dsc
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.3
config file = None
configured module search path = [u'/home/mundym/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Oct 30 2018, 23:45:53) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
CentOS 7 for Ansible, target OS is Windows Server 2016
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Try to create a scheduled task with minimal options but definitely include start time
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- set_fact:
scheduled_tasks:
- task_name: "Task"
task_path: "{{task_path}}"
action_executable: "powershell.exe"
action_arguments: "scriptname"
action_workingpath: "dir"
schedule_type: Daily
days_interval: 1
start_time: "06:00PM"
executeas_credential_username: "User"
executeas_credential_password: "Password"
start_when_available: False
execution_timelimit: "72:00:00"
logon_type: Password
state: Present
- win_psmodule:
name: ComputerManagementDsc
- win_dsc:
resource_name: ScheduledTask
TaskName : "{{ item.task_name }}"
TaskPath : "{{ task_path }}"
Description : "{{ item.description | default(omit) }}"
ActionExecutable : "{{ item.action_executable }}"
ActionArguments : "{{ item.action_arguments | default(omit) }}"
ActionWorkingPath : "{{ item.action_workingpath | default(omit) }}"
ScheduleType : "{{ item.schedule_type}}"
DaysInterval : "{{ item.days_interval | default(omit) }}"
WeeksInterval : "{{ item.weeks_interval | default(omit) }}"
RepeatInterval : "{{ item.repeat_interval | default(omit) }}"
StartTime : "{{ item.start_time | default(omit) }}"
DaysOfWeek : "{{ item.days_of_week | default(omit) }}"
ExecuteAsCredential_username: "{{ item.executeas_credential_username | default(omit) }}"
ExecuteAsCredential_password: "{{ item.executeas_credential_password | default(omit) }}"
LogonType : "{{ item.logon_type | default(omit) }}"
RandomDelay : "{{ item.random_delay | default(omit) }}"
RepetitionDuration : "{{ item.repetition_duration | default('Indefinitely') }}"
DisallowDemandStart : "{{ item.disallow_demand_start | default(False) }}"
DisallowHardTerminate : "{{ item.disallow_hard_terminate | default(False) }}"
Compatibility : "{{ item.compatibility | default('Vista') }}"
Hidden : "{{ item.hidden | default(False) }}"
RunOnlyIfIdle : "{{ item.run_only_if_idle | default(omit) }}"
IdleDuration : "{{ item.idle_duration | default(omit) }}"
IdleWaitTimeout : "{{ item.idle_wait_timeout | default(omit) }}"
RestartOnIdle : "{{ item.restart_on_idle | default(omit) }}"
DontStopOnIdleEnd : "{{ item.dont_stop_on_idle_end | default(omit) }}"
DisallowStartOnRemoteAppSession: "{{ item.disallow_start_on_remote_app_session | default(False) }}"
ExecutionTimeLimit : "{{ item.execution_time_limit | default(omit) }}"
RestartCount : "{{ item.restart_count | default(omit) }}"
RestartInterval : "{{ item.restart_interval | default(omit) }}"
multipleInstances : "{{ item.multiple_instances | default('IgnoreNew') }}"
Priority : "{{ item.priority | default('7') }}"
RunLevel : "{{ item.runlevel | default('Limited') }}"
Enable : true
Ensure : Present
loop: "{{scheduled_tasks}}"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Task will be created
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
"msg": "argument for StartTime is of type System.String and we were unable to convert to delegate: Exception calling \"ParseExact\" with \"4\" argument(s): \"String was not recognized as a valid DateTime.\""
```
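The parse failure above comes from passing a 12-hour clock string (`"06:00PM"`) where the DSC resource expects a value that .NET can parse as a DateTime; the module docs recommend an ISO 8601 string. A minimal sketch (the helper name is hypothetical, not part of Ansible) of normalizing such a value before handing it to `win_dsc`:

```python
from datetime import datetime

def to_iso8601(clock):
    # Parse a 12-hour clock string like "06:00PM" and emit the
    # ISO 8601 time form that DSC's DateTime conversion accepts.
    return datetime.strptime(clock, "%I:%M%p").time().isoformat()

print(to_iso8601("06:00PM"))  # 18:00:00
```

The converted value could then be used as `start_time` in the `set_fact` task above instead of the 12-hour form.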
|
https://github.com/ansible/ansible/issues/59667
|
https://github.com/ansible/ansible/pull/59703
|
196347ff326e0fdd1d0c72adbc4aff42362b15aa
|
04ec47bdf16112435a1278149146bdb81de2b979
| 2019-07-26T23:48:54Z |
python
| 2019-07-30T22:45:37Z |
lib/ansible/modules/windows/win_dsc.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2015, Trond Hindenes <[email protected]>, and others
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: win_dsc
version_added: "2.4"
short_description: Invokes a PowerShell DSC configuration
description:
- Configures a resource using PowerShell DSC.
- Requires PowerShell version 5.0 or newer.
- Most of the options for this module are dynamic and will vary depending on
the DSC Resource specified in I(resource_name).
- See :doc:`/user_guide/windows_dsc` for more information on how to use this module.
options:
resource_name:
description:
- The name of the DSC Resource to use.
- Must be accessible to PowerShell using any of the default paths.
type: str
required: yes
module_version:
description:
- Can be used to configure the exact version of the DSC resource to be
invoked.
- Useful if the target node has multiple versions of the module containing
the DSC resource installed.
- If not specified, the module will follow standard PowerShell convention
and use the highest version available.
type: str
default: latest
free_form:
description:
- The M(win_dsc) module takes in multiple free form options based on the
DSC resource being invoked by I(resource_name).
- There is no option actually named C(free_form) so see the examples.
- This module will try to convert the option to the correct type required
by the DSC resource and throw a warning if it fails.
- If the type of the DSC resource option is a C(CimInstance) or
C(CimInstance[]), this means the value should be a dictionary or list
of dictionaries based on the values required by that option.
- If the type of the DSC resource option is a C(PSCredential) then there
needs to be 2 options set in the Ansible task definition suffixed with
C(_username) and C(_password).
- If the type of the DSC resource option is an array, then a list should be
provided but a comma separated string also works. Use a list where
possible as no escaping is required and it works with more complex types
like C(CimInstance[]).
- If the type of the DSC resource option is a C(DateTime), use a string in
the form of an ISO 8601 string.
- Since Ansible 2.8, Ansible will now validate the input fields against the
DSC resource definition automatically. Older versions will silently
ignore invalid fields.
type: str
required: true
notes:
- By default there are a few builtin resources that come with PowerShell 5.0,
see U(https://docs.microsoft.com/en-us/powershell/dsc/builtinresource) for
more information on these resources.
- Custom DSC resources can be installed with M(win_psmodule) using the I(name)
option.
- The DSC engine runs each task as the SYSTEM account; any resources that need
to be accessed with a different account need to have C(PsDscRunAsCredential)
set.
- To see the valid options for a DSC resource, run the module with C(-vvv) to
show the possible module invocation. Default values are not shown in this
output but are applied within the DSC engine.
author:
- Trond Hindenes (@trondhindenes)
'''
EXAMPLES = r'''
- name: Extract zip file
win_dsc:
resource_name: Archive
Ensure: Present
Path: C:\Temp\zipfile.zip
Destination: C:\Temp\Temp2
- name: Install a Windows feature with the WindowsFeature resource
win_dsc:
resource_name: WindowsFeature
Name: telnet-client
- name: Edit HKCU reg key under specific user
win_dsc:
resource_name: Registry
Ensure: Present
Key: HKEY_CURRENT_USER\ExampleKey
ValueName: TestValue
ValueData: TestData
PsDscRunAsCredential_username: '{{ansible_user}}'
PsDscRunAsCredential_password: '{{ansible_password}}'
no_log: true
- name: Create file with multiple attributes
win_dsc:
resource_name: File
DestinationPath: C:\ansible\dsc
Attributes: # can also be a comma separated string, e.g. 'Hidden, System'
- Hidden
- System
Ensure: Present
Type: Directory
- name: Call DSC resource with DateTime option
win_dsc:
resource_name: DateTimeResource
DateTimeOption: '2019-02-22T13:57:31.2311892+00:00'
# more complex example using custom DSC resource and dict values
- name: Setup the xWebAdministration module
win_psmodule:
name: xWebAdministration
state: present
- name: Create IIS Website with Binding and Authentication options
win_dsc:
resource_name: xWebsite
Ensure: Present
Name: DSC Website
State: Started
PhysicalPath: C:\inetpub\wwwroot
BindingInfo: # Example of a CimInstance[] DSC parameter (list of dicts)
- Protocol: https
Port: 1234
CertificateStoreName: MY
CertificateThumbprint: C676A89018C4D5902353545343634F35E6B3A659
HostName: DSCTest
IPAddress: '*'
SSLFlags: '1'
- Protocol: http
Port: 4321
IPAddress: '*'
AuthenticationInfo: # Example of a CimInstance DSC parameter (dict)
Anonymous: no
Basic: true
Digest: false
Windows: yes
'''
RETURN = r'''
module_version:
description: The version of the dsc resource/module used.
returned: always
type: str
sample: "1.0.1"
reboot_required:
description: Flag returned from the DSC engine indicating whether or not
the machine requires a reboot for the invoked changes to take effect.
returned: always
type: bool
sample: true
verbose_test:
description: The verbose output as a list from executing the DSC test
method.
returned: Ansible verbosity is -vvv or greater
type: list
sample: [
"Perform operation 'Invoke CimMethod' with the following parameters, ",
"[SERVER]: LCM: [Start Test ] [[File]DirectResourceAccess]",
"Operation 'Invoke CimMethod' complete."
]
verbose_set:
description: The verbose output as a list from executing the DSC Set
method.
returned: Ansible verbosity is -vvv or greater and a change occurred
type: list
sample: [
"Perform operation 'Invoke CimMethod' with the following parameters, ",
"[SERVER]: LCM: [Start Set ] [[File]DirectResourceAccess]",
"Operation 'Invoke CimMethod' complete."
]
'''
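The C(PSCredential) convention described in the option docs above (two task options suffixed C(_username) and C(_password) folded into one credential) can be sketched in plain Python. This is a hypothetical illustration of the pairing, not Ansible's internal API:

```python
def fold_credentials(options):
    # Pair "<name>_username" / "<name>_password" options into a single
    # credential dict keyed by "<name>", leaving other options untouched.
    out, creds = {}, {}
    for key, value in options.items():
        if key.endswith("_username"):
            creds.setdefault(key[:-9], {})["username"] = value
        elif key.endswith("_password"):
            creds.setdefault(key[:-9], {})["password"] = value
        else:
            out[key] = value
    out.update(creds)
    return out

print(fold_credentials({
    "Ensure": "Present",
    "PsDscRunAsCredential_username": "user",
    "PsDscRunAsCredential_password": "pass",
}))
```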
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,835 |
collections: sanity pylint unable to import ansible.module_utils.six.moves.urllib.*
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
pylint is failing on imports of ansible.module_utils.six.moves.urllib.*
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-test sanity pylint
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
devel
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
Running sanity test 'pylint' with Python 3.6
ERROR: Found 6 pylint issue(s) which need to be resolved:
ERROR: plugins/modules/vmware_cfg_backup.py:95:0: import-error Unable to import 'ansible.module_utils.six.moves.urllib.error'
ERROR: plugins/modules/vmware_guest_screenshot.py:142:0: import-error Unable to import 'ansible.module_utils.six.moves.urllib.parse'
ERROR: plugins/modules/vmware_host_firewall_manager.py:176:0: import-error Unable to import 'ansible_collections.jctanner.cloud_vmware.plugins.module_utils.compat'
ERROR: plugins/modules/vsphere_copy.py:112:0: import-error Unable to import 'ansible.module_utils.six.moves.urllib.parse'
ERROR: plugins/modules/vsphere_file.py:137:0: import-error Unable to import 'ansible.module_utils.six.moves.urllib.error'
ERROR: plugins/modules/vsphere_file.py:138:0: import-error Unable to import 'ansible.module_utils.six.moves.urllib.parse'
```
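The import-error above is not a missing dependency: `ansible.module_utils.six.moves` is synthesized at runtime by a module finder, so pylint's static import resolution cannot see it (hence the `ignored-modules` workaround in the sanity config). A stdlib-only sketch of the same pattern, using an illustrative module name:

```python
import sys
import types

# Synthesize a module at runtime and register it in sys.modules, with no
# .py file on disk for a static analyzer such as pylint to inspect.
mod = types.ModuleType("fake_moves")
mod.urlencode = lambda d: "&".join(f"{k}={v}" for k, v in d.items())
sys.modules["fake_moves"] = mod

import fake_moves  # resolves fine at runtime; pylint would flag import-error
print(fake_moves.urlencode({"a": 1}))  # a=1
```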
|
https://github.com/ansible/ansible/issues/59835
|
https://github.com/ansible/ansible/pull/59836
|
2198ecb1d2021bf949dd93fa0daf865a083b026f
|
ef6be41bf1fde2986662dc09451f6db46590b1e2
| 2019-07-31T01:40:57Z |
python
| 2019-07-31T02:19:54Z |
test/sanity/pylint/config/collection
|
[MESSAGES CONTROL]
disable=
abstract-method,
access-member-before-definition,
ansible-deprecated-version,
arguments-differ,
assignment-from-no-return,
assignment-from-none,
attribute-defined-outside-init,
bad-continuation,
bad-indentation,
bad-mcs-classmethod-argument,
broad-except,
c-extension-no-member,
cell-var-from-loop,
chained-comparison,
comparison-with-callable,
consider-iterating-dictionary,
consider-merging-isinstance,
consider-using-dict-comprehension,
consider-using-enumerate,
consider-using-get,
consider-using-in,
consider-using-set-comprehension,
consider-using-ternary,
deprecated-lambda,
deprecated-method,
deprecated-module,
eval-used,
exec-used,
expression-not-assigned,
fixme,
function-redefined,
global-statement,
global-variable-undefined,
import-self,
inconsistent-return-statements,
invalid-envvar-default,
invalid-name,
invalid-sequence-index,
keyword-arg-before-vararg,
len-as-condition,
line-too-long,
literal-comparison,
locally-disabled,
method-hidden,
misplaced-comparison-constant,
missing-docstring,
no-else-raise,
no-else-return,
no-init,
no-member,
no-name-in-module,
no-self-use,
no-value-for-parameter,
non-iterator-returned,
not-a-mapping,
not-an-iterable,
not-callable,
old-style-class,
pointless-statement,
pointless-string-statement,
possibly-unused-variable,
protected-access,
redefined-argument-from-local,
redefined-builtin,
redefined-outer-name,
redefined-variable-type,
reimported,
relative-beyond-top-level, # https://github.com/PyCQA/pylint/issues/2967
signature-differs,
simplifiable-if-expression,
simplifiable-if-statement,
subprocess-popen-preexec-fn,
super-init-not-called,
superfluous-parens,
too-few-public-methods,
too-many-ancestors,
too-many-arguments,
too-many-boolean-expressions,
too-many-branches,
too-many-function-args,
too-many-instance-attributes,
too-many-lines,
too-many-locals,
too-many-nested-blocks,
too-many-public-methods,
too-many-return-statements,
too-many-statements,
trailing-comma-tuple,
try-except-raise,
unbalanced-tuple-unpacking,
undefined-loop-variable,
unexpected-keyword-arg,
ungrouped-imports,
unidiomatic-typecheck,
unnecessary-pass,
unsubscriptable-object,
unsupported-assignment-operation,
unsupported-delete-operation,
unsupported-membership-test,
unused-argument,
unused-import,
unused-variable,
used-before-assignment,
useless-object-inheritance,
useless-return,
useless-super-delegation,
wrong-import-order,
wrong-import-position,
[BASIC]
bad-names=foo,
bar,
baz,
toto,
tutu,
tata,
_,
good-names=i,
j,
k,
ex,
Run,
[TYPECHECK]
ignored-modules=
_MovedItems,
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,787 |
iam_password_policy ignores group/aws module defaults
|
##### SUMMARY
I am using the [module defaults] feature to set common defaults among aws modules. However, iam_password_policy ignores these defaults.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/config/module_defaults.yml
##### ANSIBLE VERSION
```
ansible 2.8.1
config file = /home/mchappel/.ansible.cfg
configured module search path = [u'/home/mchappel/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Jun 11 2019, 12:19:05) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
```
##### CONFIGURATION
```
DEFAULT_CALLBACK_WHITELIST(/home/mchappel/.ansible.cfg) = [u'timer', u'profile_tasks']
HOST_KEY_CHECKING(/home/mchappel/.ansible.cfg) = False
RETRY_FILES_ENABLED(/home/mchappel/.ansible.cfg) = False
```
##### OS / ENVIRONMENT
RHEL7
##### STEPS TO REPRODUCE
```
- hosts: all
gather_facts: no
serial: 1
tasks:
- name: 'run roles under assumed context'
module_defaults:
group/aws:
aws_access_key: '{{ current_account_sts_credentials.access_key }}'
aws_secret_key: '{{ current_account_sts_credentials.secret_key }}'
security_token: '{{ current_account_sts_credentials.session_token }}'
block:
- iam_password_policy:
state: present
allow_pw_change: yes
min_pw_length: 14
require_symbols: yes
require_numbers: yes
require_uppercase: yes
require_lowercase: yes
pw_reuse_prevent: 6
pw_expire: 90
```
##### EXPECTED RESULTS
iam_password_policy uses provided defaults
##### ACTUAL RESULTS
iam_password_policy ignores provided defaults
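A simplified sketch (an assumption-laden illustration, not Ansible's actual implementation) of why the defaults were ignored: `group/aws` defaults only apply to modules listed under that group in `lib/ansible/config/module_defaults.yml`, and `iam_password_policy` was missing from that table:

```python
# Groupings table excerpt; iam_password_policy is deliberately absent,
# mirroring the bug. Module and key names are illustrative.
groupings = {"iam_policy": ["aws"], "ec2": ["aws"]}

def resolve_defaults(module, module_defaults):
    # Merge defaults from every group the module belongs to, then let
    # module-specific defaults override them.
    merged = {}
    for group in groupings.get(module, []):
        merged.update(module_defaults.get("group/" + group, {}))
    merged.update(module_defaults.get(module, {}))
    return merged

defaults = {"group/aws": {"aws_access_key": "AKIA..."}}
print(resolve_defaults("iam_password_policy", defaults))  # {} -> defaults ignored
print(resolve_defaults("ec2", defaults))
```

Adding the module to the groupings table (as the linked PR does) makes the group defaults resolve for it.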
|
https://github.com/ansible/ansible/issues/59787
|
https://github.com/ansible/ansible/pull/59788
|
b09fbc3bf3f5d10c8c1d6605c22de53ec9a3b054
|
c1e5758c4cb563cf727681ce798a682cc5f8acaf
| 2019-07-30T14:54:06Z |
python
| 2019-07-31T15:53:14Z |
changelogs/fragments/59788-module_defaults-update-aws-group.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,787 |
iam_password_policy ignores group/aws module defaults
|
|
https://github.com/ansible/ansible/issues/59787
|
https://github.com/ansible/ansible/pull/59788
|
b09fbc3bf3f5d10c8c1d6605c22de53ec9a3b054
|
c1e5758c4cb563cf727681ce798a682cc5f8acaf
| 2019-07-30T14:54:06Z |
python
| 2019-07-31T15:53:14Z |
lib/ansible/config/module_defaults.yml
|
version: '1.0'
groupings:
aws_acm_info:
- aws
aws_api_gateway:
- aws
aws_application_scaling_policy:
- aws
aws_az_info:
- aws
aws_batch_compute_environment:
- aws
aws_batch_job_definition:
- aws
aws_batch_job_queue:
- aws
aws_caller_info:
- aws
aws_config_aggregation_authorization:
- aws
aws_config_aggregator:
- aws
aws_config_delivery_channel:
- aws
aws_config_recorder:
- aws
aws_config_rule:
- aws
aws_direct_connect_connection:
- aws
aws_direct_connect_gateway:
- aws
aws_direct_connect_link_aggregation_group:
- aws
aws_direct_connect_virtual_interface:
- aws
aws_eks_cluster:
- aws
aws_elasticbeanstalk_app:
- aws
aws_glue_connection:
- aws
aws_glue_job:
- aws
aws_inspector_target:
- aws
aws_kms:
- aws
aws_kms_info:
- aws
aws_region_info:
- aws
aws_s3:
- aws
aws_s3_bucket_facts:
- aws
aws_s3_cors:
- aws
aws_ses_identity:
- aws
aws_ses_identity_policy:
- aws
aws_sgw_info:
- aws
aws_ssm_parameter_store:
- aws
aws_waf_condition:
- aws
aws_waf_info:
- aws
aws_waf_rule:
- aws
aws_waf_web_acl:
- aws
cloudformation:
- aws
cloudformation_facts:
- aws
cloudfront_distribution:
- aws
cloudfront_facts:
- aws
cloudfront_invalidation:
- aws
cloudfront_origin_access_identity:
- aws
cloudtrail:
- aws
cloudwatchevent_rule:
- aws
cloudwatchlogs_log_group:
- aws
cloudwatchlogs_log_group_info:
- aws
cpm_plugconfig:
- cpm
cpm_plugcontrol:
- cpm
cpm_serial_port_config:
- cpm
cpm_serial_port_info:
- cpm
cpm_user:
- cpm
data_pipeline:
- aws
dynamodb_table:
- aws
dynamodb_ttl:
- aws
ec2:
- aws
ec2_ami:
- aws
ec2_ami_copy:
- aws
ec2_ami_info:
- aws
ec2_asg:
- aws
ec2_asg_info:
- aws
ec2_asg_lifecycle_hook:
- aws
ec2_customer_gateway:
- aws
ec2_customer_gateway_info:
- aws
ec2_eip:
- aws
ec2_eip_info:
- aws
ec2_elb:
- aws
ec2_elb_info:
- aws
ec2_elb_lb:
- aws
ec2_eni:
- aws
ec2_eni_info:
- aws
ec2_group:
- aws
ec2_group_info:
- aws
ec2_instance:
- aws
ec2_instance_info:
- aws
ec2_key:
- aws
ec2_launch_template:
- aws
ec2_lc:
- aws
ec2_lc_info:
- aws
ec2_lc_find:
- aws
ec2_metric_alarm:
- aws
ec2_placement_group:
- aws
ec2_placement_group_info:
- aws
ec2_scaling_policy:
- aws
ec2_snapshot:
- aws
ec2_snapshot_copy:
- aws
ec2_snapshot_info:
- aws
ec2_tag:
- aws
ec2_vol:
- aws
ec2_vol_info:
- aws
ec2_vpc_dhcp_option:
- aws
ec2_vpc_dhcp_option_info:
- aws
ec2_vpc_egress_igw:
- aws
ec2_vpc_endpoint:
- aws
ec2_vpc_endpoint_info:
- aws
ec2_vpc_igw:
- aws
ec2_vpc_igw_info:
- aws
ec2_vpc_nacl:
- aws
ec2_vpc_nacl_info:
- aws
ec2_vpc_nat_gateway:
- aws
ec2_vpc_nat_gateway_info:
- aws
ec2_vpc_net:
- aws
ec2_vpc_net_info:
- aws
ec2_vpc_peer:
- aws
ec2_vpc_peering_info:
- aws
ec2_vpc_route_table:
- aws
ec2_vpc_route_table_info:
- aws
ec2_vpc_subnet:
- aws
ec2_vpc_subnet_info:
- aws
ec2_vpc_vgw:
- aws
ec2_vpc_vgw_info:
- aws
ec2_vpc_vpn:
- aws
ec2_vpc_vpn_info:
- aws
ec2_win_password:
- aws
ecs_attribute:
- aws
ecs_cluster:
- aws
ecs_ecr:
- aws
ecs_service:
- aws
ecs_service_facts:
- aws
ecs_task:
- aws
ecs_taskdefinition:
- aws
ecs_taskdefinition_info:
- aws
efs:
- aws
efs_facts:
- aws
elasticache:
- aws
elasticache_info:
- aws
elasticache_parameter_group:
- aws
elasticache_snapshot:
- aws
elasticache_subnet_group:
- aws
elb_application_lb:
- aws
elb_application_lb_info:
- aws
elb_classic_lb:
- aws
elb_classic_lb_info:
- aws
elb_instance:
- aws
elb_network_lb:
- aws
elb_target:
- aws
elb_target_group:
- aws
elb_target_group_info:
- aws
execute_lambda:
- aws
iam:
- aws
iam_cert:
- aws
iam_group:
- aws
iam_managed_policy:
- aws
iam_mfa_device_info:
- aws
iam_policy:
- aws
iam_role:
- aws
iam_role_info:
- aws
iam_server_certificate_info:
- aws
iam_user:
- aws
kinesis_stream:
- aws
lambda:
- aws
lambda_alias:
- aws
lambda_event:
- aws
lambda_facts:
- aws
lambda_policy:
- aws
lightsail:
- aws
rds:
- aws
rds_instance:
- aws
rds_instance_info:
- aws
rds_param_group:
- aws
rds_snapshot_info:
- aws
rds_subnet_group:
- aws
redshift:
- aws
redshift_info:
- aws
redshift_subnet_group:
- aws
route53:
- aws
route53_info:
- aws
route53_health_check:
- aws
route53_zone:
- aws
s3_bucket:
- aws
s3_lifecycle:
- aws
s3_logging:
- aws
s3_sync:
- aws
s3_website:
- aws
sns:
- aws
sns_topic:
- aws
sqs_queue:
- aws
sts_assume_role:
- aws
sts_session_token:
- aws
gcp_compute_address:
- gcp
gcp_compute_address_facts:
- gcp
gcp_compute_backend_bucket:
- gcp
gcp_compute_backend_bucket_facts:
- gcp
gcp_compute_backend_service:
- gcp
gcp_compute_backend_service_facts:
- gcp
gcp_compute_disk:
- gcp
gcp_compute_disk_facts:
- gcp
gcp_compute_firewall:
- gcp
gcp_compute_firewall_facts:
- gcp
gcp_compute_forwarding_rule:
- gcp
gcp_compute_forwarding_rule_facts:
- gcp
gcp_compute_global_address:
- gcp
gcp_compute_global_address_facts:
- gcp
gcp_compute_global_forwarding_rule:
- gcp
gcp_compute_global_forwarding_rule_facts:
- gcp
gcp_compute_health_check:
- gcp
gcp_compute_health_check_facts:
- gcp
gcp_compute_http_health_check:
- gcp
gcp_compute_http_health_check_facts:
- gcp
gcp_compute_https_health_check:
- gcp
gcp_compute_https_health_check_facts:
- gcp
gcp_compute_image:
- gcp
gcp_compute_image_facts:
- gcp
gcp_compute_instance:
- gcp
gcp_compute_instance_facts:
- gcp
gcp_compute_instance_group:
- gcp
gcp_compute_instance_group_facts:
- gcp
gcp_compute_instance_group_manager:
- gcp
gcp_compute_instance_group_manager_facts:
- gcp
gcp_compute_instance_template:
- gcp
gcp_compute_instance_template_facts:
- gcp
gcp_compute_network:
- gcp
gcp_compute_network_facts:
- gcp
gcp_compute_route:
- gcp
gcp_compute_route_facts:
- gcp
gcp_compute_router_facts:
- gcp
gcp_compute_ssl_certificate:
- gcp
gcp_compute_ssl_certificate_facts:
- gcp
gcp_compute_ssl_policy:
- gcp
gcp_compute_ssl_policy_facts:
- gcp
gcp_compute_subnetwork:
- gcp
gcp_compute_subnetwork_facts:
- gcp
gcp_compute_target_http_proxy:
- gcp
gcp_compute_target_http_proxy_facts:
- gcp
gcp_compute_target_https_proxy:
- gcp
gcp_compute_target_https_proxy_facts:
- gcp
gcp_compute_target_pool:
- gcp
gcp_compute_target_pool_facts:
- gcp
gcp_compute_target_ssl_proxy:
- gcp
gcp_compute_target_ssl_proxy_facts:
- gcp
gcp_compute_target_tcp_proxy:
- gcp
gcp_compute_target_tcp_proxy_facts:
- gcp
gcp_compute_target_vpn_gateway:
- gcp
gcp_compute_target_vpn_gateway_facts:
- gcp
gcp_compute_url_map:
- gcp
gcp_compute_url_map_facts:
- gcp
gcp_compute_vpn_tunnel:
- gcp
gcp_compute_vpn_tunnel_facts:
- gcp
gcp_container_cluster:
- gcp
gcp_container_node_pool:
- gcp
gcp_dns_managed_zone:
- gcp
gcp_dns_resource_record_set:
- gcp
gcp_pubsub_subscription:
- gcp
gcp_pubsub_topic:
- gcp
gcp_storage_bucket:
- gcp
gcp_storage_bucket_access_control:
- gcp
azure_rm_acs:
- azure
azure_rm_aks:
- azure
azure_rm_aks_facts:
- azure
azure_rm_appserviceplan:
- azure
azure_rm_appserviceplan_facts:
- azure
azure_rm_availabilityset:
- azure
azure_rm_availabilityset_facts:
- azure
azure_rm_containerinstance:
- azure
azure_rm_containerregistry:
- azure
azure_rm_deployment:
- azure
azure_rm_dnsrecordset:
- azure
azure_rm_dnsrecordset_facts:
- azure
azure_rm_dnszone:
- azure
azure_rm_dnszone_facts:
- azure
azure_rm_functionapp:
- azure
azure_rm_functionapp_facts:
- azure
azure_rm_image:
- azure
azure_rm_keyvault:
- azure
azure_rm_keyvaultkey:
- azure
azure_rm_keyvaultsecret:
- azure
azure_rm_loadbalancer:
- azure
azure_rm_loadbalancer_facts:
- azure
azure_rm_manageddisk:
- azure
azure_rm_manageddisk_facts:
- azure
azure_rm_mysqldatabase:
- azure
azure_rm_mysqldatabase_facts:
- azure
azure_rm_mysqlserver:
- azure
azure_rm_mysqlserver_facts:
- azure
azure_rm_networkinterface:
- azure
azure_rm_networkinterface_facts:
- azure
azure_rm_postgresqldatabase:
- azure
azure_rm_postgresqldatabase_facts:
- azure
azure_rm_postgresqlserver:
- azure
azure_rm_publicipaddress:
- azure
azure_rm_publicipaddress_facts:
- azure
azure_rm_resource:
- azure
azure_rm_resource_facts:
- azure
azure_rm_resourcegroup:
- azure
azure_rm_resourcegroup_info:
- azure
azure_rm_securitygroup:
- azure
azure_rm_securitygroup_facts:
- azure
azure_rm_sqldatabase:
- azure
azure_rm_sqlserver:
- azure
azure_rm_sqlserver_facts:
- azure
azure_rm_storageaccount:
- azure
azure_rm_storageaccount_facts:
- azure
azure_rm_storageblob:
- azure
azure_rm_subnet:
- azure
azure_rm_virtualmachine:
- azure
azure_rm_virtualmachine_extension:
- azure
azure_rm_virtualmachine_facts:
- azure
azure_rm_virtualmachineimage_facts:
- azure
azure_rm_virtualmachine_scaleset:
- azure
azure_rm_virtualmachine_scaleset_facts:
- azure
azure_rm_virtualnetwork:
- azure
azure_rm_virtualnetwork_facts:
- azure
azure_rm_webapp:
- azure
k8s:
- k8s
k8s_auth:
- k8s
k8s_facts:
- k8s
k8s_info:
- k8s
k8s_service:
- k8s
k8s_scale:
- k8s
kubevirt_cdi_upload:
- k8s
kubevirt_preset:
- k8s
kubevirt_pvc:
- k8s
kubevirt_rs:
- k8s
kubevirt_template:
- k8s
kubevirt_vm:
- k8s
os_auth:
- os
os_client_config:
- os
os_coe_cluster:
- os
os_coe_cluster_template:
- os
os_flavor_facts:
- os
os_floating_ip:
- os
os_group:
- os
os_image:
- os
os_image_facts:
- os
os_ironic:
- os
os_ironic_inspect:
- os
os_ironic_node:
- os
os_keypair:
- os
os_keystone_domain:
- os
os_keystone_domain_facts:
- os
os_keystone_endpoint:
- os
os_keystone_role:
- os
os_keystone_service:
- os
os_listener:
- os
os_loadbalancer:
- os
os_member:
- os
os_network:
- os
os_networks_facts:
- os
os_nova_flavor:
- os
os_nova_host_aggregate:
- os
os_object:
- os
os_pool:
- os
os_port:
- os
os_port_facts:
- os
os_project:
- os
os_project_access:
- os
os_project_facts:
- os
os_quota:
- os
os_recordset:
- os
os_router:
- os
os_security_group:
- os
os_security_group_rule:
- os
os_server:
- os
os_server_action:
- os
os_server_facts:
- os
os_server_group:
- os
os_server_metadata:
- os
os_server_volume:
- os
os_stack:
- os
os_subnet:
- os
os_subnets_facts:
- os
os_user:
- os
os_user_facts:
- os
os_user_group:
- os
os_user_role:
- os
os_volume:
- os
os_volume_snapshot:
- os
os_zone:
- os
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,775 |
include description for COLLECTIONS_PATHS
|
https://docs.ansible.com/ansible/devel/reference_appendices/config.html#collections-paths
collections_paths needs a description
|
https://github.com/ansible/ansible/issues/59775
|
https://github.com/ansible/ansible/pull/59778
|
c1e5758c4cb563cf727681ce798a682cc5f8acaf
|
3eeaf2f9746fd6709bd565d88d0680019f261de3
| 2019-07-30T12:25:33Z |
python
| 2019-07-31T15:57:44Z |
lib/ansible/config/base.yml
|
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
---
ALLOW_WORLD_READABLE_TMPFILES:
name: Allow world readable temporary files
default: False
description:
- This makes the temporary files created on the machine world readable, and will issue a warning instead of failing the task.
- It is useful when becoming an unprivileged user.
env: []
ini:
- {key: allow_world_readable_tmpfiles, section: defaults}
type: boolean
yaml: {key: defaults.allow_world_readable_tmpfiles}
version_added: "2.1"
ANSIBLE_CONNECTION_PATH:
name: Path of ansible-connection script
default: null
description:
- Specify where to look for the ansible-connection script. This location will be checked before searching $PATH.
- If null, ansible will start with the same directory as the ansible script.
type: path
env: [{name: ANSIBLE_CONNECTION_PATH}]
ini:
- {key: ansible_connection_path, section: persistent_connection}
yaml: {key: persistent_connection.ansible_connection_path}
version_added: "2.8"
ANSIBLE_COW_SELECTION:
name: Cowsay filter selection
default: default
description: This allows you to choose a specific cowsay stencil for the banners or use 'random' to cycle through them.
env: [{name: ANSIBLE_COW_SELECTION}]
ini:
- {key: cow_selection, section: defaults}
ANSIBLE_COW_WHITELIST:
name: Cowsay filter whitelist
default: ['bud-frogs', 'bunny', 'cheese', 'daemon', 'default', 'dragon', 'elephant-in-snake', 'elephant', 'eyes', 'hellokitty', 'kitty', 'luke-koala', 'meow', 'milk', 'moofasa', 'moose', 'ren', 'sheep', 'small', 'stegosaurus', 'stimpy', 'supermilker', 'three-eyes', 'turkey', 'turtle', 'tux', 'udder', 'vader-koala', 'vader', 'www']
description: Whitelist of cowsay templates that are 'safe' to use; set to an empty list if you want to enable all installed templates.
env: [{name: ANSIBLE_COW_WHITELIST}]
ini:
- {key: cow_whitelist, section: defaults}
type: list
yaml: {key: display.cowsay_whitelist}
ANSIBLE_FORCE_COLOR:
name: Force color output
default: False
description: This option forces color mode even when running without a TTY or the "nocolor" setting is True.
env: [{name: ANSIBLE_FORCE_COLOR}]
ini:
- {key: force_color, section: defaults}
type: boolean
yaml: {key: display.force_color}
ANSIBLE_NOCOLOR:
name: Suppress color output
default: False
description: This setting allows suppressing colorizing output, which is used to give a better indication of failure and status information.
env: [{name: ANSIBLE_NOCOLOR}]
ini:
- {key: nocolor, section: defaults}
type: boolean
yaml: {key: display.nocolor}
ANSIBLE_NOCOWS:
name: Suppress cowsay output
default: False
description: If you have cowsay installed but want to avoid the 'cows' (why????), use this.
env: [{name: ANSIBLE_NOCOWS}]
ini:
- {key: nocows, section: defaults}
type: boolean
yaml: {key: display.i_am_no_fun}
ANSIBLE_COW_PATH:
name: Set path to cowsay command
default: null
description: Specify a custom cowsay path or swap in your cowsay implementation of choice
env: [{name: ANSIBLE_COW_PATH}]
ini:
- {key: cowpath, section: defaults}
type: string
yaml: {key: display.cowpath}
ANSIBLE_PIPELINING:
name: Connection pipelining
default: False
description:
- Pipelining, if supported by the connection plugin, reduces the number of network operations required to execute a module on the remote server,
by executing many Ansible modules without actual file transfer.
- This can result in a very significant performance improvement when enabled.
- "However this conflicts with privilege escalation (become). For example, when using 'sudo:' operations you must first
disable 'requiretty' in /etc/sudoers on all managed hosts, which is why it is disabled by default."
- This option is disabled if ``ANSIBLE_KEEP_REMOTE_FILES`` is enabled.
env:
- name: ANSIBLE_PIPELINING
- name: ANSIBLE_SSH_PIPELINING
ini:
- section: connection
key: pipelining
- section: ssh_connection
key: pipelining
type: boolean
yaml: {key: plugins.connection.pipelining}
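The ini keys above map to an ansible.cfg fragment like the following. This is a minimal sketch; per the caveat in the description, it assumes 'requiretty' is disabled in /etc/sudoers on the managed hosts when privilege escalation is in use:

```ini
# ansible.cfg -- enable connection pipelining for ssh connections
[ssh_connection]
pipelining = True
```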
ANSIBLE_SSH_ARGS:
# TODO: move to ssh plugin
default: -C -o ControlMaster=auto -o ControlPersist=60s
description:
- If set, this will override the Ansible default ssh arguments.
- In particular, users may wish to raise the ControlPersist time to encourage performance. A value of 30 minutes may be appropriate.
- Be aware that if `-o ControlPath` is set in ssh_args, the control path setting is not used.
env: [{name: ANSIBLE_SSH_ARGS}]
ini:
- {key: ssh_args, section: ssh_connection}
yaml: {key: ssh_connection.ssh_args}
ANSIBLE_SSH_CONTROL_PATH:
# TODO: move to ssh plugin
default: null
description:
- This is the location to save ssh's ControlPath sockets, it uses ssh's variable substitution.
- Since 2.3, if null, ansible will generate a unique hash. Use `%(directory)s` to indicate where to use the control dir path setting.
- Before 2.3 it defaulted to `control_path=%(directory)s/ansible-ssh-%%h-%%p-%%r`.
- Be aware that this setting is ignored if `-o ControlPath` is set in ssh args.
env: [{name: ANSIBLE_SSH_CONTROL_PATH}]
ini:
- {key: control_path, section: ssh_connection}
yaml: {key: ssh_connection.control_path}
ANSIBLE_SSH_CONTROL_PATH_DIR:
# TODO: move to ssh plugin
default: ~/.ansible/cp
description:
- This sets the directory to use for ssh control path if the control path setting is null.
- Also, provides the `%(directory)s` variable for the control path setting.
env: [{name: ANSIBLE_SSH_CONTROL_PATH_DIR}]
ini:
- {key: control_path_dir, section: ssh_connection}
yaml: {key: ssh_connection.control_path_dir}
ANSIBLE_SSH_EXECUTABLE:
# TODO: move to ssh plugin
default: ssh
description:
- This defines the location of the ssh binary. It defaults to `ssh` which will use the first ssh binary available in $PATH.
      - This option is usually not required; it might be useful when access to system ssh is restricted,
        or when using ssh wrappers to connect to remote hosts.
env: [{name: ANSIBLE_SSH_EXECUTABLE}]
ini:
- {key: ssh_executable, section: ssh_connection}
yaml: {key: ssh_connection.ssh_executable}
version_added: "2.2"
ANSIBLE_SSH_RETRIES:
# TODO: move to ssh plugin
default: 0
description: Number of attempts to establish a connection before we give up and report the host as 'UNREACHABLE'
env: [{name: ANSIBLE_SSH_RETRIES}]
ini:
- {key: retries, section: ssh_connection}
type: integer
yaml: {key: ssh_connection.retries}
ANY_ERRORS_FATAL:
name: Make Task failures fatal
default: False
description: Sets the default value for the any_errors_fatal keyword, if True, Task failures will be considered fatal errors.
env:
- name: ANSIBLE_ANY_ERRORS_FATAL
ini:
- section: defaults
key: any_errors_fatal
type: boolean
yaml: {key: errors.any_task_errors_fatal}
version_added: "2.4"
BECOME_ALLOW_SAME_USER:
name: Allow becoming the same user
default: False
    description: This setting controls if become is skipped when remote user and become user are the same, i.e. root sudo to root.
env: [{name: ANSIBLE_BECOME_ALLOW_SAME_USER}]
ini:
- {key: become_allow_same_user, section: privilege_escalation}
type: boolean
yaml: {key: privilege_escalation.become_allow_same_user}
AGNOSTIC_BECOME_PROMPT:
name: Display an agnostic become prompt
default: True
type: boolean
description: Display an agnostic become prompt instead of displaying a prompt containing the command line supplied become method
env: [{name: ANSIBLE_AGNOSTIC_BECOME_PROMPT}]
ini:
- {key: agnostic_become_prompt, section: privilege_escalation}
yaml: {key: privilege_escalation.agnostic_become_prompt}
version_added: "2.5"
CACHE_PLUGIN:
name: Persistent Cache plugin
default: memory
    description: Chooses which cache plugin to use; the default 'memory' is ephemeral.
env: [{name: ANSIBLE_CACHE_PLUGIN}]
ini:
- {key: fact_caching, section: defaults}
yaml: {key: facts.cache.plugin}
CACHE_PLUGIN_CONNECTION:
name: Cache Plugin URI
default: ~
description: Defines connection or path information for the cache plugin
env: [{name: ANSIBLE_CACHE_PLUGIN_CONNECTION}]
ini:
- {key: fact_caching_connection, section: defaults}
yaml: {key: facts.cache.uri}
CACHE_PLUGIN_PREFIX:
name: Cache Plugin table prefix
default: ansible_facts
description: Prefix to use for cache plugin files/tables
env: [{name: ANSIBLE_CACHE_PLUGIN_PREFIX}]
ini:
- {key: fact_caching_prefix, section: defaults}
yaml: {key: facts.cache.prefix}
CACHE_PLUGIN_TIMEOUT:
name: Cache Plugin expiration timeout
default: 86400
description: Expiration timeout for the cache plugin data
env: [{name: ANSIBLE_CACHE_PLUGIN_TIMEOUT}]
ini:
- {key: fact_caching_timeout, section: defaults}
type: integer
yaml: {key: facts.cache.timeout}
COLLECTIONS_PATHS:
name: ordered list of root paths for loading installed Ansible collections content
default: ~/.ansible/collections:/usr/share/ansible/collections
type: pathspec
env:
- {name: ANSIBLE_COLLECTIONS_PATHS}
ini:
- {key: collections_paths, section: defaults}
COLOR_CHANGED:
name: Color for 'changed' task status
default: yellow
description: Defines the color to use on 'Changed' task status
env: [{name: ANSIBLE_COLOR_CHANGED}]
ini:
- {key: changed, section: colors}
yaml: {key: display.colors.changed}
COLOR_CONSOLE_PROMPT:
name: "Color for ansible-console's prompt task status"
default: white
description: Defines the default color to use for ansible-console
env: [{name: ANSIBLE_COLOR_CONSOLE_PROMPT}]
ini:
- {key: console_prompt, section: colors}
version_added: "2.7"
COLOR_DEBUG:
name: Color for debug statements
default: dark gray
description: Defines the color to use when emitting debug messages
env: [{name: ANSIBLE_COLOR_DEBUG}]
ini:
- {key: debug, section: colors}
yaml: {key: display.colors.debug}
COLOR_DEPRECATE:
name: Color for deprecation messages
default: purple
description: Defines the color to use when emitting deprecation messages
env: [{name: ANSIBLE_COLOR_DEPRECATE}]
ini:
- {key: deprecate, section: colors}
yaml: {key: display.colors.deprecate}
COLOR_DIFF_ADD:
name: Color for diff added display
default: green
description: Defines the color to use when showing added lines in diffs
env: [{name: ANSIBLE_COLOR_DIFF_ADD}]
ini:
- {key: diff_add, section: colors}
yaml: {key: display.colors.diff.add}
COLOR_DIFF_LINES:
name: Color for diff lines display
default: cyan
description: Defines the color to use when showing diffs
env: [{name: ANSIBLE_COLOR_DIFF_LINES}]
ini:
- {key: diff_lines, section: colors}
COLOR_DIFF_REMOVE:
name: Color for diff removed display
default: red
description: Defines the color to use when showing removed lines in diffs
env: [{name: ANSIBLE_COLOR_DIFF_REMOVE}]
ini:
- {key: diff_remove, section: colors}
COLOR_ERROR:
name: Color for error messages
default: red
description: Defines the color to use when emitting error messages
env: [{name: ANSIBLE_COLOR_ERROR}]
ini:
- {key: error, section: colors}
yaml: {key: colors.error}
COLOR_HIGHLIGHT:
name: Color for highlighting
default: white
description: Defines the color to use for highlighting
env: [{name: ANSIBLE_COLOR_HIGHLIGHT}]
ini:
- {key: highlight, section: colors}
COLOR_OK:
name: Color for 'ok' task status
default: green
description: Defines the color to use when showing 'OK' task status
env: [{name: ANSIBLE_COLOR_OK}]
ini:
- {key: ok, section: colors}
COLOR_SKIP:
name: Color for 'skip' task status
default: cyan
description: Defines the color to use when showing 'Skipped' task status
env: [{name: ANSIBLE_COLOR_SKIP}]
ini:
- {key: skip, section: colors}
COLOR_UNREACHABLE:
name: Color for 'unreachable' host state
default: bright red
description: Defines the color to use on 'Unreachable' status
env: [{name: ANSIBLE_COLOR_UNREACHABLE}]
ini:
- {key: unreachable, section: colors}
COLOR_VERBOSE:
name: Color for verbose messages
default: blue
    description: Defines the color to use when emitting verbose messages, i.e. those that show with '-v's.
env: [{name: ANSIBLE_COLOR_VERBOSE}]
ini:
- {key: verbose, section: colors}
COLOR_WARN:
name: Color for warning messages
default: bright purple
description: Defines the color to use when emitting warning messages
env: [{name: ANSIBLE_COLOR_WARN}]
ini:
- {key: warn, section: colors}
CONDITIONAL_BARE_VARS:
name: Allow bare variable evaluation in conditionals
default: True
type: boolean
description:
      - With this setting on (True), in conditional evaluation 'var' is treated differently than 'var.subkey', as the first is evaluated
        directly while the second goes through the Jinja2 parser. But 'false' strings in 'var' get evaluated as booleans.
      - With this setting off they both evaluate the same, but in cases in which 'var' was 'false' (a string) it won't get evaluated as a boolean anymore.
- Currently this setting defaults to 'True' but will soon change to 'False' and the setting itself will be removed in the future.
- Expect the default to change in version 2.10 and that this setting eventually will be deprecated after 2.12
env: [{name: ANSIBLE_CONDITIONAL_BARE_VARS}]
ini:
- {key: conditional_bare_variables, section: defaults}
version_added: "2.8"
ACTION_WARNINGS:
name: Toggle action warnings
default: True
description:
      - By default Ansible will issue a warning when a warning is received from a task action (module or action plugin).
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_ACTION_WARNINGS}]
ini:
- {key: action_warnings, section: defaults}
type: boolean
version_added: "2.5"
COMMAND_WARNINGS:
name: Command module warnings
default: True
description:
- By default Ansible will issue a warning when the shell or command module is used and the command appears to be similar to an existing Ansible module.
- These warnings can be silenced by adjusting this setting to False. You can also control this at the task level with the module option ``warn``.
env: [{name: ANSIBLE_COMMAND_WARNINGS}]
ini:
- {key: command_warnings, section: defaults}
type: boolean
version_added: "1.8"
LOCALHOST_WARNING:
name: Warning when using implicit inventory with only localhost
default: True
description:
- By default Ansible will issue a warning when there are no hosts in the
inventory.
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_LOCALHOST_WARNING}]
ini:
- {key: localhost_warning, section: defaults}
type: boolean
version_added: "2.6"
DOC_FRAGMENT_PLUGIN_PATH:
name: documentation fragment plugins path
default: ~/.ansible/plugins/doc_fragments:/usr/share/ansible/plugins/doc_fragments
description: Colon separated paths in which Ansible will search for Documentation Fragments Plugins.
env: [{name: ANSIBLE_DOC_FRAGMENT_PLUGINS}]
ini:
- {key: doc_fragment_plugins, section: defaults}
type: pathspec
DEFAULT_ACTION_PLUGIN_PATH:
name: Action plugins path
default: ~/.ansible/plugins/action:/usr/share/ansible/plugins/action
description: Colon separated paths in which Ansible will search for Action Plugins.
env: [{name: ANSIBLE_ACTION_PLUGINS}]
ini:
- {key: action_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.action.path}
DEFAULT_ALLOW_UNSAFE_LOOKUPS:
name: Allow unsafe lookups
default: False
description:
- "When enabled, this option allows lookup plugins (whether used in variables as ``{{lookup('foo')}}`` or as a loop as with_foo)
to return data that is not marked 'unsafe'."
- By default, such data is marked as unsafe to prevent the templating engine from evaluating any jinja2 templating language,
        as this could represent a security risk. This option is provided to allow for backwards-compatibility;
        however, users should first consider adding allow_unsafe=True to any lookups which may be expected to contain data which may be run
        through the templating engine later.
env: []
ini:
- {key: allow_unsafe_lookups, section: defaults}
type: boolean
version_added: "2.2.3"
DEFAULT_ASK_PASS:
name: Ask for the login password
default: False
description:
- This controls whether an Ansible playbook should prompt for a login password.
        If using SSH keys for authentication, you probably do not need to change this setting.
env: [{name: ANSIBLE_ASK_PASS}]
ini:
- {key: ask_pass, section: defaults}
type: boolean
yaml: {key: defaults.ask_pass}
DEFAULT_ASK_VAULT_PASS:
name: Ask for the vault password(s)
default: False
description:
- This controls whether an Ansible playbook should prompt for a vault password.
env: [{name: ANSIBLE_ASK_VAULT_PASS}]
ini:
- {key: ask_vault_pass, section: defaults}
type: boolean
DEFAULT_BECOME:
name: Enable privilege escalation (become)
default: False
description: Toggles the use of privilege escalation, allowing you to 'become' another user after login.
env: [{name: ANSIBLE_BECOME}]
ini:
- {key: become, section: privilege_escalation}
type: boolean
DEFAULT_BECOME_ASK_PASS:
name: Ask for the privilege escalation (become) password
default: False
description: Toggle to prompt for privilege escalation password.
env: [{name: ANSIBLE_BECOME_ASK_PASS}]
ini:
- {key: become_ask_pass, section: privilege_escalation}
type: boolean
DEFAULT_BECOME_METHOD:
name: Choose privilege escalation method
default: 'sudo'
description: Privilege escalation method to use when `become` is enabled.
env: [{name: ANSIBLE_BECOME_METHOD}]
ini:
- {section: privilege_escalation, key: become_method}
DEFAULT_BECOME_EXE:
name: Choose 'become' executable
default: ~
    description: 'Executable to use for privilege escalation; otherwise Ansible will depend on PATH.'
env: [{name: ANSIBLE_BECOME_EXE}]
ini:
- {key: become_exe, section: privilege_escalation}
DEFAULT_BECOME_FLAGS:
name: Set 'become' executable options
default: ''
description: Flags to pass to the privilege escalation executable.
env: [{name: ANSIBLE_BECOME_FLAGS}]
ini:
- {key: become_flags, section: privilege_escalation}
BECOME_PLUGIN_PATH:
name: Become plugins path
default: ~/.ansible/plugins/become:/usr/share/ansible/plugins/become
description: Colon separated paths in which Ansible will search for Become Plugins.
env: [{name: ANSIBLE_BECOME_PLUGINS}]
ini:
- {key: become_plugins, section: defaults}
type: pathspec
version_added: "2.8"
DEFAULT_BECOME_USER:
# FIXME: should really be blank and make -u passing optional depending on it
name: Set the user you 'become' via privilege escalation
default: root
description: The user your login/remote user 'becomes' when using privilege escalation, most systems will use 'root' when no user is specified.
env: [{name: ANSIBLE_BECOME_USER}]
ini:
- {key: become_user, section: privilege_escalation}
yaml: {key: become.user}
DEFAULT_CACHE_PLUGIN_PATH:
name: Cache Plugins Path
default: ~/.ansible/plugins/cache:/usr/share/ansible/plugins/cache
description: Colon separated paths in which Ansible will search for Cache Plugins.
env: [{name: ANSIBLE_CACHE_PLUGINS}]
ini:
- {key: cache_plugins, section: defaults}
type: pathspec
DEFAULT_CALLABLE_WHITELIST:
name: Template 'callable' whitelist
default: []
description: Whitelist of callable methods to be made available to template evaluation
env: [{name: ANSIBLE_CALLABLE_WHITELIST}]
ini:
- {key: callable_whitelist, section: defaults}
type: list
DEFAULT_CALLBACK_PLUGIN_PATH:
name: Callback Plugins Path
default: ~/.ansible/plugins/callback:/usr/share/ansible/plugins/callback
description: Colon separated paths in which Ansible will search for Callback Plugins.
env: [{name: ANSIBLE_CALLBACK_PLUGINS}]
ini:
- {key: callback_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.callback.path}
DEFAULT_CALLBACK_WHITELIST:
name: Callback Whitelist
default: []
description:
- "List of whitelisted callbacks, not all callbacks need whitelisting,
but many of those shipped with Ansible do as we don't want them activated by default."
env: [{name: ANSIBLE_CALLBACK_WHITELIST}]
ini:
- {key: callback_whitelist, section: defaults}
type: list
yaml: {key: plugins.callback.whitelist}
DEFAULT_CLICONF_PLUGIN_PATH:
name: Cliconf Plugins Path
default: ~/.ansible/plugins/cliconf:/usr/share/ansible/plugins/cliconf
description: Colon separated paths in which Ansible will search for Cliconf Plugins.
env: [{name: ANSIBLE_CLICONF_PLUGINS}]
ini:
- {key: cliconf_plugins, section: defaults}
type: pathspec
DEFAULT_CONNECTION_PLUGIN_PATH:
name: Connection Plugins Path
default: ~/.ansible/plugins/connection:/usr/share/ansible/plugins/connection
description: Colon separated paths in which Ansible will search for Connection Plugins.
env: [{name: ANSIBLE_CONNECTION_PLUGINS}]
ini:
- {key: connection_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.connection.path}
DEFAULT_DEBUG:
name: Debug mode
default: False
description:
- "Toggles debug output in Ansible. This is *very* verbose and can hinder
multiprocessing. Debug output can also include secret information
despite no_log settings being enabled, which means debug mode should not be used in
production."
env: [{name: ANSIBLE_DEBUG}]
ini:
- {key: debug, section: defaults}
type: boolean
DEFAULT_EXECUTABLE:
name: Target shell executable
default: /bin/sh
description:
      - "This indicates the command used to spawn a shell under which Ansible's tasks execute on a target.
Users may need to change this in rare instances when shell usage is constrained, but in most cases it may be left as is."
env: [{name: ANSIBLE_EXECUTABLE}]
ini:
- {key: executable, section: defaults}
DEFAULT_FACT_PATH:
name: local fact path
default: ~
description:
- "This option allows you to globally configure a custom path for 'local_facts' for the implied M(setup) task when using fact gathering."
- "If not set, it will fallback to the default from the M(setup) module: ``/etc/ansible/facts.d``."
- "This does **not** affect user defined tasks that use the M(setup) module."
env: [{name: ANSIBLE_FACT_PATH}]
ini:
- {key: fact_path, section: defaults}
type: path
yaml: {key: facts.gathering.fact_path}
DEFAULT_FILTER_PLUGIN_PATH:
name: Jinja2 Filter Plugins Path
default: ~/.ansible/plugins/filter:/usr/share/ansible/plugins/filter
description: Colon separated paths in which Ansible will search for Jinja2 Filter Plugins.
env: [{name: ANSIBLE_FILTER_PLUGINS}]
ini:
- {key: filter_plugins, section: defaults}
type: pathspec
DEFAULT_FORCE_HANDLERS:
name: Force handlers to run after failure
default: False
description:
- This option controls if notified handlers run on a host even if a failure occurs on that host.
- When false, the handlers will not run if a failure has occurred on a host.
- This can also be set per play or on the command line. See Handlers and Failure for more details.
env: [{name: ANSIBLE_FORCE_HANDLERS}]
ini:
- {key: force_handlers, section: defaults}
type: boolean
version_added: "1.9.1"
DEFAULT_FORKS:
name: Number of task forks
default: 5
description: Maximum number of forks Ansible will use to execute tasks on target hosts.
env: [{name: ANSIBLE_FORKS}]
ini:
- {key: forks, section: defaults}
type: integer
DEFAULT_GATHERING:
name: Gathering behaviour
default: 'implicit'
description:
- This setting controls the default policy of fact gathering (facts discovered about remote systems).
- "When 'implicit' (the default), the cache plugin will be ignored and facts will be gathered per play unless 'gather_facts: False' is set."
- "When 'explicit' the inverse is true, facts will not be gathered unless directly requested in the play."
- "The 'smart' value means each new host that has no facts discovered will be scanned,
but if the same host is addressed in multiple plays it will not be contacted again in the playbook run."
- "This option can be useful for those wishing to save fact gathering time. Both 'smart' and 'explicit' will use the cache plugin."
env: [{name: ANSIBLE_GATHERING}]
ini:
- key: gathering
section: defaults
version_added: "1.6"
choices: ['smart', 'explicit', 'implicit']
DEFAULT_GATHER_SUBSET:
name: Gather facts subset
default: ['all']
description:
- Set the `gather_subset` option for the M(setup) task in the implicit fact gathering.
See the module documentation for specifics.
- "It does **not** apply to user defined M(setup) tasks."
env: [{name: ANSIBLE_GATHER_SUBSET}]
ini:
- key: gather_subset
section: defaults
version_added: "2.1"
type: list
DEFAULT_GATHER_TIMEOUT:
name: Gather facts timeout
default: 10
description:
- Set the timeout in seconds for the implicit fact gathering.
- "It does **not** apply to user defined M(setup) tasks."
env: [{name: ANSIBLE_GATHER_TIMEOUT}]
ini:
- {key: gather_timeout, section: defaults}
type: integer
yaml: {key: defaults.gather_timeout}
DEFAULT_HANDLER_INCLUDES_STATIC:
name: Make handler M(include) static
default: False
description:
      - "Since 2.0 M(include) can be 'dynamic', this setting (if True) forces any include appearing in a ``handlers`` section to be 'static'."
env: [{name: ANSIBLE_HANDLER_INCLUDES_STATIC}]
ini:
- {key: handler_includes_static, section: defaults}
type: boolean
deprecated:
why: include itself is deprecated and this setting will not matter in the future
version: "2.12"
      alternatives: none, as it is already built into the decision between include_tasks and import_tasks
DEFAULT_HASH_BEHAVIOUR:
name: Hash merge behaviour
default: replace
type: string
choices: ["replace", "merge"]
description:
- This setting controls how variables merge in Ansible.
By default Ansible will override variables in specific precedence orders, as described in Variables.
When a variable of higher precedence wins, it will replace the other value.
- "Some users prefer that variables that are hashes (aka 'dictionaries' in Python terms) are merged.
This setting is called 'merge'. This is not the default behavior and it does not affect variables whose values are scalars
(integers, strings) or arrays. We generally recommend not using this setting unless you think you have an absolute need for it,
and playbooks in the official examples repos do not use this setting"
- In version 2.0 a ``combine`` filter was added to allow doing this for a particular variable (described in Filters).
env: [{name: ANSIBLE_HASH_BEHAVIOUR}]
ini:
- {key: hash_behaviour, section: defaults}
DEFAULT_HOST_LIST:
name: Inventory Source
default: /etc/ansible/hosts
description: Comma separated list of Ansible inventory sources
env:
- name: ANSIBLE_INVENTORY
expand_relative_paths: True
ini:
- key: inventory
section: defaults
type: pathlist
yaml: {key: defaults.inventory}
DEFAULT_HTTPAPI_PLUGIN_PATH:
name: HttpApi Plugins Path
default: ~/.ansible/plugins/httpapi:/usr/share/ansible/plugins/httpapi
description: Colon separated paths in which Ansible will search for HttpApi Plugins.
env: [{name: ANSIBLE_HTTPAPI_PLUGINS}]
ini:
- {key: httpapi_plugins, section: defaults}
type: pathspec
DEFAULT_INTERNAL_POLL_INTERVAL:
name: Internal poll interval
default: 0.001
env: []
ini:
- {key: internal_poll_interval, section: defaults}
type: float
version_added: "2.2"
description:
- This sets the interval (in seconds) of Ansible internal processes polling each other.
Lower values improve performance with large playbooks at the expense of extra CPU load.
Higher values are more suitable for Ansible usage in automation scenarios,
when UI responsiveness is not required but CPU usage might be a concern.
- "The default corresponds to the value hardcoded in Ansible <= 2.1"
DEFAULT_INVENTORY_PLUGIN_PATH:
name: Inventory Plugins Path
default: ~/.ansible/plugins/inventory:/usr/share/ansible/plugins/inventory
description: Colon separated paths in which Ansible will search for Inventory Plugins.
env: [{name: ANSIBLE_INVENTORY_PLUGINS}]
ini:
- {key: inventory_plugins, section: defaults}
type: pathspec
DEFAULT_JINJA2_EXTENSIONS:
name: Enabled Jinja2 extensions
default: []
description:
- This is a developer-specific feature that allows enabling additional Jinja2 extensions.
- "See the Jinja2 documentation for details. If you do not know what these do, you probably don't need to change this setting :)"
env: [{name: ANSIBLE_JINJA2_EXTENSIONS}]
ini:
- {key: jinja2_extensions, section: defaults}
DEFAULT_JINJA2_NATIVE:
name: Use Jinja2's NativeEnvironment for templating
default: False
description: This option preserves variable types during template operations. This requires Jinja2 >= 2.10.
env: [{name: ANSIBLE_JINJA2_NATIVE}]
ini:
- {key: jinja2_native, section: defaults}
type: boolean
yaml: {key: jinja2_native}
version_added: 2.7
DEFAULT_KEEP_REMOTE_FILES:
name: Keep remote files
default: False
description:
      - Enables/disables the cleaning up of the temporary files Ansible uses to execute tasks on the remote.
- If this option is enabled it will disable ``ANSIBLE_PIPELINING``.
env: [{name: ANSIBLE_KEEP_REMOTE_FILES}]
ini:
- {key: keep_remote_files, section: defaults}
type: boolean
DEFAULT_LIBVIRT_LXC_NOSECLABEL:
# TODO: move to plugin
name: No security label on Lxc
default: False
description:
- "This setting causes libvirt to connect to lxc containers by passing --noseclabel to virsh.
This is necessary when running on systems which do not have SELinux."
env:
- name: LIBVIRT_LXC_NOSECLABEL
deprecated:
why: environment variables without "ANSIBLE_" prefix are deprecated
version: "2.12"
alternatives: the "ANSIBLE_LIBVIRT_LXC_NOSECLABEL" environment variable
- name: ANSIBLE_LIBVIRT_LXC_NOSECLABEL
ini:
- {key: libvirt_lxc_noseclabel, section: selinux}
type: boolean
version_added: "2.1"
DEFAULT_LOAD_CALLBACK_PLUGINS:
name: Load callbacks for adhoc
default: False
description:
- Controls whether callback plugins are loaded when running /usr/bin/ansible.
This may be used to log activity from the command line, send notifications, and so on.
Callback plugins are always loaded for ``ansible-playbook``.
env: [{name: ANSIBLE_LOAD_CALLBACK_PLUGINS}]
ini:
- {key: bin_ansible_callbacks, section: defaults}
type: boolean
version_added: "1.8"
DEFAULT_LOCAL_TMP:
name: Controller temporary directory
default: ~/.ansible/tmp
description: Temporary directory for Ansible to use on the controller.
env: [{name: ANSIBLE_LOCAL_TEMP}]
ini:
- {key: local_tmp, section: defaults}
type: tmppath
DEFAULT_LOG_PATH:
name: Ansible log file path
default: ~
    description: File to which Ansible will log on the controller. When empty, logging is disabled.
env: [{name: ANSIBLE_LOG_PATH}]
ini:
- {key: log_path, section: defaults}
type: path
DEFAULT_LOG_FILTER:
name: Name filters for python logger
default: []
description: List of logger names to filter out of the log file
env: [{name: ANSIBLE_LOG_FILTER}]
ini:
- {key: log_filter, section: defaults}
type: list
DEFAULT_LOOKUP_PLUGIN_PATH:
name: Lookup Plugins Path
description: Colon separated paths in which Ansible will search for Lookup Plugins.
default: ~/.ansible/plugins/lookup:/usr/share/ansible/plugins/lookup
env: [{name: ANSIBLE_LOOKUP_PLUGINS}]
ini:
- {key: lookup_plugins, section: defaults}
type: pathspec
yaml: {key: defaults.lookup_plugins}
DEFAULT_MANAGED_STR:
name: Ansible managed
default: 'Ansible managed'
description: Sets the macro for the 'ansible_managed' variable available for M(template) and M(win_template) modules. This is only relevant for those two modules.
env: []
ini:
- {key: ansible_managed, section: defaults}
yaml: {key: defaults.ansible_managed}
DEFAULT_MODULE_ARGS:
name: Adhoc default arguments
default: ''
description:
- This sets the default arguments to pass to the ``ansible`` adhoc binary if no ``-a`` is specified.
env: [{name: ANSIBLE_MODULE_ARGS}]
ini:
- {key: module_args, section: defaults}
DEFAULT_MODULE_COMPRESSION:
name: Python module compression
default: ZIP_DEFLATED
description: Compression scheme to use when transferring Python modules to the target.
env: []
ini:
- {key: module_compression, section: defaults}
# vars:
# - name: ansible_module_compression
DEFAULT_MODULE_NAME:
name: Default adhoc module
default: command
description: "Module to use with the ``ansible`` AdHoc command, if none is specified via ``-m``."
env: []
ini:
- {key: module_name, section: defaults}
DEFAULT_MODULE_PATH:
name: Modules Path
description: Colon separated paths in which Ansible will search for Modules.
default: ~/.ansible/plugins/modules:/usr/share/ansible/plugins/modules
env: [{name: ANSIBLE_LIBRARY}]
ini:
- {key: library, section: defaults}
type: pathspec
DEFAULT_MODULE_UTILS_PATH:
name: Module Utils Path
description: Colon separated paths in which Ansible will search for Module utils files, which are shared by modules.
default: ~/.ansible/plugins/module_utils:/usr/share/ansible/plugins/module_utils
env: [{name: ANSIBLE_MODULE_UTILS}]
ini:
- {key: module_utils, section: defaults}
type: pathspec
DEFAULT_NETCONF_PLUGIN_PATH:
name: Netconf Plugins Path
default: ~/.ansible/plugins/netconf:/usr/share/ansible/plugins/netconf
description: Colon separated paths in which Ansible will search for Netconf Plugins.
env: [{name: ANSIBLE_NETCONF_PLUGINS}]
ini:
- {key: netconf_plugins, section: defaults}
type: pathspec
DEFAULT_NO_LOG:
name: No log
default: False
description: "Toggle Ansible's display and logging of task details, mainly used to avoid security disclosures."
env: [{name: ANSIBLE_NO_LOG}]
ini:
- {key: no_log, section: defaults}
type: boolean
DEFAULT_NO_TARGET_SYSLOG:
name: No syslog on target
default: False
description: Toggle Ansible logging to syslog on the target when it executes tasks.
env: [{name: ANSIBLE_NO_TARGET_SYSLOG}]
ini:
- {key: no_target_syslog, section: defaults}
type: boolean
yaml: {key: defaults.no_target_syslog}
DEFAULT_NULL_REPRESENTATION:
name: Represent a null
default: ~
description: What templating should return as a 'null' value. When not set it will let Jinja2 decide.
env: [{name: ANSIBLE_NULL_REPRESENTATION}]
ini:
- {key: null_representation, section: defaults}
type: none
DEFAULT_POLL_INTERVAL:
name: Async poll interval
default: 15
description:
- For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling),
this is how often to check back on the status of those tasks when an explicit poll interval is not supplied.
The default is a reasonably moderate 15 seconds which is a tradeoff between checking in frequently and
providing a quick turnaround when something may have completed.
env: [{name: ANSIBLE_POLL_INTERVAL}]
ini:
- {key: poll_interval, section: defaults}
type: integer
DEFAULT_PRIVATE_KEY_FILE:
name: Private key file
default: ~
description:
      - For connections using a certificate or key file to authenticate, rather than an agent or passwords,
        you can set the default value here to avoid re-specifying --private-key with every invocation.
env: [{name: ANSIBLE_PRIVATE_KEY_FILE}]
ini:
- {key: private_key_file, section: defaults}
type: path
DEFAULT_PRIVATE_ROLE_VARS:
name: Private role variables
default: False
description:
- Makes role variables inaccessible from other roles.
- This was introduced as a way to reset role variables to default values if
a role is used more than once in a playbook.
env: [{name: ANSIBLE_PRIVATE_ROLE_VARS}]
ini:
- {key: private_role_vars, section: defaults}
type: boolean
yaml: {key: defaults.private_role_vars}
DEFAULT_REMOTE_PORT:
name: Remote port
default: ~
description: Port to use in remote connections, when blank it will use the connection plugin default.
env: [{name: ANSIBLE_REMOTE_PORT}]
ini:
- {key: remote_port, section: defaults}
type: integer
yaml: {key: defaults.remote_port}
DEFAULT_REMOTE_USER:
name: Login/Remote User
default:
description:
- Sets the login user for the target machines
- "When blank it uses the connection plugin's default, normally the user currently executing Ansible."
env: [{name: ANSIBLE_REMOTE_USER}]
ini:
- {key: remote_user, section: defaults}
DEFAULT_ROLES_PATH:
name: Roles path
default: ~/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles
description: Colon separated paths in which Ansible will search for Roles.
env: [{name: ANSIBLE_ROLES_PATH}]
expand_relative_paths: True
ini:
- {key: roles_path, section: defaults}
type: pathspec
yaml: {key: defaults.roles_path}
DEFAULT_SCP_IF_SSH:
# TODO: move to ssh plugin
default: smart
description:
- "Preferred method to use when transferring files over ssh."
- When set to smart, Ansible will try them until one succeeds or they all fail.
      - If set to True, it will force 'scp'; if False it will use 'sftp'.
env: [{name: ANSIBLE_SCP_IF_SSH}]
ini:
- {key: scp_if_ssh, section: ssh_connection}
DEFAULT_SELINUX_SPECIAL_FS:
name: Problematic file systems
default: fuse, nfs, vboxsf, ramfs, 9p
description:
      - "Some filesystems do not support safe operations and/or return inconsistent errors;
        this setting makes Ansible 'tolerate' those in the list without causing fatal errors."
- Data corruption may occur and writes are not always verified when a filesystem is in the list.
env: []
ini:
- {key: special_context_filesystems, section: selinux}
type: list
DEFAULT_SFTP_BATCH_MODE:
# TODO: move to ssh plugin
default: True
description: 'TODO: write it'
env: [{name: ANSIBLE_SFTP_BATCH_MODE}]
ini:
- {key: sftp_batch_mode, section: ssh_connection}
type: boolean
yaml: {key: ssh_connection.sftp_batch_mode}
DEFAULT_SQUASH_ACTIONS:
name: Squashable actions
default: apk, apt, dnf, homebrew, openbsd_pkg, pacman, pip, pkgng, yum, zypper
description:
- Ansible can optimise actions that call modules that support list parameters when using ``with_`` looping.
Instead of calling the module once for each item, the module is called once with the full list.
- The default value for this setting is only for certain package managers, but it can be used for any module.
- Currently, this is only supported for modules that have a name or pkg parameter, and only when the item is the only thing being passed to the parameter.
env: [{name: ANSIBLE_SQUASH_ACTIONS}]
ini:
- {key: squash_actions, section: defaults}
type: list
version_added: "2.0"
deprecated:
why: Loop squashing is deprecated and this configuration will no longer be used
version: "2.11"
alternatives: a list directly with the module argument
DEFAULT_SSH_TRANSFER_METHOD:
# TODO: move to ssh plugin
default:
description: 'unused?'
# - "Preferred method to use when transferring files over ssh"
# - Setting to smart will try them until one succeeds or they all fail
#choices: ['sftp', 'scp', 'dd', 'smart']
env: [{name: ANSIBLE_SSH_TRANSFER_METHOD}]
ini:
- {key: transfer_method, section: ssh_connection}
DEFAULT_STDOUT_CALLBACK:
name: Main display callback plugin
default: default
description:
- "Set the main callback used to display Ansible output, you can only have one at a time."
- You can have many other callbacks, but just one can be in charge of stdout.
env: [{name: ANSIBLE_STDOUT_CALLBACK}]
ini:
- {key: stdout_callback, section: defaults}
ENABLE_TASK_DEBUGGER:
name: Whether to enable the task debugger
default: False
description:
- Whether or not to enable the task debugger, this previously was done as a strategy plugin.
- Now all strategy plugins can inherit this behavior. The debugger defaults to activating when
  a task is failed or unreachable. Use the debugger keyword for more flexibility.
type: boolean
env: [{name: ANSIBLE_ENABLE_TASK_DEBUGGER}]
ini:
- {key: enable_task_debugger, section: defaults}
version_added: "2.5"
TASK_DEBUGGER_IGNORE_ERRORS:
name: Whether a failed task with ignore_errors=True will still invoke the debugger
default: True
description:
- This option defines whether the task debugger will be invoked on a failed task when ignore_errors=True
is specified.
- True specifies that the debugger will honor ignore_errors, False will not honor ignore_errors.
type: boolean
env: [{name: ANSIBLE_TASK_DEBUGGER_IGNORE_ERRORS}]
ini:
- {key: task_debugger_ignore_errors, section: defaults}
version_added: "2.7"
DEFAULT_STRATEGY:
name: Implied strategy
default: 'linear'
description: Set the default strategy used for plays.
env: [{name: ANSIBLE_STRATEGY}]
ini:
- {key: strategy, section: defaults}
version_added: "2.3"
DEFAULT_STRATEGY_PLUGIN_PATH:
name: Strategy Plugins Path
description: Colon separated paths in which Ansible will search for Strategy Plugins.
default: ~/.ansible/plugins/strategy:/usr/share/ansible/plugins/strategy
env: [{name: ANSIBLE_STRATEGY_PLUGINS}]
ini:
- {key: strategy_plugins, section: defaults}
type: pathspec
DEFAULT_SU:
default: False
description: 'Toggle the use of "su" for tasks.'
env: [{name: ANSIBLE_SU}]
ini:
- {key: su, section: defaults}
type: boolean
yaml: {key: defaults.su}
DEFAULT_SYSLOG_FACILITY:
name: syslog facility
default: LOG_USER
description: Syslog facility to use when Ansible logs to the remote target
env: [{name: ANSIBLE_SYSLOG_FACILITY}]
ini:
- {key: syslog_facility, section: defaults}
DEFAULT_TASK_INCLUDES_STATIC:
name: Task include static
default: False
description:
- The `include` tasks can be static or dynamic, this toggles the default expected behaviour if autodetection fails and it is not explicitly set in task.
env: [{name: ANSIBLE_TASK_INCLUDES_STATIC}]
ini:
- {key: task_includes_static, section: defaults}
type: boolean
version_added: "2.1"
deprecated:
why: include itself is deprecated and this setting will not matter in the future
version: "2.12"
alternatives: None, as its already built into the decision between include_tasks and import_tasks
DEFAULT_TERMINAL_PLUGIN_PATH:
name: Terminal Plugins Path
default: ~/.ansible/plugins/terminal:/usr/share/ansible/plugins/terminal
description: Colon separated paths in which Ansible will search for Terminal Plugins.
env: [{name: ANSIBLE_TERMINAL_PLUGINS}]
ini:
- {key: terminal_plugins, section: defaults}
type: pathspec
DEFAULT_TEST_PLUGIN_PATH:
name: Jinja2 Test Plugins Path
description: Colon separated paths in which Ansible will search for Jinja2 Test Plugins.
default: ~/.ansible/plugins/test:/usr/share/ansible/plugins/test
env: [{name: ANSIBLE_TEST_PLUGINS}]
ini:
- {key: test_plugins, section: defaults}
type: pathspec
DEFAULT_TIMEOUT:
name: Connection timeout
default: 10
description: This is the default timeout for connection plugins to use.
env: [{name: ANSIBLE_TIMEOUT}]
ini:
- {key: timeout, section: defaults}
type: integer
DEFAULT_TRANSPORT:
name: Connection plugin
default: smart
description: "Default connection plugin to use, the 'smart' option will toggle between 'ssh' and 'paramiko' depending on controller OS and ssh versions"
env: [{name: ANSIBLE_TRANSPORT}]
ini:
- {key: transport, section: defaults}
DEFAULT_UNDEFINED_VAR_BEHAVIOR:
name: Jinja2 fail on undefined
default: True
version_added: "1.3"
description:
- When True, this causes ansible templating to fail steps that reference variable names that are likely mistyped.
- "Otherwise, any '{{ template_expression }}' that contains undefined variables will be rendered in a template or ansible action line exactly as written."
env: [{name: ANSIBLE_ERROR_ON_UNDEFINED_VARS}]
ini:
- {key: error_on_undefined_vars, section: defaults}
type: boolean
DEFAULT_VARS_PLUGIN_PATH:
name: Vars Plugins Path
default: ~/.ansible/plugins/vars:/usr/share/ansible/plugins/vars
description: Colon separated paths in which Ansible will search for Vars Plugins.
env: [{name: ANSIBLE_VARS_PLUGINS}]
ini:
- {key: vars_plugins, section: defaults}
type: pathspec
# TODO: unused?
#DEFAULT_VAR_COMPRESSION_LEVEL:
# default: 0
# description: 'TODO: write it'
# env: [{name: ANSIBLE_VAR_COMPRESSION_LEVEL}]
# ini:
# - {key: var_compression_level, section: defaults}
# type: integer
# yaml: {key: defaults.var_compression_level}
DEFAULT_VAULT_ID_MATCH:
name: Force vault id match
default: False
description: 'If true, decrypting vaults with a vault id will only try the password from the matching vault-id'
env: [{name: ANSIBLE_VAULT_ID_MATCH}]
ini:
- {key: vault_id_match, section: defaults}
yaml: {key: defaults.vault_id_match}
DEFAULT_VAULT_IDENTITY:
name: Vault id label
default: default
description: 'The label to use for the default vault id label in cases where a vault id label is not provided'
env: [{name: ANSIBLE_VAULT_IDENTITY}]
ini:
- {key: vault_identity, section: defaults}
yaml: {key: defaults.vault_identity}
DEFAULT_VAULT_ENCRYPT_IDENTITY:
name: Vault id to use for encryption
default:
description: 'The vault_id to use for encrypting by default. If multiple vault_ids are provided, this specifies which to use for encryption. The --encrypt-vault-id cli option overrides the configured value.'
env: [{name: ANSIBLE_VAULT_ENCRYPT_IDENTITY}]
ini:
- {key: vault_encrypt_identity, section: defaults}
yaml: {key: defaults.vault_encrypt_identity}
DEFAULT_VAULT_IDENTITY_LIST:
name: Default vault ids
default: []
description: 'A list of vault-ids to use by default. Equivalent to multiple --vault-id args. Vault-ids are tried in order.'
env: [{name: ANSIBLE_VAULT_IDENTITY_LIST}]
ini:
- {key: vault_identity_list, section: defaults}
type: list
yaml: {key: defaults.vault_identity_list}
DEFAULT_VAULT_PASSWORD_FILE:
name: Vault password file
default: ~
description: 'The vault password file to use. Equivalent to --vault-password-file or --vault-id'
env: [{name: ANSIBLE_VAULT_PASSWORD_FILE}]
ini:
- {key: vault_password_file, section: defaults}
type: path
yaml: {key: defaults.vault_password_file}
DEFAULT_VERBOSITY:
name: Verbosity
default: 0
description: Sets the default verbosity, equivalent to the number of ``-v`` passed in the command line.
env: [{name: ANSIBLE_VERBOSITY}]
ini:
- {key: verbosity, section: defaults}
type: integer
DEPRECATION_WARNINGS:
name: Deprecation messages
default: True
description: "Toggle to control the showing of deprecation warnings"
env: [{name: ANSIBLE_DEPRECATION_WARNINGS}]
ini:
- {key: deprecation_warnings, section: defaults}
type: boolean
DIFF_ALWAYS:
name: Show differences
default: False
description: Configuration toggle to tell modules to show differences when in 'changed' status, equivalent to ``--diff``.
env: [{name: ANSIBLE_DIFF_ALWAYS}]
ini:
- {key: always, section: diff}
type: bool
DIFF_CONTEXT:
name: Difference context
default: 3
description: How many lines of context to show when displaying the differences between files.
env: [{name: ANSIBLE_DIFF_CONTEXT}]
ini:
- {key: context, section: diff}
type: integer
DISPLAY_ARGS_TO_STDOUT:
name: Show task arguments
default: False
description:
- "Normally ``ansible-playbook`` will print a header for each task that is run.
These headers will contain the name: field from the task if you specified one.
If you didn't then ``ansible-playbook`` uses the task's action to help you tell which task is presently running.
Sometimes you run many of the same action and so you want more information about the task to differentiate it from others of the same action.
If you set this variable to True in the config then ``ansible-playbook`` will also include the task's arguments in the header."
- "This setting defaults to False because there is a chance that you have sensitive values in your parameters and
you do not want those to be printed."
- "If you set this to True you should be sure that you have secured your environment's stdout
(no one can shoulder surf your screen and you aren't saving stdout to an insecure file) or
made sure that all of your playbooks explicitly added the ``no_log: True`` parameter to tasks which have sensitive values
See How do I keep secret data in my playbook? for more information."
env: [{name: ANSIBLE_DISPLAY_ARGS_TO_STDOUT}]
ini:
- {key: display_args_to_stdout, section: defaults}
type: boolean
version_added: "2.1"
DISPLAY_SKIPPED_HOSTS:
name: Show skipped results
default: True
description: "Toggle to control displaying skipped task/host entries in a task in the default callback"
env:
- name: DISPLAY_SKIPPED_HOSTS
deprecated:
why: environment variables without "ANSIBLE_" prefix are deprecated
version: "2.12"
alternatives: the "ANSIBLE_DISPLAY_SKIPPED_HOSTS" environment variable
- name: ANSIBLE_DISPLAY_SKIPPED_HOSTS
ini:
- {key: display_skipped_hosts, section: defaults}
type: boolean
DOCSITE_ROOT_URL:
name: Root docsite URL
default: https://docs.ansible.com/ansible/
description: Root docsite URL used to generate docs URLs in warning/error text;
must be an absolute URL with valid scheme and trailing slash.
ini:
- {key: docsite_root_url, section: defaults}
version_added: "2.8"
DUPLICATE_YAML_DICT_KEY:
name: Controls ansible behaviour when finding duplicate keys in YAML.
default: warn
description:
- By default Ansible will issue a warning when a duplicate dict key is encountered in YAML.
- These warnings can be silenced by setting this option to ``ignore``.
env: [{name: ANSIBLE_DUPLICATE_YAML_DICT_KEY}]
ini:
- {key: duplicate_dict_key, section: defaults}
type: string
choices: ['warn', 'error', 'ignore']
version_added: "2.9"
ERROR_ON_MISSING_HANDLER:
name: Missing handler error
default: True
description: "Toggle to allow missing handlers to become a warning instead of an error when notifying."
env: [{name: ANSIBLE_ERROR_ON_MISSING_HANDLER}]
ini:
- {key: error_on_missing_handler, section: defaults}
type: boolean
CONNECTION_FACTS_MODULES:
name: Map of connections to fact modules
default:
eos: eos_facts
frr: frr_facts
ios: ios_facts
iosxr: iosxr_facts
junos: junos_facts
nxos: nxos_facts
vyos: vyos_facts
description: "Which modules to run during a play's fact gathering stage based on connection"
env: [{name: ANSIBLE_CONNECTION_FACTS_MODULES}]
ini:
- {key: connection_facts_modules, section: defaults}
type: dict
FACTS_MODULES:
name: Gather Facts Modules
default:
- smart
description: "Which modules to run during a play's fact gathering stage, using the default of 'smart' will try to figure it out based on connection type."
env: [{name: ANSIBLE_FACTS_MODULES}]
ini:
- {key: facts_modules, section: defaults}
type: list
vars:
- name: ansible_facts_modules
GALAXY_IGNORE_CERTS:
name: Galaxy validate certs
default: False
description:
- If set to yes, ansible-galaxy will not validate TLS certificates.
This can be useful for testing against a server with a self-signed certificate.
env: [{name: ANSIBLE_GALAXY_IGNORE}]
ini:
- {key: ignore_certs, section: galaxy}
type: boolean
GALAXY_ROLE_SKELETON:
name: Galaxy role or collection skeleton directory
default:
description: Role or collection skeleton directory to use as a template for the ``init`` action in ``ansible-galaxy``, same as ``--role-skeleton``.
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON}]
ini:
- {key: role_skeleton, section: galaxy}
type: path
GALAXY_ROLE_SKELETON_IGNORE:
name: Galaxy skeleton ignore
default: ["^.git$", "^.*/.git_keep$"]
description: patterns of files to ignore inside a Galaxy role or collection skeleton directory
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON_IGNORE}]
ini:
- {key: role_skeleton_ignore, section: galaxy}
type: list
# TODO: unused?
#GALAXY_SCMS:
# name: Galaxy SCMS
# default: git, hg
# description: Available galaxy source control management systems.
# env: [{name: ANSIBLE_GALAXY_SCMS}]
# ini:
# - {key: scms, section: galaxy}
# type: list
GALAXY_SERVER:
default: https://galaxy.ansible.com
description: "URL to prepend when roles don't specify the full URI, assume they are referencing this server as the source."
env: [{name: ANSIBLE_GALAXY_SERVER}]
ini:
- {key: server, section: galaxy}
yaml: {key: galaxy.server}
GALAXY_TOKEN:
default: null
description: "GitHub personal access token"
env: [{name: ANSIBLE_GALAXY_TOKEN}]
ini:
- {key: token, section: galaxy}
yaml: {key: galaxy.token}
HOST_KEY_CHECKING:
name: Check host keys
default: True
description: 'Set this to "False" if you want to avoid host key checking by the underlying tools Ansible uses to connect to the host'
env: [{name: ANSIBLE_HOST_KEY_CHECKING}]
ini:
- {key: host_key_checking, section: defaults}
type: boolean
HOST_PATTERN_MISMATCH:
name: Control host pattern mismatch behaviour
default: 'warning'
description: This setting changes the behaviour of mismatched host patterns; it allows you to force a fatal error, issue a warning, or just ignore it.
env: [{name: ANSIBLE_HOST_PATTERN_MISMATCH}]
ini:
- {key: host_pattern_mismatch, section: inventory}
choices: ['warning', 'error', 'ignore']
version_added: "2.8"
INTERPRETER_PYTHON:
name: Python interpreter path (or automatic discovery behavior) used for module execution
default: auto_legacy
env: [{name: ANSIBLE_PYTHON_INTERPRETER}]
ini:
- {key: interpreter_python, section: defaults}
vars:
- {name: ansible_python_interpreter}
version_added: "2.8"
description:
- Path to the Python interpreter to be used for module execution on remote targets, or an automatic discovery mode.
Supported discovery modes are ``auto``, ``auto_silent``, and ``auto_legacy`` (the default). All discovery modes
employ a lookup table to use the included system Python (on distributions known to include one), falling back to a
fixed ordered list of well-known Python interpreter locations if a platform-specific default is not available. The
fallback behavior will issue a warning that the interpreter should be set explicitly (since interpreters installed
later may change which one is used). This warning behavior can be disabled by setting ``auto_silent``. The default
value of ``auto_legacy`` provides all the same behavior, but for backwards-compatibility with older Ansible releases
that always defaulted to ``/usr/bin/python``, will use that interpreter if present (and issue a warning that the
default behavior will change to that of ``auto`` in a future Ansible release).
INTERPRETER_PYTHON_DISTRO_MAP:
name: Mapping of known included platform pythons for various Linux distros
default:
centos: &rhelish
'6': /usr/bin/python
'8': /usr/libexec/platform-python
fedora:
'23': /usr/bin/python3
redhat: *rhelish
rhel: *rhelish
ubuntu:
'14': /usr/bin/python
'16': /usr/bin/python3
version_added: "2.8"
# FUTURE: add inventory override once we're sure it can't be abused by a rogue target
# FUTURE: add a platform layer to the map so we could use for, eg, freebsd/macos/etc?
INTERPRETER_PYTHON_FALLBACK:
name: Ordered list of Python interpreters to check for in discovery
default:
- /usr/bin/python
- python3.7
- python3.6
- python3.5
- python2.7
- python2.6
- /usr/libexec/platform-python
- /usr/bin/python3
- python
# FUTURE: add inventory override once we're sure it can't be abused by a rogue target
version_added: "2.8"
TRANSFORM_INVALID_GROUP_CHARS:
name: Transform invalid characters in group names
default: 'never'
description:
- Make ansible transform invalid characters in group names supplied by inventory sources.
- If 'never' it will allow for the group name but warn about the issue.
- When 'always' it will replace any invalid characters with '_' (underscore) and warn the user.
- When 'silently', it does the same as 'always' sans the warnings.
env: [{name: ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS}]
ini:
- {key: force_valid_group_names, section: defaults}
type: string
choices: ['always', 'never', 'silently']
version_added: '2.8'
INVALID_TASK_ATTRIBUTE_FAILED:
name: Controls whether invalid attributes for a task result in errors instead of warnings
default: True
description: If 'false', invalid attributes for a task will result in warnings instead of errors
type: boolean
env:
- name: ANSIBLE_INVALID_TASK_ATTRIBUTE_FAILED
ini:
- key: invalid_task_attribute_failed
section: defaults
version_added: "2.7"
INVENTORY_ANY_UNPARSED_IS_FAILED:
name: Controls whether any unparseable inventory source is a fatal error
default: False
description: >
If 'true', it is a fatal error when any given inventory source
cannot be successfully parsed by any available inventory plugin;
otherwise, this situation only attracts a warning.
type: boolean
env: [{name: ANSIBLE_INVENTORY_ANY_UNPARSED_IS_FAILED}]
ini:
- {key: any_unparsed_is_failed, section: inventory}
version_added: "2.7"
INVENTORY_CACHE_ENABLED:
name: Inventory caching enabled
default: False
description: Toggle to turn on inventory caching
env: [{name: ANSIBLE_INVENTORY_CACHE}]
ini:
- {key: cache, section: inventory}
type: bool
INVENTORY_CACHE_PLUGIN:
name: Inventory cache plugin
description: The plugin for caching inventory. If INVENTORY_CACHE_PLUGIN is not provided CACHE_PLUGIN can be used instead.
env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN}]
ini:
- {key: cache_plugin, section: inventory}
INVENTORY_CACHE_PLUGIN_CONNECTION:
name: Inventory cache plugin URI to override the defaults section
description: The inventory cache connection. If INVENTORY_CACHE_PLUGIN_CONNECTION is not provided CACHE_PLUGIN_CONNECTION can be used instead.
env: [{name: ANSIBLE_INVENTORY_CACHE_CONNECTION}]
ini:
- {key: cache_connection, section: inventory}
INVENTORY_CACHE_PLUGIN_PREFIX:
name: Inventory cache plugin table prefix
description: The table prefix for the cache plugin. If INVENTORY_CACHE_PLUGIN_PREFIX is not provided CACHE_PLUGIN_PREFIX can be used instead.
env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN_PREFIX}]
default: ansible_facts
ini:
- {key: cache_prefix, section: inventory}
INVENTORY_CACHE_TIMEOUT:
name: Inventory cache plugin expiration timeout
description: Expiration timeout for the inventory cache plugin data. If INVENTORY_CACHE_TIMEOUT is not provided CACHE_TIMEOUT can be used instead.
default: 3600
env: [{name: ANSIBLE_INVENTORY_CACHE_TIMEOUT}]
ini:
- {key: cache_timeout, section: inventory}
INVENTORY_ENABLED:
name: Active Inventory plugins
default: ['host_list', 'script', 'auto', 'yaml', 'ini', 'toml']
description: List of enabled inventory plugins, it also determines the order in which they are used.
env: [{name: ANSIBLE_INVENTORY_ENABLED}]
ini:
- {key: enable_plugins, section: inventory}
type: list
INVENTORY_EXPORT:
name: Set ansible-inventory into export mode
default: False
description: Controls whether ansible-inventory accurately reflects Ansible's view into inventory, or whether its output is optimized for exporting.
env: [{name: ANSIBLE_INVENTORY_EXPORT}]
ini:
- {key: export, section: inventory}
type: bool
INVENTORY_IGNORE_EXTS:
name: Inventory ignore extensions
default: "{{(BLACKLIST_EXTS + ( '.orig', '.ini', '.cfg', '.retry'))}}"
description: List of extensions to ignore when using a directory as an inventory source
env: [{name: ANSIBLE_INVENTORY_IGNORE}]
ini:
- {key: inventory_ignore_extensions, section: defaults}
- {key: ignore_extensions, section: inventory}
type: list
INVENTORY_IGNORE_PATTERNS:
name: Inventory ignore patterns
default: []
description: List of patterns to ignore when using a directory as an inventory source
env: [{name: ANSIBLE_INVENTORY_IGNORE_REGEX}]
ini:
- {key: inventory_ignore_patterns, section: defaults}
- {key: ignore_patterns, section: inventory}
type: list
INVENTORY_UNPARSED_IS_FAILED:
name: Unparsed Inventory failure
default: False
description: >
If 'true' it is a fatal error if every single potential inventory
source fails to parse, otherwise this situation will only attract a
warning.
env: [{name: ANSIBLE_INVENTORY_UNPARSED_FAILED}]
ini:
- {key: unparsed_is_failed, section: inventory}
type: bool
MAX_FILE_SIZE_FOR_DIFF:
name: Diff maximum file size
default: 104448
description: Maximum size of files to be considered for diff display
env: [{name: ANSIBLE_MAX_DIFF_SIZE}]
ini:
- {key: max_diff_size, section: defaults}
type: int
NETWORK_GROUP_MODULES:
name: Network module families
default: [eos, nxos, ios, iosxr, junos, enos, ce, vyos, sros, dellos9, dellos10, dellos6, asa, aruba, aireos, bigip, ironware, onyx, netconf]
description: 'TODO: write it'
env:
- name: NETWORK_GROUP_MODULES
deprecated:
why: environment variables without "ANSIBLE_" prefix are deprecated
version: "2.12"
alternatives: the "ANSIBLE_NETWORK_GROUP_MODULES" environment variable
- name: ANSIBLE_NETWORK_GROUP_MODULES
ini:
- {key: network_group_modules, section: defaults}
type: list
yaml: {key: defaults.network_group_modules}
INJECT_FACTS_AS_VARS:
default: True
description:
- Facts are available inside the `ansible_facts` variable, this setting also pushes them as their own vars in the main namespace.
- Unlike inside the `ansible_facts` dictionary, these will have an `ansible_` prefix.
env: [{name: ANSIBLE_INJECT_FACT_VARS}]
ini:
- {key: inject_facts_as_vars, section: defaults}
type: boolean
version_added: "2.5"
OLD_PLUGIN_CACHE_CLEARING:
description: Previously Ansible would only clear some of the plugin loading caches when loading new roles; this led to behaviours in which a plugin loaded in previous plays would be unexpectedly 'sticky'. This setting allows a return to that behaviour.
env: [{name: ANSIBLE_OLD_PLUGIN_CACHE_CLEAR}]
ini:
- {key: old_plugin_cache_clear, section: defaults}
type: boolean
default: False
version_added: "2.8"
PARAMIKO_HOST_KEY_AUTO_ADD:
# TODO: move to plugin
default: False
description: 'TODO: write it'
env: [{name: ANSIBLE_PARAMIKO_HOST_KEY_AUTO_ADD}]
ini:
- {key: host_key_auto_add, section: paramiko_connection}
type: boolean
PARAMIKO_LOOK_FOR_KEYS:
name: look for keys
default: True
description: 'TODO: write it'
env: [{name: ANSIBLE_PARAMIKO_LOOK_FOR_KEYS}]
ini:
- {key: look_for_keys, section: paramiko_connection}
type: boolean
PERSISTENT_CONTROL_PATH_DIR:
name: Persistence socket path
default: ~/.ansible/pc
description: Path to socket to be used by the connection persistence system.
env: [{name: ANSIBLE_PERSISTENT_CONTROL_PATH_DIR}]
ini:
- {key: control_path_dir, section: persistent_connection}
type: path
PERSISTENT_CONNECT_TIMEOUT:
name: Persistence timeout
default: 30
description: This controls how long the persistent connection will remain idle before it is destroyed.
env: [{name: ANSIBLE_PERSISTENT_CONNECT_TIMEOUT}]
ini:
- {key: connect_timeout, section: persistent_connection}
type: integer
PERSISTENT_CONNECT_RETRY_TIMEOUT:
name: Persistence connection retry timeout
default: 15
description: This controls the retry timeout for the persistent connection to connect to the local domain socket.
env: [{name: ANSIBLE_PERSISTENT_CONNECT_RETRY_TIMEOUT}]
ini:
- {key: connect_retry_timeout, section: persistent_connection}
type: integer
PERSISTENT_COMMAND_TIMEOUT:
name: Persistence command timeout
default: 30
description: This controls the amount of time to wait for a response from the remote device before timing out the persistent connection.
env: [{name: ANSIBLE_PERSISTENT_COMMAND_TIMEOUT}]
ini:
- {key: command_timeout, section: persistent_connection}
type: int
PLAYBOOK_VARS_ROOT:
name: playbook vars files root
default: top
version_added: "2.4.1"
description:
- This sets which playbook dirs will be used as a root to process vars plugins, which includes finding host_vars/group_vars
- The ``top`` option follows the traditional behaviour of using the top playbook in the chain to find the root directory.
- The ``bottom`` option follows the 2.4.0 behaviour of using the current playbook to find the root directory.
- The ``all`` option examines from the first parent to the current playbook.
env: [{name: ANSIBLE_PLAYBOOK_VARS_ROOT}]
ini:
- {key: playbook_vars_root, section: defaults}
choices: [ top, bottom, all ]
PLUGIN_FILTERS_CFG:
name: Config file for limiting valid plugins
default: null
version_added: "2.5.0"
description:
- "A path to configuration for filtering which plugins installed on the system are allowed to be used."
- "See :ref:`plugin_filtering_config` for details of the filter file's format."
- " The default is /etc/ansible/plugin_filters.yml"
ini:
- key: plugin_filters_cfg
section: default
deprecated:
why: Specifying "plugin_filters_cfg" under the "default" section is deprecated
version: "2.12"
alternatives: the "defaults" section instead
- key: plugin_filters_cfg
section: defaults
type: path
PYTHON_MODULE_RLIMIT_NOFILE:
name: Adjust maximum file descriptor soft limit during Python module execution
description:
- Attempts to set RLIMIT_NOFILE soft limit to the specified value when executing Python modules (can speed up subprocess usage on
Python 2.x. See https://bugs.python.org/issue11284). The value will be limited by the existing hard limit. Default
value of 0 does not attempt to adjust existing system-defined limits.
default: 0
env:
- {name: ANSIBLE_PYTHON_MODULE_RLIMIT_NOFILE}
ini:
- {key: python_module_rlimit_nofile, section: defaults}
vars:
- {name: ansible_python_module_rlimit_nofile}
version_added: '2.8'
RETRY_FILES_ENABLED:
name: Retry files
default: False
description: This controls whether a failed Ansible playbook should create a .retry file.
env: [{name: ANSIBLE_RETRY_FILES_ENABLED}]
ini:
- {key: retry_files_enabled, section: defaults}
type: bool
RETRY_FILES_SAVE_PATH:
name: Retry files path
default: ~
description: This sets the path in which Ansible will save .retry files when a playbook fails and retry files are enabled.
env: [{name: ANSIBLE_RETRY_FILES_SAVE_PATH}]
ini:
- {key: retry_files_save_path, section: defaults}
type: path
SHOW_CUSTOM_STATS:
name: Display custom stats
default: False
description: 'This adds the custom stats set via the set_stats plugin to the default output'
env: [{name: ANSIBLE_SHOW_CUSTOM_STATS}]
ini:
- {key: show_custom_stats, section: defaults}
type: bool
STRING_TYPE_FILTERS:
name: Filters to preserve strings
default: [string, to_json, to_nice_json, to_yaml, ppretty, json]
description:
- "This list of filters avoids 'type conversion' when templating variables"
- Useful when you want to avoid conversion into lists or dictionaries for JSON strings, for example.
env: [{name: ANSIBLE_STRING_TYPE_FILTERS}]
ini:
- {key: dont_type_filters, section: jinja2}
type: list
SYSTEM_WARNINGS:
name: System warnings
default: True
description:
- Allows disabling of warnings related to potential issues on the system running ansible itself (not on the managed hosts)
- These may include warnings about 3rd party packages or other conditions that should be resolved if possible.
env: [{name: ANSIBLE_SYSTEM_WARNINGS}]
ini:
- {key: system_warnings, section: defaults}
type: boolean
TAGS_RUN:
name: Run Tags
default: []
type: list
description: Default list of tags to run in your plays; Skip Tags has precedence.
env: [{name: ANSIBLE_RUN_TAGS}]
ini:
- {key: run, section: tags}
version_added: "2.5"
TAGS_SKIP:
name: Skip Tags
default: []
type: list
description: Default list of tags to skip in your plays; has precedence over Run Tags.
env: [{name: ANSIBLE_SKIP_TAGS}]
ini:
- {key: skip, section: tags}
version_added: "2.5"
USE_PERSISTENT_CONNECTIONS:
name: Persistence
default: False
description: Toggles the use of persistence for connections.
env: [{name: ANSIBLE_USE_PERSISTENT_CONNECTIONS}]
ini:
- {key: use_persistent_connections, section: defaults}
type: boolean
VARIABLE_PRECEDENCE:
name: Group variable precedence
default: ['all_inventory', 'groups_inventory', 'all_plugins_inventory', 'all_plugins_play', 'groups_plugins_inventory', 'groups_plugins_play']
description: Allows changing the group variable precedence merge order.
env: [{name: ANSIBLE_PRECEDENCE}]
ini:
- {key: precedence, section: defaults}
type: list
version_added: "2.4"
YAML_FILENAME_EXTENSIONS:
name: Valid YAML extensions
default: [".yml", ".yaml", ".json"]
description:
- "Check all of these extensions when looking for 'variable' files which should be YAML or JSON or vaulted versions of these."
- 'This affects vars_files, include_vars, inventory and vars plugins among others.'
env:
- name: ANSIBLE_YAML_FILENAME_EXT
ini:
- section: defaults
key: yaml_valid_extensions
type: list
NETCONF_SSH_CONFIG:
description: This variable is used to enable a bastion/jump host with the netconf connection. If set to True, the bastion/jump
  host ssh settings should be present in the ~/.ssh/config file; alternatively it can be set
  to a custom ssh configuration file path from which to read the bastion/jump host settings.
env: [{name: ANSIBLE_NETCONF_SSH_CONFIG}]
ini:
- {key: ssh_config, section: netconf_connection}
yaml: {key: netconf_connection.ssh_config}
default: null
STRING_CONVERSION_ACTION:
version_added: '2.8'
description:
- Action to take when a module parameter value is converted to a string (this does not affect variables).
For string parameters, values such as '1.00', "['a', 'b',]", and 'yes', 'y', etc.
will be converted by the YAML parser unless fully quoted.
- Valid options are 'error', 'warn', and 'ignore'.
- Since 2.8, this option defaults to 'warn' but will change to 'error' in 2.12.
default: 'warn'
env:
- name: ANSIBLE_STRING_CONVERSION_ACTION
ini:
- section: defaults
key: string_conversion_action
type: string
VERBOSE_TO_STDERR:
version_added: '2.8'
description:
- Force 'verbose' option to use stderr instead of stdout
default: False
env:
- name: ANSIBLE_VERBOSE_TO_STDERR
ini:
- section: defaults
key: verbose_to_stderr
type: bool
...
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,102 |
IAM Password Policy errors when no password expiration set
|
##### SUMMARY
When attempting to set no password expiration policy botocore spits out lovely validation errors:
```
The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_iam_password_policy_payload_gn34QS/__main__.py", line 140, in update_password_policy
HardExpiry=pw_expire
File "/usr/lib/python2.7/site-packages/boto3/resources/factory.py", line 520, in do_action
response = action(self, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/boto3/resources/action.py", line 83, in __call__
response = getattr(parent.meta.client, operation_name)(**params)
File "/usr/lib/python2.7/site-packages/botocore/client.py", line 312, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/lib/python2.7/site-packages/botocore/client.py", line 575, in _make_api_call
api_params, operation_model, context=request_context)
File "/usr/lib/python2.7/site-packages/botocore/client.py", line 630, in _convert_to_request_dict
api_params, operation_model)
File "/usr/lib/python2.7/site-packages/botocore/validate.py", line 291, in serialize_to_request
raise ParamValidationError(report=report.generate_report())
ParamValidationError: Parameter validation failed:
Invalid range for parameter MaxPasswordAge, value: 0, valid range: 1-inf
fatal: [it-cloud-aws-login -> localhost]: FAILED! => {
"boto3_version": "1.4.6",
"botocore_version": "1.6.0",
"changed": false,
"invocation": {
"module_args": {
"allow_pw_change": true,
"aws_access_key": null,
"aws_secret_key": null,
"debug_botocore_endpoint_logs": false,
"ec2_url": null,
"min_pw_length": 14,
"profile": null,
"pw_expire": false,
"pw_max_age": 0,
"pw_reuse_prevent": 6,
"region": null,
"require_lowercase": true,
"require_numbers": true,
"require_symbols": true,
"require_uppercase": true,
"security_token": null,
"state": "present",
"validate_certs": true
}
},
"msg": "Couldn't update IAM Password Policy: Parameter validation failed:\nInvalid range for parameter MaxPasswordAge, value: 0, valid range: 1-inf"
}
```
I tried
- not passing a value: `Couldn't update IAM Password Policy: Parameter validation failed:\nInvalid type for parameter MaxPasswordAge, value: None, type: <type 'NoneType'>, valid types: <type 'int'>, <type 'long'>`
- Passing '0': `Invalid range for parameter MaxPasswordAge, value: 0, valid range: 1-inf`
- Passing 'inf': `argument pw_max_age is of type <type 'str'> and we were unable to convert to int: <type 'str'> cannot be converted to an int`
It would appear that botocore expects you *not* to pass a value, at which point it sets it automatically to 0...
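A minimal sketch of the workaround those errors point to: build the update kwargs dynamically and omit `MaxPasswordAge` entirely when no expiration is wanted, rather than passing 0. The function name and defaults below are hypothetical, not the module's actual code:

```python
# Hypothetical helper: omit the 1-inf range parameters instead of passing 0,
# which botocore rejects with ParamValidationError.
def build_policy_params(pw_max_age, pw_reuse_prevent, min_pw_length=14):
    params = {'MinimumPasswordLength': min_pw_length}
    if pw_max_age:  # 0 / None means "passwords never expire" -> leave it out
        params['MaxPasswordAge'] = pw_max_age
    if pw_reuse_prevent:
        params['PasswordReusePrevention'] = pw_reuse_prevent
    return params

print(build_policy_params(pw_max_age=0, pw_reuse_prevent=6))
# {'MinimumPasswordLength': 14, 'PasswordReusePrevention': 6}
```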
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
iam_password_policy
##### ANSIBLE VERSION
```
$ ansible --version
ansible 2.8.1
config file = /home/mchappel/.ansible.cfg
configured module search path = [u'/home/mchappel/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Jun 11 2019, 12:19:05) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
```
##### CONFIGURATION
```
DEFAULT_CALLBACK_WHITELIST(/home/mchappel/.ansible.cfg) = [u'timer', u'profile_tasks']
HOST_KEY_CHECKING(/home/mchappel/.ansible.cfg) = False
RETRY_FILES_ENABLED(/home/mchappel/.ansible.cfg) = False
```
##### OS / ENVIRONMENT
RHEL7
```
python2-botocore-1.6.0-1.el7.noarch
python2-boto-2.45.0-3.el7.noarch
python2-boto3-1.4.6-1.el7.noarch
```
##### STEPS TO REPRODUCE
```
---
- name: 'Password policy for AWS account'
iam_password_policy:
state: present
allow_pw_change: yes
min_pw_length: 14
require_symbols: yes
require_numbers: yes
require_uppercase: yes
require_lowercase: yes
pw_reuse_prevent: 6
pw_expire: false
pw_max_age: 0
```
##### EXPECTED RESULTS
Policy successfully set with no password expiry
##### ACTUAL RESULTS
Errors (see above)
|
https://github.com/ansible/ansible/issues/59102
|
https://github.com/ansible/ansible/pull/59848
|
3eeaf2f9746fd6709bd565d88d0680019f261de3
|
934d25a820c29b885329675453748e4f88750c63
| 2019-07-15T14:36:01Z |
python
| 2019-07-31T16:03:30Z |
changelogs/fragments/59848-iam-password-policy-Fix-no-expiration.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,102 |
IAM Password Policy errors when no password expiration set
|
##### SUMMARY
When attempting to set a policy with no password expiration, botocore spits out lovely validation errors:
```
The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_iam_password_policy_payload_gn34QS/__main__.py", line 140, in update_password_policy
HardExpiry=pw_expire
File "/usr/lib/python2.7/site-packages/boto3/resources/factory.py", line 520, in do_action
response = action(self, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/boto3/resources/action.py", line 83, in __call__
response = getattr(parent.meta.client, operation_name)(**params)
File "/usr/lib/python2.7/site-packages/botocore/client.py", line 312, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/lib/python2.7/site-packages/botocore/client.py", line 575, in _make_api_call
api_params, operation_model, context=request_context)
File "/usr/lib/python2.7/site-packages/botocore/client.py", line 630, in _convert_to_request_dict
api_params, operation_model)
File "/usr/lib/python2.7/site-packages/botocore/validate.py", line 291, in serialize_to_request
raise ParamValidationError(report=report.generate_report())
ParamValidationError: Parameter validation failed:
Invalid range for parameter MaxPasswordAge, value: 0, valid range: 1-inf
fatal: [it-cloud-aws-login -> localhost]: FAILED! => {
"boto3_version": "1.4.6",
"botocore_version": "1.6.0",
"changed": false,
"invocation": {
"module_args": {
"allow_pw_change": true,
"aws_access_key": null,
"aws_secret_key": null,
"debug_botocore_endpoint_logs": false,
"ec2_url": null,
"min_pw_length": 14,
"profile": null,
"pw_expire": false,
"pw_max_age": 0,
"pw_reuse_prevent": 6,
"region": null,
"require_lowercase": true,
"require_numbers": true,
"require_symbols": true,
"require_uppercase": true,
"security_token": null,
"state": "present",
"validate_certs": true
}
},
"msg": "Couldn't update IAM Password Policy: Parameter validation failed:\nInvalid range for parameter MaxPasswordAge, value: 0, valid range: 1-inf"
}
```
I tried
- not passing a value: `Couldn't update IAM Password Policy: Parameter validation failed:\nInvalid type for parameter MaxPasswordAge, value: None, type: <type 'NoneType'>, valid types: <type 'int'>, <type 'long'>`
- Passing '0': `Invalid range for parameter MaxPasswordAge, value: 0, valid range: 1-inf`
- Passing 'inf': `argument pw_max_age is of type <type 'str'> and we were unable to convert to int: <type 'str'> cannot be converted to an int`
It would appear that botocore expects you *not* to pass a value, at which point it sets it automatically to 0...
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
iam_password_policy
##### ANSIBLE VERSION
```
$ ansible --version
ansible 2.8.1
config file = /home/mchappel/.ansible.cfg
configured module search path = [u'/home/mchappel/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Jun 11 2019, 12:19:05) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
```
##### CONFIGURATION
```
DEFAULT_CALLBACK_WHITELIST(/home/mchappel/.ansible.cfg) = [u'timer', u'profile_tasks']
HOST_KEY_CHECKING(/home/mchappel/.ansible.cfg) = False
RETRY_FILES_ENABLED(/home/mchappel/.ansible.cfg) = False
```
##### OS / ENVIRONMENT
RHEL7
```
python2-botocore-1.6.0-1.el7.noarch
python2-boto-2.45.0-3.el7.noarch
python2-boto3-1.4.6-1.el7.noarch
```
##### STEPS TO REPRODUCE
```
---
- name: 'Password policy for AWS account'
iam_password_policy:
state: present
allow_pw_change: yes
min_pw_length: 14
require_symbols: yes
require_numbers: yes
require_uppercase: yes
require_lowercase: yes
pw_reuse_prevent: 6
pw_expire: false
pw_max_age: 0
```
##### EXPECTED RESULTS
Policy successfully set with no password expiry
##### ACTUAL RESULTS
Errors (see above)
|
https://github.com/ansible/ansible/issues/59102
|
https://github.com/ansible/ansible/pull/59848
|
3eeaf2f9746fd6709bd565d88d0680019f261de3
|
934d25a820c29b885329675453748e4f88750c63
| 2019-07-15T14:36:01Z |
python
| 2019-07-31T16:03:30Z |
lib/ansible/modules/cloud/amazon/iam_password_policy.py
|
#!/usr/bin/python
# Copyright: (c) 2018, Aaron Smith <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: iam_password_policy
short_description: Update an IAM Password Policy
description:
- Module updates an IAM Password Policy on a given AWS account
version_added: "2.8"
requirements: [ 'botocore', 'boto3' ]
author:
- "Aaron Smith (@slapula)"
options:
state:
description:
- Specifies the overall state of the password policy.
required: true
choices: ['present', 'absent']
min_pw_length:
description:
- Minimum password length.
default: 6
aliases: [minimum_password_length]
require_symbols:
description:
- Require symbols in password.
default: false
type: bool
require_numbers:
description:
- Require numbers in password.
default: false
type: bool
require_uppercase:
description:
- Require uppercase letters in password.
default: false
type: bool
require_lowercase:
description:
- Require lowercase letters in password.
default: false
type: bool
allow_pw_change:
description:
- Allow users to change their password.
default: false
type: bool
aliases: [allow_password_change]
pw_max_age:
description:
- Maximum age for a password in days.
default: 0
aliases: [password_max_age]
pw_reuse_prevent:
description:
- Prevent re-use of passwords.
default: 0
aliases: [password_reuse_prevent, prevent_reuse]
pw_expire:
description:
        - Prevents users from changing an expired password.
default: false
type: bool
aliases: [password_expire, expire]
extends_documentation_fragment:
- aws
- ec2
'''
EXAMPLES = '''
- name: Password policy for AWS account
iam_password_policy:
state: present
min_pw_length: 8
require_symbols: false
require_numbers: true
require_uppercase: true
require_lowercase: true
allow_pw_change: true
pw_max_age: 60
pw_reuse_prevent: 5
pw_expire: false
'''
RETURN = ''' # '''
try:
import botocore
except ImportError:
pass # caught by AnsibleAWSModule
from ansible.module_utils.aws.core import AnsibleAWSModule
from ansible.module_utils.ec2 import boto3_conn, get_aws_connection_info, AWSRetry
from ansible.module_utils.ec2 import camel_dict_to_snake_dict, boto3_tag_list_to_ansible_dict
class IAMConnection(object):
def __init__(self, module):
try:
self.connection = module.resource('iam')
self.module = module
except Exception as e:
module.fail_json(msg="Failed to connect to AWS: %s" % str(e))
def update_password_policy(self, module, policy):
min_pw_length = module.params.get('min_pw_length')
require_symbols = module.params.get('require_symbols')
require_numbers = module.params.get('require_numbers')
require_uppercase = module.params.get('require_uppercase')
require_lowercase = module.params.get('require_lowercase')
allow_pw_change = module.params.get('allow_pw_change')
pw_max_age = module.params.get('pw_max_age')
pw_reuse_prevent = module.params.get('pw_reuse_prevent')
pw_expire = module.params.get('pw_expire')
        try:
            update_parameters = dict(
                MinimumPasswordLength=min_pw_length,
                RequireSymbols=require_symbols,
                RequireNumbers=require_numbers,
                RequireUppercaseCharacters=require_uppercase,
                RequireLowercaseCharacters=require_lowercase,
                AllowUsersToChangePassword=allow_pw_change,
                HardExpiry=pw_expire
            )
            # MaxPasswordAge and PasswordReusePrevention only accept values in
            # the range 1-inf; omit them entirely when unset (0) so the API
            # treats them as "no expiration" / "no reuse prevention".
            if pw_max_age:
                update_parameters.update(MaxPasswordAge=pw_max_age)
            if pw_reuse_prevent:
                update_parameters.update(PasswordReusePrevention=pw_reuse_prevent)
            results = policy.update(**update_parameters)
policy.reload()
except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
self.module.fail_json_aws(e, msg="Couldn't update IAM Password Policy")
return camel_dict_to_snake_dict(results)
def delete_password_policy(self, policy):
try:
results = policy.delete()
except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
if e.response['Error']['Code'] == 'NoSuchEntity':
self.module.exit_json(changed=False, task_status={'IAM': "Couldn't find IAM Password Policy"})
else:
self.module.fail_json_aws(e, msg="Couldn't delete IAM Password Policy")
return camel_dict_to_snake_dict(results)
def main():
module = AnsibleAWSModule(
argument_spec={
'state': dict(choices=['present', 'absent'], required=True),
'min_pw_length': dict(type='int', aliases=['minimum_password_length'], default=6),
'require_symbols': dict(type='bool', default=False),
'require_numbers': dict(type='bool', default=False),
'require_uppercase': dict(type='bool', default=False),
'require_lowercase': dict(type='bool', default=False),
'allow_pw_change': dict(type='bool', aliases=['allow_password_change'], default=False),
'pw_max_age': dict(type='int', aliases=['password_max_age'], default=0),
'pw_reuse_prevent': dict(type='int', aliases=['password_reuse_prevent', 'prevent_reuse'], default=0),
'pw_expire': dict(type='bool', aliases=['password_expire', 'expire'], default=False),
},
supports_check_mode=True,
)
resource = IAMConnection(module)
policy = resource.connection.AccountPasswordPolicy()
state = module.params.get('state')
if state == 'present':
update_result = resource.update_password_policy(module, policy)
module.exit_json(changed=True, task_status={'IAM': update_result})
if state == 'absent':
delete_result = resource.delete_password_policy(policy)
module.exit_json(changed=True, task_status={'IAM': delete_result})
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,806 |
CNOS modules showing as "supported" when it shouldn't
|
##### COMPONENT NAME
cnos_linkagg
https://docs.ansible.com/ansible/devel/modules/network_maintained.html#network-supported
Specifically:
https://docs.ansible.com/ansible/latest/modules/cnos_linkagg_module.html#cnos-linkagg-module
should be modified to be community supported (not fully supported).
Thanks!
|
https://github.com/ansible/ansible/issues/59806
|
https://github.com/ansible/ansible/pull/59878
|
f1fd13c0efdfb901c82517699781ec36b47c8134
|
eb15ee91dfdb28c77060b332bb5461e9c7015d4d
| 2019-07-30T18:14:24Z |
python
| 2019-07-31T18:58:24Z |
lib/ansible/modules/network/cnos/cnos_linkagg.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
#
# Copyright (C) 2017 Lenovo, Inc.
# (c) 2017, Ansible by Red Hat, inc
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
# Module to work on Link Aggregation with Lenovo Switches
# Lenovo Networking
#
ANSIBLE_METADATA = {'metadata_version': '1.1',
                    'status': ['preview'],
                    'supported_by': 'community'}
DOCUMENTATION = """
---
module: cnos_linkagg
version_added: "2.8"
author: "Anil Kumar Muraleedharan (@auraleedhar)"
short_description: Manage link aggregation groups on Lenovo CNOS devices
description:
- This module provides declarative management of link aggregation groups
on Lenovo CNOS network devices.
notes:
- Tested against CNOS 10.8.1
options:
group:
description:
      - Channel-group number for the port-channel link aggregation group. Range 1-255.
mode:
description:
- Mode of the link aggregation group.
choices: ['active', 'on', 'passive']
members:
description:
- List of members of the link aggregation group.
aggregate:
description: List of link aggregation definitions.
state:
description:
- State of the link aggregation group.
default: present
choices: ['present', 'absent']
purge:
description:
- Purge links not defined in the I(aggregate) parameter.
type: bool
default: no
provider:
description:
- B(Deprecated)
- "Starting with Ansible 2.5 we recommend using C(connection: network_cli)."
- For more information please see the L(CNOS Platform Options guide, ../network/user_guide/platform_cnos.html).
- HORIZONTALLINE
- A dict object containing connection details.
version_added: "2.8"
suboptions:
host:
description:
- Specifies the DNS host name or address for connecting to the remote
device over the specified transport. The value of host is used as
the destination address for the transport.
required: true
port:
description:
- Specifies the port to use when building the connection to the remote device.
default: 22
username:
description:
- Configures the username to use to authenticate the connection to
the remote device. This value is used to authenticate
the SSH session. If the value is not specified in the task, the
value of environment variable C(ANSIBLE_NET_USERNAME) will be used instead.
password:
description:
- Specifies the password to use to authenticate the connection to
the remote device. This value is used to authenticate
the SSH session. If the value is not specified in the task, the
value of environment variable C(ANSIBLE_NET_PASSWORD) will be used instead.
timeout:
description:
- Specifies the timeout in seconds for communicating with the network device
for either connecting or sending commands. If the timeout is
exceeded before the operation is completed, the module will error.
default: 10
ssh_keyfile:
description:
- Specifies the SSH key to use to authenticate the connection to
the remote device. This value is the path to the
key used to authenticate the SSH session. If the value is not specified
in the task, the value of environment variable C(ANSIBLE_NET_SSH_KEYFILE)
will be used instead.
authorize:
description:
- Instructs the module to enter privileged mode on the remote device
before sending any commands. If not specified, the device will
attempt to execute all commands in non-privileged mode. If the value
is not specified in the task, the value of environment variable
C(ANSIBLE_NET_AUTHORIZE) will be used instead.
type: bool
default: 'no'
auth_pass:
description:
- Specifies the password to use if required to enter privileged mode
on the remote device. If I(authorize) is false, then this argument
does nothing. If the value is not specified in the task, the value of
environment variable C(ANSIBLE_NET_AUTH_PASS) will be used instead.
"""
EXAMPLES = """
- name: create link aggregation group
cnos_linkagg:
group: 10
state: present
- name: delete link aggregation group
cnos_linkagg:
group: 10
state: absent
- name: set link aggregation group to members
cnos_linkagg:
group: 200
mode: active
members:
- Ethernet1/33
- Ethernet1/44
- name: remove link aggregation group from GigabitEthernet0/0
cnos_linkagg:
group: 200
mode: active
members:
- Ethernet1/33
- name: Create aggregate of linkagg definitions
cnos_linkagg:
aggregate:
- { group: 3, mode: on, members: [Ethernet1/33] }
- { group: 100, mode: passive, members: [Ethernet1/44] }
"""
RETURN = """
commands:
description: The list of configuration mode commands to send to the device
returned: always, except for the platforms that use Netconf transport to
manage the device.
type: list
sample:
- interface port-channel 30
- interface Ethernet1/33
- channel-group 30 mode on
- no interface port-channel 30
"""
import re
from copy import deepcopy
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.network.common.config import CustomNetworkConfig
from ansible.module_utils.network.common.utils import remove_default_spec
from ansible.module_utils.network.cnos.cnos import get_config, load_config
from ansible.module_utils.network.cnos.cnos import cnos_argument_spec
def search_obj_in_list(group, lst):
for o in lst:
if o['group'] == group:
return o
def map_obj_to_commands(updates, module):
commands = list()
want, have = updates
purge = module.params['purge']
for w in want:
group = w['group']
mode = w['mode']
members = w.get('members') or []
state = w['state']
del w['state']
obj_in_have = search_obj_in_list(group, have)
if state == 'absent':
if obj_in_have:
commands.append('no interface port-channel {0}'.format(group))
elif state == 'present':
cmd = ['interface port-channel {0}'.format(group),
'exit']
if not obj_in_have:
if not group:
module.fail_json(msg='group is a required option')
commands.extend(cmd)
if members:
for m in members:
commands.append('interface {0}'.format(m))
commands.append('channel-group {0} mode {1}'.format(group, mode))
else:
if members:
if 'members' not in obj_in_have.keys():
for m in members:
commands.extend(cmd)
commands.append('interface {0}'.format(m))
commands.append('channel-group {0} mode {1}'.format(group, mode))
elif set(members) != set(obj_in_have['members']):
missing_members = list(set(members) - set(obj_in_have['members']))
for m in missing_members:
commands.extend(cmd)
commands.append('interface {0}'.format(m))
commands.append('channel-group {0} mode {1}'.format(group, mode))
superfluous_members = list(set(obj_in_have['members']) - set(members))
for m in superfluous_members:
commands.extend(cmd)
commands.append('interface {0}'.format(m))
commands.append('no channel-group')
if purge:
for h in have:
obj_in_want = search_obj_in_list(h['group'], want)
if not obj_in_want:
commands.append('no interface port-channel {0}'.format(h['group']))
return commands
def map_params_to_obj(module):
obj = []
aggregate = module.params.get('aggregate')
if aggregate:
for item in aggregate:
for key in item:
if item.get(key) is None:
item[key] = module.params[key]
d = item.copy()
d['group'] = str(d['group'])
obj.append(d)
else:
obj.append({
'group': str(module.params['group']),
'mode': module.params['mode'],
'members': module.params['members'],
'state': module.params['state']
})
return obj
def parse_mode(module, config, group, member):
mode = None
netcfg = CustomNetworkConfig(indent=1, contents=config)
parents = ['interface {0}'.format(member)]
body = netcfg.get_section(parents)
match_int = re.findall(r'interface {0}\n'.format(member), body, re.M)
if match_int:
match = re.search(r'channel-group {0} mode (\S+)'.format(group),
body, re.M)
if match:
mode = match.group(1)
return mode
def parse_members(module, config, group):
members = []
for line in config.strip().split('!'):
l = line.strip()
if l.startswith('interface'):
match_group = re.findall(r'channel-group {0} mode'.format(group), l, re.M)
if match_group:
match = re.search(r'interface (\S+)', l, re.M)
if match:
members.append(match.group(1))
return members
def get_channel(module, config, group):
match = re.findall(r'^interface (\S+)', config, re.M)
if not match:
return {}
channel = {}
for item in set(match):
member = item
channel['mode'] = parse_mode(module, config, group, member)
channel['members'] = parse_members(module, config, group)
return channel
def map_config_to_obj(module):
objs = list()
config = get_config(module)
for line in config.split('\n'):
l = line.strip()
match = re.search(r'interface port-channel(\S+)', l, re.M)
if match:
obj = {}
group = match.group(1)
obj['group'] = group
obj.update(get_channel(module, config, group))
objs.append(obj)
return objs
def main():
""" main entry point for module execution
"""
element_spec = dict(
group=dict(type='int'),
mode=dict(choices=['active', 'on', 'passive']),
members=dict(type='list'),
state=dict(default='present',
choices=['present', 'absent'])
)
aggregate_spec = deepcopy(element_spec)
aggregate_spec['group'] = dict(required=True)
required_one_of = [['group', 'aggregate']]
required_together = [['members', 'mode']]
mutually_exclusive = [['group', 'aggregate']]
# remove default in aggregate spec, to handle common arguments
remove_default_spec(aggregate_spec)
argument_spec = dict(
aggregate=dict(type='list', elements='dict', options=aggregate_spec,
required_together=required_together),
purge=dict(default=False, type='bool')
)
argument_spec.update(element_spec)
argument_spec.update(cnos_argument_spec)
module = AnsibleModule(argument_spec=argument_spec,
required_one_of=required_one_of,
required_together=required_together,
mutually_exclusive=mutually_exclusive,
supports_check_mode=True)
warnings = list()
result = {'changed': False}
if warnings:
result['warnings'] = warnings
want = map_params_to_obj(module)
have = map_config_to_obj(module)
commands = map_obj_to_commands((want, have), module)
result['commands'] = commands
if commands:
if not module.check_mode:
load_config(module, commands)
result['changed'] = True
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,549 |
ansible_net_model is not populated when "C" in Cisco is lowercase in `show version`
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When the "C" in Cisco is lowercase in `show version` the `anislbe_net_model` is not being populated.
Example from show version:
~~~
somehostname#show version
Cisco IOS Software, C3560E Software (C3560E-UNIVERSALK9-M), Version 15.2(2)E9, RELEASE SOFTWARE (fc4)
Technical Support: http://www.cisco.com/techsupport
Copyright (c) 1986-2018 by Cisco Systems, Inc.
Compiled Sat 08-Sep-18 16:28 by fubar
ROM: Bootstrap program is C3560E boot loader
BOOTLDR: C3560E Boot Loader (C3560X-HBOOT-M) Version 15.2(3r)E, RELEASE SOFTWARE (fc1)
somehostname uptime is 10 weeks, 2 days, 10 hours, 52 minutes
System returned to ROM by power-on
System restarted at 07:04:16 GMT Mon May 13 2019
System image file is "flash:/c3560e-universalk9-mz.152-2.E9.bin"
Last reload reason: Reload command
<snip>
License Level: ipbase
License Type: Permanent
Next reload license Level: ipbase
cisco WS-C3560X-48P (PowerPC405) processor (revision W0) with 262144K bytes of memory.
Processor board ID FDO1944F34G
Last reset from power-on
1 Virtual Ethernet interface
1 FastEthernet interface
52 Gigabit Ethernet interfaces
2 Ten Gigabit Ethernet interfaces
The password-recovery mechanism is enabled.
512K bytes of flash-simulated non-volatile configuration memory.
Base ethernet MAC Address : XX:XX:XX:XX:XX:XX
Motherboard assembly number : XX-XXXXX-XX
Motherboard serial number : XXXXXXXXXXX
Model revision number : W0
Motherboard revision number : A0
Model number : WS-C3560X-48P-S
Daughterboard assembly number : 800-32786-02
Daughterboard serial number : FDO194406YV
System serial number : FDO1944F34G
Top Assembly Part Number : 800-38995-01
Top Assembly Revision Number : B0
Version ID : V07
CLEI Code Number : CMMPT00DRB
Hardware Board Revision Number : 0x05
Switch Ports Model SW Version SW Image
------ ----- ----- ---------- ----------
* 1 54 WS-C3560X-48P 15.2(2)E9 C3560E-UNIVERSALK9-M
Configuration register is 0xF
somehostname#
~~~
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ios_facts
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/u346989/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /bin/ansible
python version = 2.7.5 (default, Jun 11 2019, 12:19:05) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_NOCOWS(/etc/ansible/ansible.cfg) = True
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 6
DEFAULT_ROLES_PATH(/etc/ansible/ansible.cfg) = [u'/etc/ansible/roles', u'/usr/share/ansible/roles']
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
PERSISTENT_COMMAND_TIMEOUT(/etc/ansible/ansible.cfg) = 900
PERSISTENT_CONNECT_RETRY_TIMEOUT(/etc/ansible/ansible.cfg) = 120
PERSISTENT_CONNECT_TIMEOUT(/etc/ansible/ansible.cfg) = 900
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
```
Switch Ports Model SW Version SW Image
------ ----- ----- ---------- ----------
* 1 54 WS-C3560X-48P 15.2(2)E9 C3560E-UNIVERSALK9-M
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: all
connection: network_cli
gather_facts: no
tasks:
- name: gather facts from IOS device
ios_facts:
gather_subset: all
- name: Debug ansible_net_model
debug:
var: ansible_net_model
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Match "cisco" with a lowercase "c" as well, so that `ansible_net_model` is populated.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [Debug ansible_net_model] *******************************************************************
ok: [<somehostname>] => {
"ansible_net_model": "VARIABLE IS NOT DEFINED!"
}
```
PR: https://github.com/ansible/ansible/pull/59550
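The fix amounts to accepting either case on the leading word when extracting the model. A minimal sketch (the exact pattern used in ios_facts may differ):

```python
import re

# Line from the 'show version' output above; note the lowercase "cisco".
data = ("cisco WS-C3560X-48P (PowerPC405) processor (revision W0) "
        "with 262144K bytes of memory.")

# [Cc] accepts either case, so both "Cisco ..." and "cisco ..." lines match.
match = re.search(r'[Cc]isco (\S+).+bytes of .*memory', data)
if match:
    print(match.group(1))  # WS-C3560X-48P
```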
|
https://github.com/ansible/ansible/issues/59549
|
https://github.com/ansible/ansible/pull/59550
|
eea46a0d1b99a6dadedbb6a3502d599235fa7ec3
|
4eb156b2f5ac49583f581a4ba39a6397f234e399
| 2019-07-24T19:17:22Z |
python
| 2019-08-01T13:27:27Z |
lib/ansible/plugins/cliconf/ios.py
|
#
# (c) 2017 Red Hat Inc.
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = """
---
author: Ansible Networking Team
cliconf: ios
short_description: Use ios cliconf to run command on Cisco IOS platform
description:
- This ios plugin provides low level abstraction apis for
sending and receiving CLI commands from Cisco IOS network devices.
version_added: "2.4"
"""
import re
import time
import json
from itertools import chain
from ansible.errors import AnsibleConnectionFailure
from ansible.module_utils._text import to_text
from ansible.module_utils.common._collections_compat import Mapping
from ansible.module_utils.six import iteritems
from ansible.module_utils.network.common.config import NetworkConfig, dumps
from ansible.module_utils.network.common.utils import to_list
from ansible.plugins.cliconf import CliconfBase, enable_mode
class Cliconf(CliconfBase):
@enable_mode
def get_config(self, source='running', flags=None, format=None):
if source not in ('running', 'startup'):
raise ValueError("fetching configuration from %s is not supported" % source)
if format:
raise ValueError("'format' value %s is not supported for get_config" % format)
if not flags:
flags = []
if source == 'running':
cmd = 'show running-config '
else:
cmd = 'show startup-config '
cmd += ' '.join(to_list(flags))
cmd = cmd.strip()
return self.send_command(cmd)
def get_diff(self, candidate=None, running=None, diff_match='line', diff_ignore_lines=None, path=None, diff_replace='line'):
"""
        Generate diff between candidate and running configuration. If the
        remote host supports onbox diff capabilities (i.e. supports_onbox_diff),
        candidate and running configurations are not required to be passed as
        arguments. If onbox diff capability is not supported, the candidate
        argument is mandatory and the running argument is optional.
:param candidate: The configuration which is expected to be present on remote host.
:param running: The base configuration which is used to generate diff.
:param diff_match: Instructs how to match the candidate configuration with current device configuration
Valid values are 'line', 'strict', 'exact', 'none'.
'line' - commands are matched line by line
'strict' - command lines are matched with respect to position
'exact' - command lines must be an equal match
'none' - will not compare the candidate configuration with the running configuration
:param diff_ignore_lines: Use this argument to specify one or more lines that should be
ignored during the diff. This is used for lines in the configuration
that are automatically updated by the system. This argument takes
a list of regular expressions or exact line matches.
:param path: The ordered set of parents that uniquely identify the section or hierarchy
the commands should be checked against. If the parents argument
is omitted, the commands are checked against the set of top
level or global commands.
:param diff_replace: Instructs on the way to perform the configuration on the device.
If the replace argument is set to I(line) then the modified lines are
pushed to the device in configuration mode. If the replace argument is
set to I(block) then the entire command block is pushed to the device in
configuration mode if any line is not correct.
:return: Configuration diff in json format.
{
'config_diff': '',
'banner_diff': {}
}
"""
diff = {}
device_operations = self.get_device_operations()
option_values = self.get_option_values()
if candidate is None and device_operations['supports_generate_diff']:
raise ValueError("candidate configuration is required to generate diff")
if diff_match not in option_values['diff_match']:
raise ValueError("'match' value %s is invalid, valid values are %s" % (diff_match, ', '.join(option_values['diff_match'])))
if diff_replace not in option_values['diff_replace']:
raise ValueError("'replace' value %s is invalid, valid values are %s" % (diff_replace, ', '.join(option_values['diff_replace'])))
# prepare candidate configuration
candidate_obj = NetworkConfig(indent=1)
want_src, want_banners = self._extract_banners(candidate)
candidate_obj.load(want_src)
if running and diff_match != 'none':
# running configuration
have_src, have_banners = self._extract_banners(running)
running_obj = NetworkConfig(indent=1, contents=have_src, ignore_lines=diff_ignore_lines)
configdiffobjs = candidate_obj.difference(running_obj, path=path, match=diff_match, replace=diff_replace)
else:
configdiffobjs = candidate_obj.items
have_banners = {}
diff['config_diff'] = dumps(configdiffobjs, 'commands') if configdiffobjs else ''
banners = self._diff_banners(want_banners, have_banners)
diff['banner_diff'] = banners if banners else {}
return diff
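As an illustration of the `diff_match='line'` mode described in the docstring, here is a flat, standalone sketch. Note this is only an approximation: `NetworkConfig` matches lines within their parent hierarchy, which this deliberately ignores.

```python
def line_diff(candidate, running):
    # Flat sketch of diff_match='line': any candidate line missing from
    # the running config is part of the diff (real NetworkConfig also
    # tracks parent/child section context).
    have = set(running.splitlines())
    return [line for line in candidate.splitlines() if line and line not in have]

diff = line_diff("hostname r1\nip domain-name example.com\n",
                 "hostname r1\n")
# → ['ip domain-name example.com']
```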
@enable_mode
def edit_config(self, candidate=None, commit=True, replace=None, comment=None):
resp = {}
operations = self.get_device_operations()
self.check_edit_config_capability(operations, candidate, commit, replace, comment)
results = []
requests = []
if commit:
self.send_command('configure terminal')
for line in to_list(candidate):
if not isinstance(line, Mapping):
line = {'command': line}
cmd = line['command']
if cmd != 'end' and cmd[0] != '!':
results.append(self.send_command(**line))
requests.append(cmd)
self.send_command('end')
else:
raise ValueError('check mode is not supported')
resp['request'] = requests
resp['response'] = results
return resp
def edit_macro(self, candidate=None, commit=True, replace=None, comment=None):
resp = {}
operations = self.get_device_operations()
self.check_edit_config_capability(operations, candidate, commit, replace, comment)
results = []
requests = []
if commit:
commands = ''
for line in candidate:
if line != 'None':
commands += (' ' + line + '\n')
self.send_command('config terminal', sendonly=True)
obj = {'command': commands, 'sendonly': True}
results.append(self.send_command(**obj))
requests.append(commands)
self.send_command('end', sendonly=True)
time.sleep(0.1)
results.append(self.send_command('\n'))
requests.append('\n')
resp['request'] = requests
resp['response'] = results
return resp
def get(self, command=None, prompt=None, answer=None, sendonly=False, output=None, newline=True, check_all=False):
if not command:
raise ValueError('must provide value of command to execute')
if output:
raise ValueError("'output' value %s is not supported for get" % output)
return self.send_command(command=command, prompt=prompt, answer=answer, sendonly=sendonly, newline=newline, check_all=check_all)
def get_device_info(self):
device_info = {}
device_info['network_os'] = 'ios'
reply = self.get(command='show version')
data = to_text(reply, errors='surrogate_or_strict').strip()
match = re.search(r'Version (\S+)', data)
if match:
device_info['network_os_version'] = match.group(1).strip(',')
model_search_strs = [r'^Cisco (.+) \(revision', r'^[Cc]isco (\S+).+bytes of .*memory']
for item in model_search_strs:
match = re.search(item, data, re.M)
if match:
version = match.group(1).split(' ')
device_info['network_os_model'] = version[0]
break
match = re.search(r'^(.+) uptime', data, re.M)
if match:
device_info['network_os_hostname'] = match.group(1)
match = re.search(r'image file is "(.+)"', data)
if match:
device_info['network_os_image'] = match.group(1)
return device_info
def get_device_operations(self):
return {
'supports_diff_replace': True,
'supports_commit': False,
'supports_rollback': False,
'supports_defaults': True,
'supports_onbox_diff': False,
'supports_commit_comment': False,
'supports_multiline_delimiter': True,
'supports_diff_match': True,
'supports_diff_ignore_lines': True,
'supports_generate_diff': True,
'supports_replace': False
}
def get_option_values(self):
return {
'format': ['text'],
'diff_match': ['line', 'strict', 'exact', 'none'],
'diff_replace': ['line', 'block'],
'output': []
}
def get_capabilities(self):
result = super(Cliconf, self).get_capabilities()
result['rpc'] += ['edit_banner', 'get_diff', 'run_commands', 'get_defaults_flag']
result['device_operations'] = self.get_device_operations()
result.update(self.get_option_values())
return json.dumps(result)
def edit_banner(self, candidate=None, multiline_delimiter="@", commit=True):
"""
Edit banner on remote device
:param candidate: Banners to be loaded in json format
:param multiline_delimiter: Line delimiter for banner
:param commit: Boolean value that indicates if the device candidate
configuration should be pushed into the running configuration or discarded.
:param diff: Boolean flag to indicate if the configuration applied on the remote host should be
generated and returned in the response or not
:return: Returns the response of executing the configuration commands received
from the remote host
"""
resp = {}
banners_obj = json.loads(candidate)
results = []
requests = []
if commit:
for key, value in iteritems(banners_obj):
key += ' %s' % multiline_delimiter
self.send_command('config terminal', sendonly=True)
for cmd in [key, value, multiline_delimiter]:
obj = {'command': cmd, 'sendonly': True}
results.append(self.send_command(**obj))
requests.append(cmd)
self.send_command('end', sendonly=True)
time.sleep(0.1)
results.append(self.send_command('\n'))
requests.append('\n')
resp['request'] = requests
resp['response'] = results
return resp
def run_commands(self, commands=None, check_rc=True):
if commands is None:
raise ValueError("'commands' value is required")
responses = list()
for cmd in to_list(commands):
if not isinstance(cmd, Mapping):
cmd = {'command': cmd}
output = cmd.pop('output', None)
if output:
raise ValueError("'output' value %s is not supported for run_commands" % output)
try:
out = self.send_command(**cmd)
except AnsibleConnectionFailure as e:
if check_rc:
raise
out = getattr(e, 'err', to_text(e))
responses.append(out)
return responses
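The string-to-dict normalization at the top of the loop above can be sketched standalone (without ansible's `to_list`/`Mapping` helpers; the function name is illustrative):

```python
def normalize_commands(commands):
    # Mirrors run_commands' wrapping step: bare strings become
    # {'command': <str>} dicts, dict entries pass through unchanged.
    normalized = []
    for cmd in commands:
        if not isinstance(cmd, dict):
            cmd = {'command': cmd}
        normalized.append(cmd)
    return normalized

cmds = normalize_commands(['show version',
                           {'command': 'show ip int brief', 'prompt': None}])
# cmds[0] == {'command': 'show version'}
```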
def get_defaults_flag(self):
"""
The method identifies the filter that should be used to fetch running-configuration
with defaults.
:return: valid default filter
"""
out = self.get('show running-config ?')
out = to_text(out, errors='surrogate_then_replace')
commands = set()
for line in out.splitlines():
if line.strip():
commands.add(line.strip().split()[0])
if 'all' in commands:
return 'all'
else:
return 'full'
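The parsing step in `get_defaults_flag` can be exercised without a device connection; this standalone sketch (illustrative name) applies the same first-token scan to sample `show running-config ?` help output:

```python
def defaults_flag(help_text):
    # Collect the first token of each non-blank help line and prefer
    # the 'all' keyword when the platform advertises it.
    commands = set()
    for line in help_text.splitlines():
        if line.strip():
            commands.add(line.strip().split()[0])
    return 'all' if 'all' in commands else 'full'

print(defaults_flag("  all    Show with defaults\n  brief  Brief output\n"))
# → all
```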
def _extract_banners(self, config):
banners = {}
banner_cmds = re.findall(r'^banner (\w+)', config, re.M)
for cmd in banner_cmds:
regex = r'banner %s \^C(.+?)(?=\^C)' % cmd
match = re.search(regex, config, re.S)
if match:
key = 'banner %s' % cmd
banners[key] = match.group(1).strip()
for cmd in banner_cmds:
regex = r'banner %s \^C(.+?)(?=\^C)' % cmd
match = re.search(regex, config, re.S)
if match:
config = config.replace(str(match.group(1)), '')
config = re.sub(r'banner \w+ \^C\^C', '!! banner removed', config)
return config, banners
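The banner-extraction regexes can also be tried outside a live session; this standalone sketch copies the pattern logic from `_extract_banners` above (folding the two identical search loops into one):

```python
import re

def extract_banners(config):
    # Pull "banner <type> ^C ... ^C" blocks out of a config and replace
    # each with a "!! banner removed" marker, returning both pieces.
    banners = {}
    for cmd in re.findall(r'^banner (\w+)', config, re.M):
        match = re.search(r'banner %s \^C(.+?)(?=\^C)' % cmd, config, re.S)
        if match:
            banners['banner %s' % cmd] = match.group(1).strip()
            config = config.replace(str(match.group(1)), '')
    config = re.sub(r'banner \w+ \^C\^C', '!! banner removed', config)
    return config, banners

stripped, banners = extract_banners('hostname r1\nbanner motd ^C\nAuthorized only\n^C\n')
# banners == {'banner motd': 'Authorized only'}
```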
def _diff_banners(self, want, have):
candidate = {}
for key, value in iteritems(want):
if value != have.get(key):
candidate[key] = value
return candidate
status: closed
repo: ansible/ansible (https://github.com/ansible/ansible)
issue: 59195
title: ansible-galaxy import broken on master after merge of collection branch
##### SUMMARY
`ansible-galaxy import` is broken, potentially by the introduction of the `collection` subcommand? Fails early trying to read CLI args...
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`ansible-galaxy`
##### ANSIBLE VERSION
```
ansible 2.9.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/calvin/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/calvin/projects/ansible/lib/ansible
executable location = /home/calvin/.local/share/virtualenvs/orion/bin/ansible
python version = 3.6.8 (default, Mar 21 2019, 10:08:12) [GCC 8.3.1 20190223 (Red Hat 8.3.1-2)]
```
##### CONFIGURATION
```
(empty)
```
##### OS / ENVIRONMENT
##### STEPS TO REPRODUCE
Simply invoke the `import` subcommand on any repository:
```
ansible-galaxy import -vvv --server=http://127.0.0.1:8000 orionuser1 ansible-testing-content
```
##### EXPECTED RESULTS
An import to start, to observe the import log, and be given the results.
##### ACTUAL RESULTS
```
ansible-galaxy 2.9.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/calvin/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/calvin/projects/ansible/lib/ansible
executable location = /home/calvin/.local/share/virtualenvs/orion/bin/ansible-galaxy
python version = 3.6.8 (default, Mar 21 2019, 10:08:12) [GCC 8.3.1 20190223 (Red Hat 8.3.1-2)]
Using /etc/ansible/ansible.cfg as config file
Opened /home/calvin/.ansible_galaxy
ERROR! Unexpected Exception, this is probably a bug: 'args'
the full traceback was:
Traceback (most recent call last):
File "/home/calvin/projects/ansible/bin/ansible-galaxy", line 111, in <module>
exit_code = cli.run()
File "/home/calvin/projects/ansible/lib/ansible/cli/galaxy.py", line 268, in run
context.CLIARGS['func']()
File "/home/calvin/projects/ansible/lib/ansible/cli/galaxy.py", line 832, in execute_import
if len(context.CLIARGS['args']) < 2:
File "/home/calvin/projects/ansible/lib/ansible/module_utils/common/collections.py", line 20, in __getitem__
return self._store[key]
KeyError: 'args'
```
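The `KeyError: 'args'` above can be reproduced with a minimal argparse sketch (assumed structure mirroring `galaxy.py` below; this is an illustration, not the actual fix from the linked PR): the `import` subparser inherits its positionals named `github_user`/`github_repo` from a shared parent parser, so the parsed namespace never carries an `args` key for `execute_import` to read.

```python
import argparse

# Minimal repro: positionals come from the shared user_repo parent,
# so no 'args' attribute ever lands in the parsed namespace.
user_repo = argparse.ArgumentParser(add_help=False)
user_repo.add_argument('github_user', help='GitHub username')
user_repo.add_argument('github_repo', help='GitHub repository')

parser = argparse.ArgumentParser(prog='ansible-galaxy')
subparsers = parser.add_subparsers(dest='action')
subparsers.add_parser('import', parents=[user_repo])

ns = parser.parse_args(['import', 'orionuser1', 'ansible-testing-content'])
assert ns.github_user == 'orionuser1'
assert not hasattr(ns, 'args')  # so CLIARGS['args'] raises KeyError
```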
issue_url: https://github.com/ansible/ansible/issues/59195
pull_url: https://github.com/ansible/ansible/pull/59898
before_fix_sha: 20b5ff5ab7fdc1fca7c6c402f5447cf4cfdd9c33
after_fix_sha: 88e34491895ec0f4241ef2284b5711d06b878499
report_datetime: 2019-07-17T18:21:14Z
language: python
commit_datetime: 2019-08-01T21:31:28Z
updated_file: lib/ansible/cli/galaxy.py
# Copyright: (c) 2013, James Cammarata <[email protected]>
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os.path
import re
import shutil
import textwrap
import time
import yaml
from jinja2 import BaseLoader, Environment, FileSystemLoader
import ansible.constants as C
from ansible import context
from ansible.cli import CLI
from ansible.cli.arguments import option_helpers as opt_help
from ansible.errors import AnsibleError, AnsibleOptionsError
from ansible.galaxy import Galaxy, get_collections_galaxy_meta_info
from ansible.galaxy.api import GalaxyAPI
from ansible.galaxy.collection import build_collection, install_collections, parse_collections_requirements_file, \
publish_collection
from ansible.galaxy.login import GalaxyLogin
from ansible.galaxy.role import GalaxyRole
from ansible.galaxy.token import GalaxyToken
from ansible.module_utils.ansible_release import __version__ as ansible_version
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.playbook.role.requirement import RoleRequirement
from ansible.utils.collection_loader import is_collection_ref
from ansible.utils.display import Display
from ansible.utils.plugin_docs import get_versioned_doclink
display = Display()
class GalaxyCLI(CLI):
'''command to manage Ansible roles in shared repositories, the default of which is Ansible Galaxy *https://galaxy.ansible.com*.'''
SKIP_INFO_KEYS = ("name", "description", "readme_html", "related", "summary_fields", "average_aw_composite", "average_aw_score", "url")
def __init__(self, args):
# Inject role into sys.argv[1] as a backwards compatibility step
if len(args) > 1 and args[1] not in ['-h', '--help'] and 'role' not in args and 'collection' not in args:
# TODO: Should we add a warning here and eventually deprecate the implicit role subcommand choice
args.insert(1, 'role')
self.api = None
self.galaxy = None
super(GalaxyCLI, self).__init__(args)
def init_parser(self):
''' create an options parser for bin/ansible '''
super(GalaxyCLI, self).init_parser(
desc="Perform various Role and Collection related operations.",
)
# common
common = opt_help.argparse.ArgumentParser(add_help=False)
common.add_argument('-s', '--server', dest='api_server', default=C.GALAXY_SERVER, help='The API server destination')
common.add_argument('-c', '--ignore-certs', action='store_true', dest='ignore_certs', default=C.GALAXY_IGNORE_CERTS,
help='Ignore SSL certificate validation errors.')
opt_help.add_verbosity_options(common)
# options that apply to more than one action
user_repo = opt_help.argparse.ArgumentParser(add_help=False)
user_repo.add_argument('github_user', help='GitHub username')
user_repo.add_argument('github_repo', help='GitHub repository')
offline = opt_help.argparse.ArgumentParser(add_help=False)
offline.add_argument('--offline', dest='offline', default=False, action='store_true',
help="Don't query the galaxy API when creating roles")
default_roles_path = C.config.get_configuration_definition('DEFAULT_ROLES_PATH').get('default', '')
roles_path = opt_help.argparse.ArgumentParser(add_help=False)
roles_path.add_argument('-p', '--roles-path', dest='roles_path', type=opt_help.unfrack_path(pathsep=True),
default=C.DEFAULT_ROLES_PATH, action=opt_help.PrependListAction,
help='The path to the directory containing your roles. The default is the first writable one '
'configured via DEFAULT_ROLES_PATH: %s ' % default_roles_path)
force = opt_help.argparse.ArgumentParser(add_help=False)
force.add_argument('-f', '--force', dest='force', action='store_true', default=False,
help='Force overwriting an existing role or collection')
# Add sub parser for the Galaxy role type (role or collection)
type_parser = self.parser.add_subparsers(metavar='TYPE', dest='type')
type_parser.required = True
# Define the actions for the collection object type
collection = type_parser.add_parser('collection',
parents=[common],
help='Manage an Ansible Galaxy collection.')
collection_parser = collection.add_subparsers(metavar='ACTION', dest='collection')
collection_parser.required = True
build_parser = collection_parser.add_parser(
'build', help='Build an Ansible collection artifact that can be published to Ansible Galaxy.',
parents=[common, force])
build_parser.set_defaults(func=self.execute_build)
build_parser.add_argument(
'args', metavar='collection', nargs='*', default=('./',),
help='Path to the collection(s) directory to build. This should be the directory that contains the '
'galaxy.yml file. The default is the current working directory.')
build_parser.add_argument(
'--output-path', dest='output_path', default='./',
help='The path in which the collection is built. The default is the current working directory.')
self.add_init_parser(collection_parser, [common, force])
cinstall_parser = collection_parser.add_parser('install', help='Install collection from Ansible Galaxy',
parents=[force, common])
cinstall_parser.set_defaults(func=self.execute_install)
cinstall_parser.add_argument('args', metavar='collection_name', nargs='*',
help='The collection(s) name or path/url to a tar.gz collection artifact. This '
'is mutually exclusive with --requirements-file.')
cinstall_parser.add_argument('-p', '--collections-path', dest='collections_path', required=True,
help='The path to the directory containing your collections.')
cinstall_parser.add_argument('-i', '--ignore-errors', dest='ignore_errors', action='store_true', default=False,
help='Ignore errors during installation and continue with the next specified '
'collection. This will not ignore dependency conflict errors.')
cinstall_parser.add_argument('-r', '--requirements-file', dest='requirements',
help='A file containing a list of collections to be installed.')
cinstall_exclusive = cinstall_parser.add_mutually_exclusive_group()
cinstall_exclusive.add_argument('-n', '--no-deps', dest='no_deps', action='store_true', default=False,
help="Don't download collections listed as dependencies")
cinstall_exclusive.add_argument('--force-with-deps', dest='force_with_deps', action='store_true', default=False,
help="Force overwriting an existing collection and its dependencies")
publish_parser = collection_parser.add_parser(
'publish', help='Publish a collection artifact to Ansible Galaxy.',
parents=[common])
publish_parser.set_defaults(func=self.execute_publish)
publish_parser.add_argument(
'args', metavar='collection_path', help='The path to the collection tarball to publish.')
publish_parser.add_argument(
'--api-key', dest='api_key',
help='The Ansible Galaxy API key which can be found at https://galaxy.ansible.com/me/preferences. '
'You can also use ansible-galaxy login to retrieve this key.')
publish_parser.add_argument(
'--no-wait', dest='wait', action='store_false', default=True,
help="Don't wait for import validation results.")
# Define the actions for the role object type
role = type_parser.add_parser('role',
parents=[common],
help='Manage an Ansible Galaxy role.')
role_parser = role.add_subparsers(metavar='ACTION', dest='role')
role_parser.required = True
delete_parser = role_parser.add_parser('delete', parents=[user_repo, common],
help='Removes the role from Galaxy. It does not remove or alter the actual GitHub repository.')
delete_parser.set_defaults(func=self.execute_delete)
import_parser = role_parser.add_parser('import', help='Import a role', parents=[user_repo, common])
import_parser.set_defaults(func=self.execute_import)
import_parser.add_argument('--no-wait', dest='wait', action='store_false', default=True, help="Don't wait for import results.")
import_parser.add_argument('--branch', dest='reference',
help='The name of a branch to import. Defaults to the repository\'s default branch (usually master)')
import_parser.add_argument('--role-name', dest='role_name', help='The name the role should have, if different than the repo name')
import_parser.add_argument('--status', dest='check_status', action='store_true', default=False,
help='Check the status of the most recent import request for given github_user/github_repo.')
info_parser = role_parser.add_parser('info', help='View more details about a specific role.',
parents=[offline, common, roles_path])
info_parser.set_defaults(func=self.execute_info)
info_parser.add_argument('args', nargs='+', help='role', metavar='role_name[,version]')
rinit_parser = self.add_init_parser(role_parser, [offline, force, common])
rinit_parser.add_argument('--type',
dest='role_type',
action='store',
default='default',
help="Initialize using an alternate role type. Valid types include: 'container', 'apb' and 'network'.")
install_parser = role_parser.add_parser('install', help='Install Roles from file(s), URL(s) or tar file(s)',
parents=[force, common, roles_path])
install_parser.set_defaults(func=self.execute_install)
install_parser.add_argument('-i', '--ignore-errors', dest='ignore_errors', action='store_true', default=False,
help='Ignore errors and continue with the next specified role.')
install_parser.add_argument('-r', '--role-file', dest='role_file', help='A file containing a list of roles to be imported')
install_parser.add_argument('-g', '--keep-scm-meta', dest='keep_scm_meta', action='store_true',
default=False, help='Use tar instead of the scm archive option when packaging the role')
install_parser.add_argument('args', help='Role name, URL or tar file', metavar='role', nargs='*')
install_exclusive = install_parser.add_mutually_exclusive_group()
install_exclusive.add_argument('-n', '--no-deps', dest='no_deps', action='store_true', default=False,
help="Don't download roles listed as dependencies")
install_exclusive.add_argument('--force-with-deps', dest='force_with_deps', action='store_true', default=False,
help="Force overwriting an existing role and its dependencies")
remove_parser = role_parser.add_parser('remove', help='Delete roles from roles_path.', parents=[common, roles_path])
remove_parser.set_defaults(func=self.execute_remove)
remove_parser.add_argument('args', help='Role(s)', metavar='role', nargs='+')
list_parser = role_parser.add_parser('list', help='Show the name and version of each role installed in the roles_path.',
parents=[common, roles_path])
list_parser.set_defaults(func=self.execute_list)
list_parser.add_argument('role', help='Role', nargs='?', metavar='role')
login_parser = role_parser.add_parser('login', parents=[common],
help="Login to api.github.com server in order to use ansible-galaxy role "
"sub command such as 'import', 'delete', 'publish', and 'setup'")
login_parser.set_defaults(func=self.execute_login)
login_parser.add_argument('--github-token', dest='token', default=None,
help='Identify with github token rather than username and password.')
search_parser = role_parser.add_parser('search', help='Search the Galaxy database by tags, platforms, author and multiple keywords.',
parents=[common])
search_parser.set_defaults(func=self.execute_search)
search_parser.add_argument('--platforms', dest='platforms', help='list of OS platforms to filter by')
search_parser.add_argument('--galaxy-tags', dest='galaxy_tags', help='list of galaxy tags to filter by')
search_parser.add_argument('--author', dest='author', help='GitHub username')
search_parser.add_argument('args', help='Search terms', metavar='searchterm', nargs='*')
setup_parser = role_parser.add_parser('setup', help='Manage the integration between Galaxy and the given source.',
parents=[roles_path, common])
setup_parser.set_defaults(func=self.execute_setup)
setup_parser.add_argument('--remove', dest='remove_id', default=None,
help='Remove the integration matching the provided ID value. Use --list to see ID values.')
setup_parser.add_argument('--list', dest="setup_list", action='store_true', default=False, help='List all of your integrations.')
setup_parser.add_argument('source', help='Source')
setup_parser.add_argument('github_user', help='GitHub username')
setup_parser.add_argument('github_repo', help='GitHub repository')
setup_parser.add_argument('secret', help='Secret')
def add_init_parser(self, parser, parents):
galaxy_type = parser.dest
obj_name_kwargs = {}
if galaxy_type == 'collection':
obj_name_kwargs['type'] = GalaxyCLI._validate_collection_name
init_parser = parser.add_parser('init',
help='Initialize new {0} with the base structure of a {0}.'.format(galaxy_type),
parents=parents)
init_parser.set_defaults(func=self.execute_init)
init_parser.add_argument('--init-path',
dest='init_path',
default='./',
help='The path in which the skeleton {0} will be created. The default is the current working directory.'.format(galaxy_type))
init_parser.add_argument('--{0}-skeleton'.format(galaxy_type),
dest='{0}_skeleton'.format(galaxy_type),
default=C.GALAXY_ROLE_SKELETON,
help='The path to a {0} skeleton that the new {0} should be based upon.'.format(galaxy_type))
init_parser.add_argument('{0}_name'.format(galaxy_type),
help='{0} name'.format(galaxy_type.capitalize()),
**obj_name_kwargs)
return init_parser
def post_process_args(self, options):
options = super(GalaxyCLI, self).post_process_args(options)
display.verbosity = options.verbosity
return options
def run(self):
super(GalaxyCLI, self).run()
self.galaxy = Galaxy()
self.api = GalaxyAPI(self.galaxy)
context.CLIARGS['func']()
@staticmethod
def exit_without_ignore(rc=1):
"""
Exits with the specified return code unless the
option --ignore-errors was specified
"""
if not context.CLIARGS['ignore_errors']:
raise AnsibleError('- you can use --ignore-errors to skip failed roles and finish processing the list.')
@staticmethod
def _display_role_info(role_info):
text = [u"", u"Role: %s" % to_text(role_info['name'])]
text.append(u"\tdescription: %s" % role_info.get('description', ''))
for k in sorted(role_info.keys()):
if k in GalaxyCLI.SKIP_INFO_KEYS:
continue
if isinstance(role_info[k], dict):
text.append(u"\t%s:" % (k))
for key in sorted(role_info[k].keys()):
if key in GalaxyCLI.SKIP_INFO_KEYS:
continue
text.append(u"\t\t%s: %s" % (key, role_info[k][key]))
else:
text.append(u"\t%s: %s" % (k, role_info[k]))
return u'\n'.join(text)
@staticmethod
def _resolve_path(path):
return os.path.abspath(os.path.expanduser(os.path.expandvars(path)))
@staticmethod
def _validate_collection_name(name):
if is_collection_ref('ansible_collections.{0}'.format(name)):
return name
raise AnsibleError("Invalid collection name, must be in the format <namespace>.<collection>")
@staticmethod
def _get_skeleton_galaxy_yml(template_path, inject_data):
with open(to_bytes(template_path, errors='surrogate_or_strict'), 'rb') as template_obj:
meta_template = to_text(template_obj.read(), errors='surrogate_or_strict')
galaxy_meta = get_collections_galaxy_meta_info()
required_config = []
optional_config = []
for meta_entry in galaxy_meta:
config_list = required_config if meta_entry.get('required', False) else optional_config
value = inject_data.get(meta_entry['key'], None)
if not value:
meta_type = meta_entry.get('type', 'str')
if meta_type == 'str':
value = ''
elif meta_type == 'list':
value = []
elif meta_type == 'dict':
value = {}
meta_entry['value'] = value
config_list.append(meta_entry)
link_pattern = re.compile(r"L\(([^)]+),\s+([^)]+)\)")
const_pattern = re.compile(r"C\(([^)]+)\)")
def comment_ify(v):
if isinstance(v, list):
v = ". ".join([l.rstrip('.') for l in v])
v = link_pattern.sub(r"\1 <\2>", v)
v = const_pattern.sub(r"'\1'", v)
return textwrap.fill(v, width=117, initial_indent="# ", subsequent_indent="# ", break_on_hyphens=False)
def to_yaml(v):
return yaml.safe_dump(v, default_flow_style=False).rstrip()
env = Environment(loader=BaseLoader)
env.filters['comment_ify'] = comment_ify
env.filters['to_yaml'] = to_yaml
template = env.from_string(meta_template)
meta_value = template.render({'required_config': required_config, 'optional_config': optional_config})
return meta_value
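The `L()`/`C()` doc-markup rewriting done by the `comment_ify` helper above can be exercised standalone; this sketch copies the two regexes and the wrapping call:

```python
import re
import textwrap

link_pattern = re.compile(r"L\(([^)]+),\s+([^)]+)\)")
const_pattern = re.compile(r"C\(([^)]+)\)")

def comment_ify(v):
    # Turn L(text, url) into "text <url>", C(x) into 'x', then wrap the
    # result as a "# "-prefixed YAML comment block.
    if isinstance(v, list):
        v = ". ".join([l.rstrip('.') for l in v])
    v = link_pattern.sub(r"\1 <\2>", v)
    v = const_pattern.sub(r"'\1'", v)
    return textwrap.fill(v, width=117, initial_indent="# ",
                         subsequent_indent="# ", break_on_hyphens=False)

out = comment_ify("See L(the docs, https://example.com) and set C(name)")
# → "# See the docs <https://example.com> and set 'name'"
```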
############################
# execute actions
############################
def execute_role(self):
"""
Perform the action on an Ansible Galaxy role. Must be combined with a further action like delete/install/init
as listed below.
"""
# To satisfy doc build
pass
def execute_collection(self):
"""
Perform the action on an Ansible Galaxy collection. Must be combined with a further action like init/install as
listed below.
"""
# To satisfy doc build
pass
def execute_build(self):
"""
Build an Ansible Galaxy collection artifact that can be stored in a central repository like Ansible Galaxy.
By default, this command builds from the current working directory. You can optionally pass in the
collection input path (where the ``galaxy.yml`` file is).
"""
force = context.CLIARGS['force']
output_path = GalaxyCLI._resolve_path(context.CLIARGS['output_path'])
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
if not os.path.exists(b_output_path):
os.makedirs(b_output_path)
elif os.path.isfile(b_output_path):
raise AnsibleError("- the output collection directory %s is a file - aborting" % to_native(output_path))
for collection_path in context.CLIARGS['args']:
collection_path = GalaxyCLI._resolve_path(collection_path)
build_collection(collection_path, output_path, force)
def execute_init(self):
"""
Creates the skeleton framework of a role or collection that complies with the Galaxy metadata format.
Requires a role or collection name. The collection name must be in the format ``<namespace>.<collection>``.
"""
galaxy_type = context.CLIARGS['type']
init_path = context.CLIARGS['init_path']
force = context.CLIARGS['force']
obj_skeleton = context.CLIARGS['{0}_skeleton'.format(galaxy_type)]
obj_name = context.CLIARGS['{0}_name'.format(galaxy_type)]
inject_data = dict(
description='your description',
ansible_plugin_list_dir=get_versioned_doclink('plugins/plugins.html'),
)
if galaxy_type == 'role':
inject_data.update(dict(
author='your name',
company='your company (optional)',
license='license (GPL-2.0-or-later, MIT, etc)',
role_name=obj_name,
role_type=context.CLIARGS['role_type'],
issue_tracker_url='http://example.com/issue/tracker',
repository_url='http://example.com/repository',
documentation_url='http://docs.example.com',
homepage_url='http://example.com',
min_ansible_version=ansible_version[:3], # x.y
))
obj_path = os.path.join(init_path, obj_name)
elif galaxy_type == 'collection':
namespace, collection_name = obj_name.split('.', 1)
inject_data.update(dict(
namespace=namespace,
collection_name=collection_name,
version='1.0.0',
readme='README.md',
authors=['your name <[email protected]>'],
license=['GPL-2.0-or-later'],
repository='http://example.com/repository',
documentation='http://docs.example.com',
homepage='http://example.com',
issues='http://example.com/issue/tracker',
))
obj_path = os.path.join(init_path, namespace, collection_name)
b_obj_path = to_bytes(obj_path, errors='surrogate_or_strict')
if os.path.exists(b_obj_path):
if os.path.isfile(obj_path):
raise AnsibleError("- the path %s already exists, but is a file - aborting" % to_native(obj_path))
elif not force:
raise AnsibleError("- the directory %s already exists. "
"You can use --force to re-initialize this directory,\n"
"however it will reset any main.yml files that may have\n"
"been modified there already." % to_native(obj_path))
if obj_skeleton is not None:
own_skeleton = False
skeleton_ignore_expressions = C.GALAXY_ROLE_SKELETON_IGNORE
else:
own_skeleton = True
obj_skeleton = self.galaxy.default_role_skeleton_path
skeleton_ignore_expressions = ['^.*/.git_keep$']
obj_skeleton = os.path.expanduser(obj_skeleton)
skeleton_ignore_re = [re.compile(x) for x in skeleton_ignore_expressions]
if not os.path.exists(obj_skeleton):
raise AnsibleError("- the skeleton path '{0}' does not exist, cannot init {1}".format(
to_native(obj_skeleton), galaxy_type)
)
template_env = Environment(loader=FileSystemLoader(obj_skeleton))
# create role directory
if not os.path.exists(b_obj_path):
os.makedirs(b_obj_path)
for root, dirs, files in os.walk(obj_skeleton, topdown=True):
rel_root = os.path.relpath(root, obj_skeleton)
rel_dirs = rel_root.split(os.sep)
rel_root_dir = rel_dirs[0]
if galaxy_type == 'collection':
# A collection can contain templates in playbooks/*/templates and roles/*/templates
in_templates_dir = rel_root_dir in ['playbooks', 'roles'] and 'templates' in rel_dirs
else:
in_templates_dir = rel_root_dir == 'templates'
dirs[:] = [d for d in dirs if not any(r.match(d) for r in skeleton_ignore_re)]
for f in files:
filename, ext = os.path.splitext(f)
if any(r.match(os.path.join(rel_root, f)) for r in skeleton_ignore_re):
continue
elif galaxy_type == 'collection' and own_skeleton and rel_root == '.' and f == 'galaxy.yml.j2':
# Special use case for galaxy.yml.j2 in our own default collection skeleton. We build the options
# dynamically which requires special options to be set.
# The templated data's keys must match the key name but the inject data contains collection_name
# instead of name. We just make a copy and change the key back to name for this file.
template_data = inject_data.copy()
template_data['name'] = template_data.pop('collection_name')
meta_value = GalaxyCLI._get_skeleton_galaxy_yml(os.path.join(root, rel_root, f), template_data)
b_dest_file = to_bytes(os.path.join(obj_path, rel_root, filename), errors='surrogate_or_strict')
with open(b_dest_file, 'wb') as galaxy_obj:
galaxy_obj.write(to_bytes(meta_value, errors='surrogate_or_strict'))
elif ext == ".j2" and not in_templates_dir:
src_template = os.path.join(rel_root, f)
dest_file = os.path.join(obj_path, rel_root, filename)
template_env.get_template(src_template).stream(inject_data).dump(dest_file, encoding='utf-8')
else:
f_rel_path = os.path.relpath(os.path.join(root, f), obj_skeleton)
shutil.copyfile(os.path.join(root, f), os.path.join(obj_path, f_rel_path))
for d in dirs:
b_dir_path = to_bytes(os.path.join(obj_path, rel_root, d), errors='surrogate_or_strict')
if not os.path.exists(b_dir_path):
os.makedirs(b_dir_path)
display.display("- %s was created successfully" % obj_name)
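The templates-directory detection inside the skeleton walk above is easy to get wrong; a minimal standalone sketch of that check (a hypothetical helper for illustration, not part of the CLI):

```python
import os

def in_templates_dir(rel_root, galaxy_type):
    """Mirror of the detection logic in the skeleton walk above:
    collections may hold templates under playbooks/*/templates and
    roles/*/templates; plain roles only under a top-level templates/."""
    rel_dirs = rel_root.split(os.sep)
    if galaxy_type == 'collection':
        return rel_dirs[0] in ['playbooks', 'roles'] and 'templates' in rel_dirs
    return rel_dirs[0] == 'templates'
```

Files inside a templates directory keep their `.j2` extension untouched, while `.j2` files elsewhere are rendered by Jinja2 and have the extension stripped.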
def execute_info(self):
"""
Print out detailed information about an installed role, as well as info available from the Galaxy API.
"""
roles_path = context.CLIARGS['roles_path']
data = ''
for role in context.CLIARGS['args']:
role_info = {'path': roles_path}
gr = GalaxyRole(self.galaxy, role)
install_info = gr.install_info
if install_info:
if 'version' in install_info:
install_info['installed_version'] = install_info['version']
del install_info['version']
role_info.update(install_info)
remote_data = False
if not context.CLIARGS['offline']:
remote_data = self.api.lookup_role_by_name(role, False)
if remote_data:
role_info.update(remote_data)
if gr.metadata:
role_info.update(gr.metadata)
req = RoleRequirement()
role_spec = req.role_yaml_parse({'role': role})
if role_spec:
role_info.update(role_spec)
data = self._display_role_info(role_info)
# FIXME: This is broken in both 1.9 and 2.0 as
# _display_role_info() always returns something
if not data:
data = u"\n- the role %s was not found" % role
self.pager(data)
def execute_install(self):
"""
Install one or more roles (``ansible-galaxy role install``), or one or more collections (``ansible-galaxy collection install``).
You can pass in a list (roles or collections) or use the file
option listed below (these are mutually exclusive). If you pass in a list, each item
can be a name (which will be downloaded via the Galaxy API and GitHub), or it can be a local tar archive file.
"""
if context.CLIARGS['type'] == 'collection':
collections = context.CLIARGS['args']
force = context.CLIARGS['force']
output_path = context.CLIARGS['collections_path']
# TODO: use a list of servers that have been configured in ~/.ansible_galaxy
servers = [context.CLIARGS['api_server']]
ignore_certs = context.CLIARGS['ignore_certs']
ignore_errors = context.CLIARGS['ignore_errors']
requirements_file = context.CLIARGS['requirements']
no_deps = context.CLIARGS['no_deps']
force_deps = context.CLIARGS['force_with_deps']
if collections and requirements_file:
raise AnsibleError("The positional collection_name arg and --requirements-file are mutually exclusive.")
elif not collections and not requirements_file:
raise AnsibleError("You must specify a collection name or a requirements file.")
if requirements_file:
requirements_file = GalaxyCLI._resolve_path(requirements_file)
collection_requirements = parse_collections_requirements_file(requirements_file)
else:
collection_requirements = []
for collection_input in collections:
name, dummy, requirement = collection_input.partition(':')
collection_requirements.append((name, requirement or '*', None))
output_path = GalaxyCLI._resolve_path(output_path)
collections_path = C.COLLECTIONS_PATHS
if len([p for p in collections_path if p.startswith(output_path)]) == 0:
display.warning("The specified collections path '%s' is not part of the configured Ansible "
"collections paths '%s'. The installed collection won't be picked up in an Ansible "
"run." % (to_text(output_path), to_text(":".join(collections_path))))
if os.path.split(output_path)[1] != 'ansible_collections':
output_path = os.path.join(output_path, 'ansible_collections')
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
if not os.path.exists(b_output_path):
os.makedirs(b_output_path)
install_collections(collection_requirements, output_path, servers, (not ignore_certs), ignore_errors,
no_deps, force, force_deps)
return 0
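The `name:requirement` parsing above can be captured in a tiny helper (a sketch for illustration; the real code does this inline with `str.partition`):

```python
def parse_collection_input(collection_input):
    """Split 'namespace.name:requirement' into the (name, requirement, source)
    triple used above; a missing requirement defaults to '*' (any version)."""
    name, dummy, requirement = collection_input.partition(':')
    return (name, requirement or '*', None)
```

`partition` never raises on a missing separator, which is why the `or '*'` fallback works for bare collection names.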
role_file = context.CLIARGS['role_file']
if not context.CLIARGS['args'] and role_file is None:
# the user needs to specify one of either --role-file or specify a single user/role name
raise AnsibleOptionsError("- you must specify a user/role name or a roles file")
no_deps = context.CLIARGS['no_deps']
force_deps = context.CLIARGS['force_with_deps']
force = context.CLIARGS['force'] or force_deps
roles_left = []
if role_file:
try:
f = open(role_file, 'r')
if role_file.endswith('.yaml') or role_file.endswith('.yml'):
try:
required_roles = yaml.safe_load(f.read())
except Exception as e:
raise AnsibleError(
"Unable to load data from the requirements file (%s): %s" % (role_file, to_native(e))
)
if required_roles is None:
raise AnsibleError("No roles found in file: %s" % role_file)
for role in required_roles:
if "include" not in role:
role = RoleRequirement.role_yaml_parse(role)
display.vvv("found role %s in yaml file" % str(role))
if "name" not in role and "scm" not in role:
raise AnsibleError("Must specify name or src for role")
roles_left.append(GalaxyRole(self.galaxy, **role))
else:
with open(role["include"]) as f_include:
try:
roles_left += [
GalaxyRole(self.galaxy, **r) for r in
(RoleRequirement.role_yaml_parse(i) for i in yaml.safe_load(f_include))
]
except Exception as e:
msg = "Unable to load data from the include requirements file: %s %s"
raise AnsibleError(msg % (role_file, e))
else:
raise AnsibleError("Invalid role requirements file")
f.close()
except (IOError, OSError) as e:
raise AnsibleError('Unable to open %s: %s' % (role_file, to_native(e)))
else:
# roles were specified directly, so we'll just go out grab them
# (and their dependencies, unless the user doesn't want us to).
for rname in context.CLIARGS['args']:
role = RoleRequirement.role_yaml_parse(rname.strip())
roles_left.append(GalaxyRole(self.galaxy, **role))
for role in roles_left:
# only process roles in roles files when names matches if given
if role_file and context.CLIARGS['args'] and role.name not in context.CLIARGS['args']:
display.vvv('Skipping role %s' % role.name)
continue
display.vvv('Processing role %s ' % role.name)
# query the galaxy API for the role data
if role.install_info is not None:
if role.install_info['version'] != role.version or force:
if force:
display.display('- changing role %s from %s to %s' %
(role.name, role.install_info['version'], role.version or "unspecified"))
role.remove()
else:
display.warning('- %s (%s) is already installed - use --force to change version to %s' %
(role.name, role.install_info['version'], role.version or "unspecified"))
continue
else:
if not force:
display.display('- %s is already installed, skipping.' % str(role))
continue
try:
installed = role.install()
except AnsibleError as e:
display.warning(u"- %s was NOT installed successfully: %s " % (role.name, to_text(e)))
self.exit_without_ignore()
continue
# install dependencies, if we want them
if not no_deps and installed:
if not role.metadata:
display.warning("Meta file %s is empty. Skipping dependencies." % role.path)
else:
role_dependencies = role.metadata.get('dependencies') or []
for dep in role_dependencies:
display.debug('Installing dep %s' % dep)
dep_req = RoleRequirement()
dep_info = dep_req.role_yaml_parse(dep)
dep_role = GalaxyRole(self.galaxy, **dep_info)
if '.' not in dep_role.name and '.' not in dep_role.src and dep_role.scm is None:
# we know we can skip this, as it's not going to
# be found on galaxy.ansible.com
continue
if dep_role.install_info is None:
if dep_role not in roles_left:
display.display('- adding dependency: %s' % to_text(dep_role))
roles_left.append(dep_role)
else:
display.display('- dependency %s already pending installation.' % dep_role.name)
else:
if dep_role.install_info['version'] != dep_role.version:
if force_deps:
display.display('- changing dependent role %s from %s to %s' %
(dep_role.name, dep_role.install_info['version'], dep_role.version or "unspecified"))
dep_role.remove()
roles_left.append(dep_role)
else:
display.warning('- dependency %s (%s) from role %s differs from already installed version (%s), skipping' %
(to_text(dep_role), dep_role.version, role.name, dep_role.install_info['version']))
else:
if force_deps:
roles_left.append(dep_role)
else:
display.display('- dependency %s is already installed, skipping.' % dep_role.name)
if not installed:
display.warning("- %s was NOT installed successfully." % role.name)
self.exit_without_ignore()
return 0
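The role-plus-dependency processing above is essentially a breadth-first walk over a work list, with roles appended as their dependencies are discovered. A simplified sketch (the hypothetical `deps_of` mapping stands in for the Galaxy role metadata):

```python
def collect_dependencies(requested, deps_of):
    """Walk the work list the way execute_install does: pop a role, install
    it, and queue any of its dependencies not already seen or pending."""
    roles_left = list(requested)
    installed = []
    while roles_left:
        role = roles_left.pop(0)
        if role in installed:
            continue
        installed.append(role)
        for dep in deps_of.get(role, []):
            if dep not in installed and dep not in roles_left:
                roles_left.append(dep)
    return installed
```

This mirrors why the real loop iterates over `roles_left` while also appending to it: dependencies found mid-run are simply queued for a later pass.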
def execute_remove(self):
"""
Remove the list of roles passed as arguments from the local system.
"""
if not context.CLIARGS['args']:
raise AnsibleOptionsError('- you must specify at least one role to remove.')
for role_name in context.CLIARGS['args']:
role = GalaxyRole(self.galaxy, role_name)
try:
if role.remove():
display.display('- successfully removed %s' % role_name)
else:
display.display('- %s is not installed, skipping.' % role_name)
except Exception as e:
raise AnsibleError("Failed to remove role %s: %s" % (role_name, to_native(e)))
return 0
def execute_list(self):
"""
List the roles installed on the local system, or match a single role passed as an argument.
"""
def _display_role(gr):
install_info = gr.install_info
version = None
if install_info:
version = install_info.get("version", None)
if not version:
version = "(unknown version)"
display.display("- %s, %s" % (gr.name, version))
if context.CLIARGS['role']:
# show the requested role, if it exists
name = context.CLIARGS['role']
gr = GalaxyRole(self.galaxy, name)
if gr.metadata:
display.display('# %s' % os.path.dirname(gr.path))
_display_role(gr)
else:
display.display("- the role %s was not found" % name)
else:
# show all valid roles in the roles_path directory
roles_path = context.CLIARGS['roles_path']
path_found = False
warnings = []
for path in roles_path:
role_path = os.path.expanduser(path)
if not os.path.exists(role_path):
warnings.append("- the configured path %s does not exist." % role_path)
continue
elif not os.path.isdir(role_path):
warnings.append("- the configured path %s exists, but it is not a directory." % role_path)
continue
display.display('# %s' % role_path)
path_files = os.listdir(role_path)
path_found = True
for path_file in path_files:
gr = GalaxyRole(self.galaxy, path_file, path=path)
if gr.metadata:
_display_role(gr)
for w in warnings:
display.warning(w)
if not path_found:
raise AnsibleOptionsError("- None of the provided paths was usable. Please specify a valid path with --roles-path")
return 0
def execute_publish(self):
"""
Publish a collection into Ansible Galaxy. Requires the path to the collection tarball to publish.
"""
api_key = context.CLIARGS['api_key'] or GalaxyToken().get()
api_server = context.CLIARGS['api_server']
collection_path = GalaxyCLI._resolve_path(context.CLIARGS['args'])
ignore_certs = context.CLIARGS['ignore_certs']
wait = context.CLIARGS['wait']
publish_collection(collection_path, api_server, api_key, ignore_certs, wait)
def execute_search(self):
''' Search for roles on the Ansible Galaxy server. '''
page_size = 1000
search = None
if context.CLIARGS['args']:
search = '+'.join(context.CLIARGS['args'])
if not search and not context.CLIARGS['platforms'] and not context.CLIARGS['galaxy_tags'] and not context.CLIARGS['author']:
raise AnsibleError("Invalid query. At least one search term, platform, galaxy tag or author must be provided.")
response = self.api.search_roles(search, platforms=context.CLIARGS['platforms'],
tags=context.CLIARGS['galaxy_tags'], author=context.CLIARGS['author'], page_size=page_size)
if response['count'] == 0:
display.display("No roles match your search.", color=C.COLOR_ERROR)
return True
data = [u'']
if response['count'] > page_size:
data.append(u"Found %d roles matching your search. Showing first %s." % (response['count'], page_size))
else:
data.append(u"Found %d roles matching your search:" % response['count'])
max_len = []
for role in response['results']:
max_len.append(len(role['username'] + '.' + role['name']))
name_len = max(max_len)
format_str = u" %%-%ds %%s" % name_len
data.append(u'')
data.append(format_str % (u"Name", u"Description"))
data.append(format_str % (u"----", u"-----------"))
for role in response['results']:
data.append(format_str % (u'%s.%s' % (role['username'], role['name']), role['description']))
data = u'\n'.join(data)
self.pager(data)
return True
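The column layout built with `format_str` above relies on a dynamically sized `%-Ns` field, where `N` is the longest `username.name` in the result set. A condensed sketch of the same idea (a hypothetical `format_rows` helper, not the CLI's actual function):

```python
def format_rows(results):
    """Left-align each role's full name in a column sized to the longest
    name, followed by its description -- the same trick execute_search
    uses by interpolating the width into the format string itself."""
    name_len = max(len(r['username'] + '.' + r['name']) for r in results)
    fmt = u" %%-%ds %%s" % name_len  # e.g. " %-17s %s"
    return [fmt % (u'%s.%s' % (r['username'], r['name']), r['description'])
            for r in results]
```

Note the doubled `%%` in the outer format: the first interpolation only fills in the width, leaving real `%s` placeholders for the second.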
def execute_login(self):
"""
Verify the user's identity via GitHub and retrieve an auth token from Ansible Galaxy.
"""
# Authenticate with github and retrieve a token
if context.CLIARGS['token'] is None:
if C.GALAXY_TOKEN:
github_token = C.GALAXY_TOKEN
else:
login = GalaxyLogin(self.galaxy)
github_token = login.create_github_token()
else:
github_token = context.CLIARGS['token']
galaxy_response = self.api.authenticate(github_token)
if context.CLIARGS['token'] is None and C.GALAXY_TOKEN is None:
# Remove the token we created
login.remove_github_token()
# Store the Galaxy token
token = GalaxyToken()
token.set(galaxy_response['token'])
display.display("Successfully logged into Galaxy as %s" % galaxy_response['username'])
return 0
def execute_import(self):
""" used to import a role into Ansible Galaxy """
colors = {
'INFO': 'normal',
'WARNING': C.COLOR_WARN,
'ERROR': C.COLOR_ERROR,
'SUCCESS': C.COLOR_OK,
'FAILED': C.COLOR_ERROR,
}
if len(context.CLIARGS['args']) < 2:
raise AnsibleError("Expected a github_username and github_repository. Use --help.")
github_user = to_text(context.CLIARGS['args'][0], errors='surrogate_or_strict')
github_repo = to_text(context.CLIARGS['args'][1], errors='surrogate_or_strict')
if context.CLIARGS['check_status']:
task = self.api.get_import_task(github_user=github_user, github_repo=github_repo)
else:
# Submit an import request
task = self.api.create_import_task(github_user, github_repo,
reference=context.CLIARGS['reference'],
role_name=context.CLIARGS['role_name'])
if len(task) > 1:
# found multiple roles associated with github_user/github_repo
display.display("WARNING: More than one Galaxy role associated with GitHub repo %s/%s." % (github_user, github_repo),
color='yellow')
display.display("The following Galaxy roles are being updated:" + u'\n', color=C.COLOR_CHANGED)
for t in task:
display.display('%s.%s' % (t['summary_fields']['role']['namespace'], t['summary_fields']['role']['name']), color=C.COLOR_CHANGED)
display.display(u'\nTo properly namespace this role, remove each of the above and re-import %s/%s from scratch' % (github_user, github_repo),
color=C.COLOR_CHANGED)
return 0
# found a single role as expected
display.display("Successfully submitted import request %d" % task[0]['id'])
if not context.CLIARGS['wait']:
display.display("Role name: %s" % task[0]['summary_fields']['role']['name'])
display.display("Repo: %s/%s" % (task[0]['github_user'], task[0]['github_repo']))
if context.CLIARGS['check_status'] or context.CLIARGS['wait']:
# Get the status of the import
msg_list = []
finished = False
while not finished:
task = self.api.get_import_task(task_id=task[0]['id'])
for msg in task[0]['summary_fields']['task_messages']:
if msg['id'] not in msg_list:
display.display(msg['message_text'], color=colors[msg['message_type']])
msg_list.append(msg['id'])
if task[0]['state'] in ['SUCCESS', 'FAILED']:
finished = True
else:
time.sleep(10)
return 0
def execute_setup(self):
""" Set up an integration from GitHub or Travis for Ansible Galaxy roles. """
if context.CLIARGS['setup_list']:
# List existing integration secrets
secrets = self.api.list_secrets()
if len(secrets) == 0:
# None found
display.display("No integrations found.")
return 0
display.display(u'\n' + "ID Source Repo", color=C.COLOR_OK)
display.display("---------- ---------- ----------", color=C.COLOR_OK)
for secret in secrets:
display.display("%-10s %-10s %s/%s" % (secret['id'], secret['source'], secret['github_user'],
secret['github_repo']), color=C.COLOR_OK)
return 0
if context.CLIARGS['remove_id']:
# Remove a secret
self.api.remove_secret(context.CLIARGS['remove_id'])
display.display("Secret removed. Integrations using this secret will no longer work.", color=C.COLOR_OK)
return 0
source = context.CLIARGS['source']
github_user = context.CLIARGS['github_user']
github_repo = context.CLIARGS['github_repo']
secret = context.CLIARGS['secret']
resp = self.api.add_secret(source, github_user, github_repo, secret)
display.display("Added integration for %s %s/%s" % (resp['source'], resp['github_user'], resp['github_repo']))
return 0
def execute_delete(self):
""" Delete a role from Ansible Galaxy. """
github_user = context.CLIARGS['github_user']
github_repo = context.CLIARGS['github_repo']
resp = self.api.delete_role(github_user, github_repo)
if len(resp['deleted_roles']) > 1:
display.display("Deleted the following roles:")
display.display("ID User Name")
display.display("------ --------------- ----------")
for role in resp['deleted_roles']:
display.display("%-8s %-15s %s" % (role.id, role.namespace, role.name))
display.display(resp['status'])
return True
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,691 |
docker_container: address already in use with ipv4+ipv6 ip-bound port mappings
|
##### SUMMARY
A Docker container cannot be started with IP-bound port mappings when using both IPv4 and IPv6 addresses:
With #46596, I expected
docker_container:
name: "test123"
image: "ubuntu:latest"
tty: yes
interactive: yes
networks:
- name: bridge
purge_networks: yes
pull: yes
ports:
- "127.0.0.1:53:53/tcp"
- "127.0.0.1:53:53/udp"
- "[::1]:53:53/tcp"
- "[::1]:53:53/udp"
to behave like
docker run --rm -ti -p "127.0.0.1:53:53/tcp" -p "127.0.0.1:53:53/udp" -p "[::1]:53:53/tcp" -p "[::1]:53:53/udp" ubuntu:latest
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3e7204cabfeb ubuntu:latest "/bin/bash" 2 seconds ago Up 1 second 127.0.0.1:53->53/tcp, 127.0.0.1:53->53/udp, ::1:53->53/tcp, ::1:53->53/udp awesome_banach
However, the deployment fails with
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Error starting container a26b2b9b2497195af25770878fd2c084e984e3d2c70bef602261cf76aad1a0bb: 500 Server Error: Internal Server Error (\"driver failed programming external connectivity on endpoint test123 (c41303e4e84c59d838166594918137135dfbd22e0bce311e3302f88135874d87): Error starting userland proxy: listen tcp 0.0.0.0:53: bind: address already in use\")"}
Note that it says `0.0.0.0:53`, which may be related to #40258. Removing the two IPv6 mappings works as expected.
From what I can see, the mapping is correctly parsed in
`ansible.modules.cloud.docker.docker_container.TaskParameters._parse_publish_ports` - I suspect the error is in the passing to the docker-py module.
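For reference, docker-py accepts multiple host bindings for one container port as a list of `(ip, port)` tuples under a single `'port/proto'` key. A sketch of collecting them correctly (my assumption about where the merge should happen, not Ansible's actual code):

```python
def merge_port_bindings(parsed):
    """Collect all host (ip, port) pairs under one key per container port,
    e.g. {'53/tcp': [('127.0.0.1', 53), ('::1', 53)]}. If a later binding
    overwrote an earlier one instead of appending, only the last address
    would survive -- consistent with the 0.0.0.0 behaviour described above."""
    bindings = {}
    for container_port, host_ip, host_port in parsed:
        bindings.setdefault(container_port, []).append((host_ip, host_port))
    return bindings
```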
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
docker_container
##### ANSIBLE VERSION
```
ansible 2.8.3
config file = /home/johann/.ansible.cfg
configured module search path = ['/home/johann/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.3 (default, Apr 3 2019, 05:39:12) [GCC 8.3.0]
```
##### CONFIGURATION
```
ANSIBLE_SSH_ARGS(/home/johann/.ansible.cfg) = -o ControlMaster=auto -o ControlPersist=60s -o ForwardAgent=yes -o ControlPath="/home/johann/.ansible/ansible-ssh-%h-%p-%r"
DEFAULT_HOST_LIST(/home/johann/.ansible.cfg) = ['/home/johann/dev/repos/ercpe-ansible/hosts']
DEFAULT_TRANSPORT(/home/johann/.ansible.cfg) = ssh
DEFAULT_VAULT_PASSWORD_FILE(/home/johann/.ansible.cfg) = /home/johann/bin/ansible-vault-pass
```
##### OS / ENVIRONMENT
Tested on Ubuntu 19.04 (disco)
##### STEPS TO REPRODUCE
See Summary
##### EXPECTED RESULTS
See Summary
##### ACTUAL RESULTS
See Summary
|
https://github.com/ansible/ansible/issues/59691
|
https://github.com/ansible/ansible/pull/59715
|
f94772f807fed7fd6329faae25aa600dd9f030cf
|
a7573102bcbc7e88b8bf6de639be2696f1a5ad43
| 2019-07-28T14:29:47Z |
python
| 2019-08-02T15:10:39Z |
changelogs/fragments/59715-docker_container-ipv6-port-bind.yml
| |
lib/ansible/modules/cloud/docker/docker_container.py
|
#!/usr/bin/python
#
# Copyright 2016 Red Hat | Ansible
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: docker_container
short_description: manage docker containers
description:
- Manage the life cycle of docker containers.
- Supports check mode. Run with --check and --diff to view config difference and list of actions to be taken.
version_added: "2.1"
options:
auto_remove:
description:
- Enable auto-removal of the container on the daemon side when the container's process exits.
type: bool
default: no
version_added: "2.4"
blkio_weight:
description:
- Block IO (relative weight), between 10 and 1000.
type: int
capabilities:
description:
- List of capabilities to add to the container.
type: list
cap_drop:
description:
- List of capabilities to drop from the container.
type: list
version_added: "2.7"
cleanup:
description:
- Use with I(detach=false) to remove the container after successful execution.
type: bool
default: no
version_added: "2.2"
command:
description:
- Command to execute when the container starts.
A command may be either a string or a list.
- Prior to version 2.4, strings were split on commas.
type: raw
comparisons:
description:
- Allows specifying how properties of existing containers are compared with
module options to decide whether the container should be recreated / updated
or not. Only options which correspond to the state of a container as handled
by the Docker daemon can be specified, as well as C(networks).
- Must be a dictionary specifying for an option one of the keys C(strict), C(ignore)
and C(allow_more_present).
- If C(strict) is specified, values are tested for equality, and changes always
result in updating or restarting. If C(ignore) is specified, changes are ignored.
- C(allow_more_present) is allowed only for lists, sets and dicts. If it is
specified for lists or sets, the container will only be updated or restarted if
the module option contains a value which is not present in the container's
options. If the option is specified for a dict, the container will only be updated
or restarted if the module option contains a key which isn't present in the
container's option, or if the value of a key present differs.
- The wildcard option C(*) can be used to set one of the default values C(strict)
or C(ignore) to I(all) comparisons.
- See the examples for details.
type: dict
version_added: "2.8"
cpu_period:
description:
- Limit CPU CFS (Completely Fair Scheduler) period
type: int
cpu_quota:
description:
- Limit CPU CFS (Completely Fair Scheduler) quota
type: int
cpuset_cpus:
description:
- CPUs in which to allow execution C(1,3) or C(1-3).
type: str
cpuset_mems:
description:
- Memory nodes (MEMs) in which to allow execution C(0-3) or C(0,1)
type: str
cpu_shares:
description:
- CPU shares (relative weight).
type: int
detach:
description:
- Enable detached mode to leave the container running in background.
If disabled, the task will reflect the status of the container run (failed if the command failed).
type: bool
default: yes
devices:
description:
- "List of host device bindings to add to the container. Each binding is a mapping expressed
in the format: <path_on_host>:<path_in_container>:<cgroup_permissions>"
type: list
device_read_bps:
description:
- "List of device path and read rate (bytes per second) from device."
type: list
suboptions:
path:
description:
- Device path in the container.
type: str
required: yes
rate:
description:
- "Device read limit. Format: <number>[<unit>]"
- "Number is a positive integer. Unit can be one of C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)"
- "Omitting the unit defaults to bytes."
type: str
required: yes
version_added: "2.8"
device_write_bps:
description:
- "List of device and write rate (bytes per second) to device."
type: list
suboptions:
path:
description:
- Device path in the container.
type: str
required: yes
rate:
description:
- "Device write limit. Format: <number>[<unit>]"
- "Number is a positive integer. Unit can be one of C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)"
- "Omitting the unit defaults to bytes."
type: str
required: yes
version_added: "2.8"
device_read_iops:
description:
- "List of device and read rate (IO per second) from device."
type: list
suboptions:
path:
description:
- Device path in the container.
type: str
required: yes
rate:
description:
- "Device read limit."
- "Must be a positive integer."
type: int
required: yes
version_added: "2.8"
device_write_iops:
description:
- "List of device and write rate (IO per second) to device."
type: list
suboptions:
path:
description:
- Device path in the container.
type: str
required: yes
rate:
description:
- "Device write limit."
- "Must be a positive integer."
type: int
required: yes
version_added: "2.8"
dns_opts:
description:
- List of DNS options.
type: list
dns_servers:
description:
- List of custom DNS servers.
type: list
dns_search_domains:
description:
- List of custom DNS search domains.
type: list
domainname:
description:
- Container domainname.
type: str
version_added: "2.5"
env:
description:
- Dictionary of key,value pairs.
- Values which might be parsed as numbers, booleans or other types by the YAML parser must be quoted (e.g. C("true")) in order to avoid data loss.
type: dict
env_file:
description:
- Path to a file, present on the target, containing environment variables I(FOO=BAR).
- If a variable is also present in C(env), then the C(env) value will override.
type: path
version_added: "2.2"
entrypoint:
description:
- Command that overwrites the default ENTRYPOINT of the image.
type: list
etc_hosts:
description:
- Dict of host-to-IP mappings, where each host name is a key in the dictionary.
Each host name will be added to the container's /etc/hosts file.
type: dict
exposed_ports:
description:
- List of additional container ports which informs Docker that the container
listens on the specified network ports at runtime.
If the port is already exposed using EXPOSE in a Dockerfile, it does not
need to be exposed again.
type: list
aliases:
- exposed
- expose
force_kill:
description:
- Use the kill command when stopping a running container.
type: bool
default: no
aliases:
- forcekill
groups:
description:
- List of additional group names and/or IDs that the container process will run as.
type: list
healthcheck:
description:
- 'Configure a check that is run to determine whether or not containers for this service are "healthy".
See the docs for the L(HEALTHCHECK Dockerfile instruction,https://docs.docker.com/engine/reference/builder/#healthcheck)
for details on how healthchecks work.'
- 'I(interval), I(timeout) and I(start_period) are specified as durations. They accept a duration as a string in a format
that looks like: C(5h34m56s), C(1m30s) etc. The supported units are C(us), C(ms), C(s), C(m) and C(h).'
type: dict
suboptions:
test:
description:
- Command to run to check health.
- Must be either a string or a list. If it is a list, the first item must be one of C(NONE), C(CMD) or C(CMD-SHELL).
type: raw
interval:
description:
- 'Time between running the check. (default: 30s)'
type: str
timeout:
description:
- 'Maximum time to allow one check to run. (default: 30s)'
type: str
retries:
description:
          - 'Consecutive failures needed to report unhealthy. It accepts an integer value. (default: 3)'
type: int
start_period:
description:
- 'Start period for the container to initialize before starting health-retries countdown. (default: 0s)'
type: str
version_added: "2.8"
hostname:
description:
- Container hostname.
type: str
ignore_image:
description:
- When C(state) is I(present) or I(started) the module compares the configuration of an existing
container to requested configuration. The evaluation includes the image version. If
the image version in the registry does not match the container, the container will be
recreated. Stop this behavior by setting C(ignore_image) to I(True).
- I(Warning:) This option is ignored if C(image) or C(*) is used for the C(comparisons) option.
type: bool
default: no
version_added: "2.2"
image:
description:
- Repository path and tag used to create the container. If an image is not found or pull is true, the image
will be pulled from the registry. If no tag is included, C(latest) will be used.
- Can also be an image ID. If this is the case, the image is assumed to be available locally.
The C(pull) option is ignored for this case.
type: str
init:
description:
- Run an init inside the container that forwards signals and reaps processes.
This option requires Docker API >= 1.25.
type: bool
default: no
version_added: "2.6"
interactive:
description:
- Keep stdin open after a container is launched, even if not attached.
type: bool
default: no
ipc_mode:
description:
- Set the IPC mode for the container. Can be one of 'container:<name|id>' to reuse another
container's IPC namespace or 'host' to use the host's IPC namespace within the container.
type: str
keep_volumes:
description:
- Retain volumes associated with a removed container.
type: bool
default: yes
kill_signal:
description:
- Override default signal used to kill a running container.
type: str
kernel_memory:
description:
- "Kernel memory limit (format: C(<number>[<unit>])). Number is a positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte). Minimum is C(4M)."
- Omitting the unit defaults to bytes.
type: str
labels:
description:
- Dictionary of key value pairs.
type: dict
links:
description:
- List of name aliases for linked containers in the format C(container_name:alias).
- Setting this will force container to be restarted.
type: list
log_driver:
description:
- Specify the logging driver. Docker uses I(json-file) by default.
- See L(here,https://docs.docker.com/config/containers/logging/configure/) for possible choices.
type: str
log_options:
description:
      - Dictionary of options specific to the chosen log_driver.
      - See L(the Docker logging documentation,https://docs.docker.com/engine/admin/logging/overview/) for details.
type: dict
aliases:
- log_opt
mac_address:
description:
      - Container MAC address (e.g. 92:d0:c6:0a:29:33).
type: str
memory:
description:
- "Memory limit (format: C(<number>[<unit>])). Number is a positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- Omitting the unit defaults to bytes.
type: str
default: '0'
memory_reservation:
description:
- "Memory soft limit (format: C(<number>[<unit>])). Number is a positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- Omitting the unit defaults to bytes.
type: str
memory_swap:
description:
- "Total memory limit (memory + swap, format: C(<number>[<unit>])).
Number is a positive integer. Unit can be C(B) (byte), C(K) (kibibyte, 1024B),
C(M) (mebibyte), C(G) (gibibyte), C(T) (tebibyte), or C(P) (pebibyte)."
- Omitting the unit defaults to bytes.
type: str
memory_swappiness:
description:
- Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100.
      - If not set, the value will remain the same if the container exists and will be inherited from the host machine if it is (re-)created.
type: int
name:
description:
- Assign a name to a new container or match an existing container.
      - When identifying an existing container, name may be a name or a long or short container ID.
type: str
required: yes
network_mode:
description:
      - Connect the container to a network. Choices are "bridge", "host", "none" or "container:<name|id>".
type: str
userns_mode:
description:
- Set the user namespace mode for the container. Currently, the only valid value is C(host).
type: str
version_added: "2.5"
networks:
description:
- List of networks the container belongs to.
- For examples of the data structure and usage see EXAMPLES below.
- To remove a container from one or more networks, use the C(purge_networks) option.
- Note that as opposed to C(docker run ...), M(docker_container) does not remove the default
        network if C(networks) is specified. You need to explicitly use C(purge_networks) to enforce
the removal of the default network (and all other networks not explicitly mentioned in C(networks)).
type: list
suboptions:
name:
description:
- The network's name.
type: str
required: yes
ipv4_address:
description:
- The container's IPv4 address in this network.
type: str
ipv6_address:
description:
- The container's IPv6 address in this network.
type: str
links:
description:
- A list of containers to link to.
type: list
aliases:
description:
- List of aliases for this container in this network. These names
can be used in the network to reach this container.
type: list
version_added: "2.2"
networks_cli_compatible:
description:
- "When networks are provided to the module via the I(networks) option, the module
behaves differently than C(docker run --network): C(docker run --network other)
will create a container with network C(other) attached, but the default network
not attached. This module with C(networks: {name: other}) will create a container
with both C(default) and C(other) attached. If I(purge_networks) is set to C(yes),
the C(default) network will be removed afterwards."
- "If I(networks_cli_compatible) is set to C(yes), this module will behave as
C(docker run --network) and will I(not) add the default network if C(networks) is
specified. If C(networks) is not specified, the default network will be attached."
- "Note that docker CLI also sets C(network_mode) to the name of the first network
added if C(--network) is specified. For more compatibility with docker CLI, you
explicitly have to set C(network_mode) to the name of the first network you're
adding."
- Current value is C(no). A new default of C(yes) will be set in Ansible 2.12.
type: bool
version_added: "2.8"
oom_killer:
description:
- Whether or not to disable OOM Killer for the container.
type: bool
oom_score_adj:
description:
- An integer value containing the score given to the container in order to tune
OOM killer preferences.
type: int
version_added: "2.2"
output_logs:
description:
      - If set to true, output of the container command will be printed (only effective
        when log_driver is set to json-file or journald).
type: bool
default: no
version_added: "2.7"
paused:
description:
- Use with the started state to pause running processes inside the container.
type: bool
default: no
pid_mode:
description:
- Set the PID namespace mode for the container.
- Note that Docker SDK for Python < 2.0 only supports 'host'. Newer versions of the
Docker SDK for Python (docker) allow all values supported by the docker daemon.
type: str
pids_limit:
description:
- Set PIDs limit for the container. It accepts an integer value.
- Set -1 for unlimited PIDs.
type: int
version_added: "2.8"
privileged:
description:
- Give extended privileges to the container.
type: bool
default: no
published_ports:
description:
- List of ports to publish from the container to the host.
- "Use docker CLI syntax: C(8000), C(9000:8000), or C(0.0.0.0:9000:8000), where 8000 is a
container port, 9000 is a host port, and 0.0.0.0 is a host interface."
- Port ranges can be used for source and destination ports. If two ranges with
different lengths are specified, the shorter range will be used.
- "Bind addresses must be either IPv4 or IPv6 addresses. Hostnames are I(not) allowed. This
is different from the C(docker) command line utility. Use the L(dig lookup,../lookup/dig.html)
to resolve hostnames."
- A value of C(all) will publish all exposed container ports to random host ports, ignoring
any other mappings.
- If C(networks) parameter is provided, will inspect each network to see if there exists
a bridge network with optional parameter com.docker.network.bridge.host_binding_ipv4.
If such a network is found, then published ports where no host IP address is specified
will be bound to the host IP pointed to by com.docker.network.bridge.host_binding_ipv4.
Note that the first bridge network with a com.docker.network.bridge.host_binding_ipv4
value encountered in the list of C(networks) is the one that will be used.
type: list
aliases:
- ports
pull:
description:
- If true, always pull the latest version of an image. Otherwise, will only pull an image
when missing.
      - I(Note) that images are only pulled when specified by name. If the image is specified
        as an image ID (hash), it cannot be pulled.
type: bool
default: no
purge_networks:
description:
- Remove the container from ALL networks not included in C(networks) parameter.
- Any default networks such as I(bridge), if not found in C(networks), will be removed as well.
type: bool
default: no
version_added: "2.2"
read_only:
description:
- Mount the container's root file system as read-only.
type: bool
default: no
recreate:
description:
- Use with present and started states to force the re-creation of an existing container.
type: bool
default: no
restart:
description:
- Use with started state to force a matching container to be stopped and restarted.
type: bool
default: no
restart_policy:
description:
- Container restart policy. Place quotes around I(no) option.
type: str
choices:
- 'no'
- 'on-failure'
- 'always'
- 'unless-stopped'
restart_retries:
description:
- Use with restart policy to control maximum number of restart attempts.
type: int
runtime:
description:
- Runtime to use for the container.
type: str
version_added: "2.8"
shm_size:
description:
- "Size of C(/dev/shm) (format: C(<number>[<unit>])). Number is positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- Omitting the unit defaults to bytes. If you omit the size entirely, the system uses C(64M).
type: str
security_opts:
description:
      - List of security options in the form of C("label:user:User").
type: list
state:
description:
- 'I(absent) - A container matching the specified name will be stopped and removed. Use force_kill to kill the container
rather than stopping it. Use keep_volumes to retain volumes associated with the removed container.'
- 'I(present) - Asserts the existence of a container matching the name and any provided configuration parameters. If no
container matches the name, a container will be created. If a container matches the name but the provided configuration
does not match, the container will be updated, if it can be. If it cannot be updated, it will be removed and re-created
with the requested config. Image version will be taken into account when comparing configuration. To ignore image
version use the ignore_image option. Use the recreate option to force the re-creation of the matching container. Use
force_kill to kill the container rather than stopping it. Use keep_volumes to retain volumes associated with a removed
container.'
- 'I(started) - Asserts there is a running container matching the name and any provided configuration. If no container
matches the name, a container will be created and started. If a container matching the name is found but the
configuration does not match, the container will be updated, if it can be. If it cannot be updated, it will be removed
and a new container will be created with the requested configuration and started. Image version will be taken into
account when comparing configuration. To ignore image version use the ignore_image option. Use recreate to always
re-create a matching container, even if it is running. Use restart to force a matching container to be stopped and
restarted. Use force_kill to kill a container rather than stopping it. Use keep_volumes to retain volumes associated
with a removed container.'
- 'I(stopped) - Asserts that the container is first I(present), and then if the container is running moves it to a stopped
state. Use force_kill to kill a container rather than stopping it.'
type: str
default: started
choices:
- absent
- present
- stopped
- started
stop_signal:
description:
- Override default signal used to stop the container.
type: str
stop_timeout:
description:
- Number of seconds to wait for the container to stop before sending SIGKILL.
When the container is created by this module, its C(StopTimeout) configuration
will be set to this value.
- When the container is stopped, will be used as a timeout for stopping the
container. In case the container has a custom C(StopTimeout) configuration,
the behavior depends on the version of the docker daemon. New versions of
the docker daemon will always use the container's configured C(StopTimeout)
value if it has been configured.
type: int
trust_image_content:
description:
- If C(yes), skip image verification.
type: bool
default: no
tmpfs:
description:
      - Mount a tmpfs directory.
type: list
version_added: 2.4
tty:
description:
- Allocate a pseudo-TTY.
type: bool
default: no
ulimits:
description:
- "List of ulimit options. A ulimit is specified as C(nofile:262144:262144)"
type: list
sysctls:
description:
- Dictionary of key,value pairs.
type: dict
version_added: 2.4
user:
description:
- Sets the username or UID used and optionally the groupname or GID for the specified command.
- "Can be [ user | user:group | uid | uid:gid | user:gid | uid:group ]"
type: str
uts:
description:
- Set the UTS namespace mode for the container.
type: str
volumes:
description:
- List of volumes to mount within the container.
- "Use docker CLI-style syntax: C(/host:/container[:mode])"
- "Mount modes can be a comma-separated list of various modes such as C(ro), C(rw), C(consistent),
C(delegated), C(cached), C(rprivate), C(private), C(rshared), C(shared), C(rslave), C(slave), and
C(nocopy). Note that the docker daemon might not support all modes and combinations of such modes."
- SELinux hosts can additionally use C(z) or C(Z) to use a shared or
private label for the volume.
- "Note that Ansible 2.7 and earlier only supported one mode, which had to be one of C(ro), C(rw),
C(z), and C(Z)."
type: list
volume_driver:
description:
- The container volume driver.
type: str
volumes_from:
description:
- List of container names or Ids to get volumes from.
type: list
working_dir:
description:
- Path to the working directory.
type: str
version_added: "2.4"
extends_documentation_fragment:
- docker
- docker.docker_py_1_documentation
author:
- "Cove Schneider (@cove)"
- "Joshua Conner (@joshuaconner)"
- "Pavel Antonov (@softzilla)"
- "Thomas Steinbach (@ThomasSteinbach)"
- "Philippe Jandot (@zfil)"
- "Daan Oosterveld (@dusdanig)"
- "Chris Houseknecht (@chouseknecht)"
- "Kassian Sun (@kassiansun)"
- "Felix Fontein (@felixfontein)"
requirements:
- "L(Docker SDK for Python,https://docker-py.readthedocs.io/en/stable/) >= 1.8.0 (use L(docker-py,https://pypi.org/project/docker-py/) for Python 2.6)"
- "Docker API >= 1.20"
'''
EXAMPLES = '''
- name: Create a data container
docker_container:
name: mydata
image: busybox
volumes:
- /data
- name: Re-create a redis container
docker_container:
name: myredis
image: redis
command: redis-server --appendonly yes
state: present
recreate: yes
exposed_ports:
- 6379
volumes_from:
- mydata
- name: Restart a container
docker_container:
name: myapplication
image: someuser/appimage
state: started
restart: yes
links:
- "myredis:aliasedredis"
devices:
- "/dev/sda:/dev/xvda:rwm"
ports:
- "8080:9000"
- "127.0.0.1:8081:9001/udp"
env:
SECRET_KEY: "ssssh"
# Values which might be parsed as numbers, booleans or other types by the YAML parser need to be quoted
BOOLEAN_KEY: "yes"
- name: Container present
docker_container:
name: mycontainer
state: present
image: ubuntu:14.04
command: sleep infinity
- name: Stop a container
docker_container:
name: mycontainer
state: stopped
- name: Start 4 load-balanced containers
docker_container:
name: "container{{ item }}"
recreate: yes
image: someuser/anotherappimage
command: sleep 1d
with_sequence: count=4
- name: Remove container
docker_container:
name: ohno
state: absent
- name: Syslogging output
docker_container:
name: myservice
image: busybox
log_driver: syslog
log_options:
syslog-address: tcp://my-syslog-server:514
syslog-facility: daemon
# NOTE: in Docker 1.13+ the "syslog-tag" option was renamed to "tag" for
# older docker installs, use "syslog-tag" instead
tag: myservice
- name: Create db container and connect to network
docker_container:
name: db_test
image: "postgres:latest"
networks:
- name: "{{ docker_network_name }}"
- name: Start container, connect to network and link
docker_container:
name: sleeper
image: ubuntu:14.04
networks:
- name: TestingNet
ipv4_address: "172.1.1.100"
aliases:
- sleepyzz
links:
- db_test:db
- name: TestingNet2
- name: Start a container with a command
docker_container:
name: sleepy
image: ubuntu:14.04
command: ["sleep", "infinity"]
- name: Add container to networks
docker_container:
name: sleepy
networks:
- name: TestingNet
ipv4_address: 172.1.1.18
links:
- sleeper
- name: TestingNet2
ipv4_address: 172.1.10.20
- name: Update network with aliases
docker_container:
name: sleepy
networks:
- name: TestingNet
aliases:
- sleepyz
- zzzz
- name: Remove container from one network
docker_container:
name: sleepy
networks:
- name: TestingNet2
purge_networks: yes
- name: Remove container from all networks
docker_container:
name: sleepy
purge_networks: yes
- name: Start a container and use an env file
docker_container:
name: agent
image: jenkinsci/ssh-slave
env_file: /var/tmp/jenkins/agent.env
- name: Create a container with limited capabilities
docker_container:
name: sleepy
image: ubuntu:16.04
command: sleep infinity
capabilities:
- sys_time
cap_drop:
- all
- name: Finer container restart/update control
docker_container:
name: test
image: ubuntu:18.04
env:
arg1: "true"
arg2: "whatever"
volumes:
- /tmp:/tmp
comparisons:
image: ignore # don't restart containers with older versions of the image
env: strict # we want precisely this environment
volumes: allow_more_present # if there are more volumes, that's ok, as long as `/tmp:/tmp` is there
- name: Finer container restart/update control II
docker_container:
name: test
image: ubuntu:18.04
env:
arg1: "true"
arg2: "whatever"
comparisons:
'*': ignore # by default, ignore *all* options (including image)
env: strict # except for environment variables; there, we want to be strict
- name: Start container with healthstatus
docker_container:
name: nginx-proxy
image: nginx:1.13
state: started
healthcheck:
# Check if nginx server is healthy by curl'ing the server.
# If this fails or timeouts, the healthcheck fails.
test: ["CMD", "curl", "--fail", "http://nginx.host.com"]
interval: 1m30s
timeout: 10s
retries: 3
start_period: 30s
- name: Remove healthcheck from container
docker_container:
name: nginx-proxy
image: nginx:1.13
state: started
healthcheck:
# The "NONE" check needs to be specified
test: ["NONE"]
- name: Start container with block device read limit
docker_container:
name: test
image: ubuntu:18.04
state: started
device_read_bps:
# Limit read rate for /dev/sda to 20 mebibytes per second
- path: /dev/sda
rate: 20M
device_read_iops:
# Limit read rate for /dev/sdb to 300 IO per second
- path: /dev/sdb
rate: 300
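
# An illustrative sketch (container name and image are hypothetical):
# publishing a contiguous port range with docker CLI syntax, as described
# for the published_ports option above.
- name: Publish a port range
  docker_container:
    name: rangetest
    image: ubuntu:18.04
    command: sleep infinity
    published_ports:
      - "9000-9010:9000-9010"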
'''
RETURN = '''
container:
description:
- Facts representing the current state of the container. Matches the docker inspection output.
- Note that facts are part of the registered vars since Ansible 2.8. For compatibility reasons, the facts
are also accessible directly as C(docker_container). Note that the returned fact will be removed in Ansible 2.12.
- Before 2.3 this was C(ansible_docker_container) but was renamed in 2.3 to C(docker_container) due to
conflicts with the connection plugin.
      - Empty if C(state) is I(absent).
      - If I(detach) is C(false), will include C(Output) attribute containing any output from container run.
returned: always
type: dict
sample: '{
"AppArmorProfile": "",
"Args": [],
"Config": {
"AttachStderr": false,
"AttachStdin": false,
"AttachStdout": false,
"Cmd": [
"/usr/bin/supervisord"
],
"Domainname": "",
"Entrypoint": null,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"ExposedPorts": {
"443/tcp": {},
"80/tcp": {}
},
"Hostname": "8e47bf643eb9",
"Image": "lnmp_nginx:v1",
"Labels": {},
"OnBuild": null,
"OpenStdin": false,
"StdinOnce": false,
"Tty": false,
"User": "",
"Volumes": {
"/tmp/lnmp/nginx-sites/logs/": {}
},
...
}'
'''
import os
import re
import shlex
import traceback
from distutils.version import LooseVersion
from ansible.module_utils.common.text.formatters import human_to_bytes
from ansible.module_utils.docker.common import (
AnsibleDockerClient,
DifferenceTracker,
DockerBaseClass,
compare_generic,
is_image_name_id,
sanitize_result,
parse_healthcheck,
DOCKER_COMMON_ARGS,
RequestException,
)
from ansible.module_utils.six import string_types
try:
from docker import utils
from ansible.module_utils.docker.common import docker_version
if LooseVersion(docker_version) >= LooseVersion('1.10.0'):
from docker.types import Ulimit, LogConfig
else:
from docker.utils.types import Ulimit, LogConfig
from docker.errors import DockerException, APIError, NotFound
except Exception:
# missing Docker SDK for Python handled in ansible.module_utils.docker.common
pass
REQUIRES_CONVERSION_TO_BYTES = [
'kernel_memory',
'memory',
'memory_reservation',
'memory_swap',
'shm_size'
]
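# The options above accept human-readable sizes such as '512M' or '1G';
# human_to_bytes converts them to integers, e.g. human_to_bytes('64M')
# returns 67108864 (64 * 2 ** 20).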
def is_volume_permissions(mode):
    for part in mode.split(','):
        if part not in ('rw', 'ro', 'z', 'Z', 'consistent', 'delegated', 'cached', 'rprivate', 'private', 'rshared', 'shared', 'rslave', 'slave', 'nocopy'):
            return False
    return True
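# For illustration: a mode string is valid only if every comma-separated part
# is a recognized mode, so is_volume_permissions('ro,z') is True while
# is_volume_permissions('ro,bogus') is False.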
def parse_port_range(range_or_port, client):
'''
Parses a string containing either a single port or a range of ports.
    Returns a list of integers for each port in the range.
'''
if '-' in range_or_port:
start, end = [int(port) for port in range_or_port.split('-')]
if end < start:
client.fail('Invalid port range: {0}'.format(range_or_port))
return list(range(start, end + 1))
else:
return [int(range_or_port)]
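# For example, parse_port_range('8000-8002', client) returns [8000, 8001, 8002]
# and parse_port_range('8080', client) returns [8080].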
def split_colon_ipv6(text, client):
    '''
    Split string by ':', while keeping IPv6 addresses in square brackets in one component.
    '''
    if '[' not in text:
        return text.split(':')
    start = 0
    result = []
    while start < len(text):
        i = text.find('[', start)
        if i < 0:
            result.extend(text[start:].split(':'))
            break
        j = text.find(']', i)
        if j < 0:
            client.fail('Cannot find closing "]" in input "{0}" for opening "[" at index {1}!'.format(text, i + 1))
        result.extend(text[start:i].split(':'))
        k = text.find(':', j)
        if k < 0:
            result[-1] += text[i:]
            start = len(text)
        else:
            result[-1] += text[i:k]
            if k == len(text):
                result.append('')
                break
            start = k + 1
    return result
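# For example, split_colon_ipv6('[2001:db8::1]:80:80', client) returns
# ['[2001:db8::1]', '80', '80'], keeping the bracketed IPv6 address intact.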
class TaskParameters(DockerBaseClass):
'''
Access and parse module parameters
'''
def __init__(self, client):
super(TaskParameters, self).__init__()
self.client = client
self.auto_remove = None
self.blkio_weight = None
self.capabilities = None
self.cap_drop = None
self.cleanup = None
self.command = None
self.cpu_period = None
self.cpu_quota = None
self.cpuset_cpus = None
self.cpuset_mems = None
self.cpu_shares = None
self.detach = None
self.debug = None
self.devices = None
self.device_read_bps = None
self.device_write_bps = None
self.device_read_iops = None
self.device_write_iops = None
self.dns_servers = None
self.dns_opts = None
self.dns_search_domains = None
self.domainname = None
self.env = None
self.env_file = None
self.entrypoint = None
self.etc_hosts = None
self.exposed_ports = None
self.force_kill = None
self.groups = None
self.healthcheck = None
self.hostname = None
self.ignore_image = None
self.image = None
self.init = None
self.interactive = None
self.ipc_mode = None
self.keep_volumes = None
self.kernel_memory = None
self.kill_signal = None
self.labels = None
self.links = None
self.log_driver = None
self.output_logs = None
self.log_options = None
self.mac_address = None
self.memory = None
self.memory_reservation = None
self.memory_swap = None
self.memory_swappiness = None
self.name = None
self.network_mode = None
self.userns_mode = None
self.networks = None
self.networks_cli_compatible = None
self.oom_killer = None
self.oom_score_adj = None
self.paused = None
self.pid_mode = None
self.pids_limit = None
self.privileged = None
self.purge_networks = None
self.pull = None
self.read_only = None
self.recreate = None
self.restart = None
self.restart_retries = None
self.restart_policy = None
self.runtime = None
self.shm_size = None
self.security_opts = None
self.state = None
self.stop_signal = None
self.stop_timeout = None
self.tmpfs = None
self.trust_image_content = None
self.tty = None
self.user = None
self.uts = None
self.volumes = None
self.volume_binds = dict()
self.volumes_from = None
self.volume_driver = None
self.working_dir = None
for key, value in client.module.params.items():
setattr(self, key, value)
self.comparisons = client.comparisons
# If state is 'absent', parameters do not have to be parsed or interpreted.
# Only the container's name is needed.
if self.state == 'absent':
return
if self.groups:
# In case integers are passed as groups, we need to convert them to
# strings as docker internally treats them as strings.
self.groups = [str(g) for g in self.groups]
for param_name in REQUIRES_CONVERSION_TO_BYTES:
if client.module.params.get(param_name):
try:
setattr(self, param_name, human_to_bytes(client.module.params.get(param_name)))
except ValueError as exc:
self.fail("Failed to convert %s to bytes: %s" % (param_name, exc))
self.publish_all_ports = False
self.published_ports = self._parse_publish_ports()
if self.published_ports in ('all', 'ALL'):
self.publish_all_ports = True
self.published_ports = None
self.ports = self._parse_exposed_ports(self.published_ports)
self.log("expose ports:")
self.log(self.ports, pretty_print=True)
self.links = self._parse_links(self.links)
if self.volumes:
self.volumes = self._expand_host_paths()
self.tmpfs = self._parse_tmpfs()
self.env = self._get_environment()
self.ulimits = self._parse_ulimits()
self.sysctls = self._parse_sysctls()
self.log_config = self._parse_log_config()
try:
self.healthcheck, self.disable_healthcheck = parse_healthcheck(self.healthcheck)
except ValueError as e:
self.fail(str(e))
self.exp_links = None
self.volume_binds = self._get_volume_binds(self.volumes)
self.pid_mode = self._replace_container_names(self.pid_mode)
self.ipc_mode = self._replace_container_names(self.ipc_mode)
self.network_mode = self._replace_container_names(self.network_mode)
self.log("volumes:")
self.log(self.volumes, pretty_print=True)
self.log("volume binds:")
self.log(self.volume_binds, pretty_print=True)
if self.networks:
for network in self.networks:
network['id'] = self._get_network_id(network['name'])
if not network['id']:
self.fail("Parameter error: network named %s could not be found. Does it exist?" % network['name'])
if network.get('links'):
network['links'] = self._parse_links(network['links'])
if self.mac_address:
# Ensure the MAC address uses colons instead of hyphens for later comparison
self.mac_address = self.mac_address.replace('-', ':')
if self.entrypoint:
# convert from list to str.
self.entrypoint = ' '.join([str(x) for x in self.entrypoint])
if self.command:
# convert from list to str
if isinstance(self.command, list):
self.command = ' '.join([str(x) for x in self.command])
for param_name in ["device_read_bps", "device_write_bps"]:
if client.module.params.get(param_name):
self._process_rate_bps(option=param_name)
for param_name in ["device_read_iops", "device_write_iops"]:
if client.module.params.get(param_name):
self._process_rate_iops(option=param_name)
def fail(self, msg):
self.client.fail(msg)
@property
def update_parameters(self):
'''
Returns parameters used to update a container
'''
update_parameters = dict(
blkio_weight='blkio_weight',
cpu_period='cpu_period',
cpu_quota='cpu_quota',
cpu_shares='cpu_shares',
cpuset_cpus='cpuset_cpus',
cpuset_mems='cpuset_mems',
mem_limit='memory',
mem_reservation='memory_reservation',
memswap_limit='memory_swap',
kernel_memory='kernel_memory',
)
result = dict()
for key, value in update_parameters.items():
if getattr(self, value, None) is not None:
if self.client.option_minimal_versions[value]['supported']:
result[key] = getattr(self, value)
return result
@property
def create_parameters(self):
'''
Returns parameters used to create a container
'''
create_params = dict(
command='command',
domainname='domainname',
hostname='hostname',
user='user',
detach='detach',
stdin_open='interactive',
tty='tty',
ports='ports',
environment='env',
name='name',
entrypoint='entrypoint',
mac_address='mac_address',
labels='labels',
stop_signal='stop_signal',
working_dir='working_dir',
stop_timeout='stop_timeout',
healthcheck='healthcheck',
)
if self.client.docker_py_version < LooseVersion('3.0'):
# cpu_shares and volume_driver moved to create_host_config in > 3
create_params['cpu_shares'] = 'cpu_shares'
create_params['volume_driver'] = 'volume_driver'
result = dict(
host_config=self._host_config(),
volumes=self._get_mounts(),
)
for key, value in create_params.items():
if getattr(self, value, None) is not None:
if self.client.option_minimal_versions[value]['supported']:
result[key] = getattr(self, value)
if self.networks_cli_compatible and self.networks:
network = self.networks[0]
params = dict()
for para in ('ipv4_address', 'ipv6_address', 'links', 'aliases'):
if network.get(para):
params[para] = network[para]
network_config = dict()
network_config[network['name']] = self.client.create_endpoint_config(**params)
result['networking_config'] = self.client.create_networking_config(network_config)
return result
def _expand_host_paths(self):
new_vols = []
for vol in self.volumes:
if ':' in vol:
if len(vol.split(':')) == 3:
host, container, mode = vol.split(':')
if not is_volume_permissions(mode):
self.fail('Found invalid volumes mode: {0}'.format(mode))
if re.match(r'[.~]', host):
host = os.path.abspath(os.path.expanduser(host))
new_vols.append("%s:%s:%s" % (host, container, mode))
continue
elif len(vol.split(':')) == 2:
parts = vol.split(':')
if not is_volume_permissions(parts[1]) and re.match(r'[.~]', parts[0]):
host = os.path.abspath(os.path.expanduser(parts[0]))
new_vols.append("%s:%s:rw" % (host, parts[1]))
continue
new_vols.append(vol)
return new_vols
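    # For illustration (hypothetical entry): a volume '~/data:/data' is expanded
    # to '<home>/data:/data:rw', so home-relative and relative host paths compare
    # consistently with the paths reported by container inspection.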
def _get_mounts(self):
'''
Return a list of container mounts.
:return:
'''
result = []
if self.volumes:
for vol in self.volumes:
if ':' in vol:
if len(vol.split(':')) == 3:
host, container, dummy = vol.split(':')
result.append(container)
continue
if len(vol.split(':')) == 2:
parts = vol.split(':')
if not is_volume_permissions(parts[1]):
result.append(parts[1])
continue
result.append(vol)
self.log("mounts:")
self.log(result, pretty_print=True)
return result
def _host_config(self):
'''
Returns parameters used to create a HostConfig object
'''
host_config_params = dict(
port_bindings='published_ports',
publish_all_ports='publish_all_ports',
links='links',
privileged='privileged',
dns='dns_servers',
dns_opt='dns_opts',
dns_search='dns_search_domains',
binds='volume_binds',
volumes_from='volumes_from',
network_mode='network_mode',
userns_mode='userns_mode',
cap_add='capabilities',
cap_drop='cap_drop',
extra_hosts='etc_hosts',
read_only='read_only',
ipc_mode='ipc_mode',
security_opt='security_opts',
ulimits='ulimits',
sysctls='sysctls',
log_config='log_config',
mem_limit='memory',
memswap_limit='memory_swap',
mem_swappiness='memory_swappiness',
oom_score_adj='oom_score_adj',
oom_kill_disable='oom_killer',
shm_size='shm_size',
group_add='groups',
devices='devices',
pid_mode='pid_mode',
tmpfs='tmpfs',
init='init',
uts_mode='uts',
runtime='runtime',
auto_remove='auto_remove',
device_read_bps='device_read_bps',
device_write_bps='device_write_bps',
device_read_iops='device_read_iops',
device_write_iops='device_write_iops',
pids_limit='pids_limit',
)
if self.client.docker_py_version >= LooseVersion('1.9') and self.client.docker_api_version >= LooseVersion('1.22'):
# blkio_weight can always be updated, but can only be set on creation
# when Docker SDK for Python and Docker API are new enough
host_config_params['blkio_weight'] = 'blkio_weight'
if self.client.docker_py_version >= LooseVersion('3.0'):
# cpu_shares and volume_driver moved to create_host_config in > 3
host_config_params['cpu_shares'] = 'cpu_shares'
host_config_params['volume_driver'] = 'volume_driver'
params = dict()
for key, value in host_config_params.items():
if getattr(self, value, None) is not None:
if self.client.option_minimal_versions[value]['supported']:
params[key] = getattr(self, value)
if self.restart_policy:
params['restart_policy'] = dict(Name=self.restart_policy,
MaximumRetryCount=self.restart_retries)
return self.client.create_host_config(**params)
@property
def default_host_ip(self):
ip = '0.0.0.0'
if not self.networks:
return ip
for net in self.networks:
if net.get('name'):
try:
network = self.client.inspect_network(net['name'])
if network.get('Driver') == 'bridge' and \
network.get('Options', {}).get('com.docker.network.bridge.host_binding_ipv4'):
ip = network['Options']['com.docker.network.bridge.host_binding_ipv4']
break
except NotFound as e:
self.client.fail(
"Cannot inspect the network '{0}' to determine the default IP: {1}".format(net['name'], e),
exception=traceback.format_exc()
)
return ip
def _parse_publish_ports(self):
'''
Parse ports from docker CLI syntax
'''
if self.published_ports is None:
return None
if 'all' in self.published_ports:
return 'all'
default_ip = self.default_host_ip
binds = {}
for port in self.published_ports:
parts = split_colon_ipv6(str(port), self.client)
container_port = parts[-1]
protocol = ''
if '/' in container_port:
container_port, protocol = parts[-1].split('/')
container_ports = parse_port_range(container_port, self.client)
p_len = len(parts)
if p_len == 1:
port_binds = len(container_ports) * [(default_ip,)]
elif p_len == 2:
port_binds = [(default_ip, port) for port in parse_port_range(parts[0], self.client)]
elif p_len == 3:
# We only allow IPv4 and IPv6 addresses for the bind address
if not re.match(r'^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$', parts[0]) and not re.match(r'^\[[0-9a-fA-F:]+\]$', parts[0]):
self.fail(('Bind addresses for published ports must be IPv4 or IPv6 addresses, not hostnames. '
'Use the dig lookup to resolve hostnames. (Found hostname: {0})').format(parts[0]))
if parts[1]:
port_binds = [(parts[0], port) for port in parse_port_range(parts[1], self.client)]
else:
port_binds = len(container_ports) * [(parts[0],)]
for bind, container_port in zip(port_binds, container_ports):
idx = '{0}/{1}'.format(container_port, protocol) if protocol else container_port
if idx in binds:
old_bind = binds[idx]
if isinstance(old_bind, list):
old_bind.append(bind)
else:
binds[idx] = [old_bind, bind]
else:
binds[idx] = bind
return binds
def _get_volume_binds(self, volumes):
'''
Extract host bindings, if any, from list of volume mapping strings.
:return: dictionary of bind mappings
'''
result = dict()
if volumes:
for vol in volumes:
host = None
if ':' in vol:
if len(vol.split(':')) == 3:
host, container, mode = vol.split(':')
if not is_volume_permissions(mode):
self.fail('Found invalid volumes mode: {0}'.format(mode))
if len(vol.split(':')) == 2:
parts = vol.split(':')
if not is_volume_permissions(parts[1]):
host, container, mode = (vol.split(':') + ['rw'])
if host is not None:
result[host] = dict(
bind=container,
mode=mode
)
return result
def _parse_exposed_ports(self, published_ports):
'''
Parse exposed ports from docker CLI-style ports syntax.
'''
exposed = []
if self.exposed_ports:
for port in self.exposed_ports:
port = str(port).strip()
protocol = 'tcp'
match = re.search(r'(/.+$)', port)
if match:
protocol = match.group(1).replace('/', '')
port = re.sub(r'/.+$', '', port)
exposed.append((port, protocol))
if published_ports:
# Any published port should also be exposed
for publish_port in published_ports:
match = False
if isinstance(publish_port, string_types) and '/' in publish_port:
port, protocol = publish_port.split('/')
port = int(port)
else:
protocol = 'tcp'
port = int(publish_port)
for exposed_port in exposed:
if exposed_port[1] != protocol:
continue
if isinstance(exposed_port[0], string_types) and '-' in exposed_port[0]:
start_port, end_port = exposed_port[0].split('-')
if int(start_port) <= port <= int(end_port):
match = True
elif int(exposed_port[0]) == port:
match = True
if not match:
exposed.append((port, protocol))
return exposed
@staticmethod
def _parse_links(links):
'''
Turn links into a list of (name, alias) tuples
'''
if links is None:
return None
result = []
for link in links:
parsed_link = link.split(':', 1)
if len(parsed_link) == 2:
result.append((parsed_link[0], parsed_link[1]))
else:
result.append((parsed_link[0], parsed_link[0]))
return result
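# Illustrative example (comment only, not executed) of the shape produced
# by _parse_links: links=['db:database', 'cache'] yields
# [('db', 'database'), ('cache', 'cache')].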
def _parse_ulimits(self):
'''
Turn ulimits into an array of Ulimit objects
'''
if self.ulimits is None:
return None
results = []
for limit in self.ulimits:
limits = dict()
pieces = limit.split(':')
if len(pieces) >= 2:
limits['name'] = pieces[0]
limits['soft'] = int(pieces[1])
limits['hard'] = int(pieces[1])
if len(pieces) == 3:
limits['hard'] = int(pieces[2])
try:
results.append(Ulimit(**limits))
except ValueError as exc:
self.fail("Error parsing ulimits value %s - %s" % (limit, exc))
return results
def _parse_sysctls(self):
'''
Return the sysctls dict as-is; values are passed through unchanged
'''
return self.sysctls
def _parse_log_config(self):
'''
Create a LogConfig object
'''
if self.log_driver is None:
return None
options = dict(
Type=self.log_driver,
Config=dict()
)
if self.log_options is not None:
for k, v in self.log_options.items():
if not isinstance(v, string_types):
self.client.module.warn(
"Non-string value found for log_options option '%s'. The value is automatically converted to '%s'. "
"If this is not correct, or you want to avoid such warnings, please quote the value." % (k, str(v))
)
v = str(v)
self.log_options[k] = v
options['Config'][k] = v
try:
return LogConfig(**options)
except ValueError as exc:
self.fail('Error parsing logging options - %s' % (exc))
def _parse_tmpfs(self):
'''
Turn the tmpfs list into a dict mapping mount points to their options
'''
result = dict()
if self.tmpfs is None:
return result
for tmpfs_spec in self.tmpfs:
split_spec = tmpfs_spec.split(":", 1)
if len(split_spec) > 1:
result[split_spec[0]] = split_spec[1]
else:
result[split_spec[0]] = ""
return result
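# Illustrative example (comment only, not executed) of the mapping produced
# by _parse_tmpfs: tmpfs=['/run:rw,size=64m', '/tmp'] yields
# {'/run': 'rw,size=64m', '/tmp': ''}.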
def _get_environment(self):
"""
If an environment file is combined with explicit environment variables,
the explicit environment variables take precedence.
"""
final_env = {}
if self.env_file:
parsed_env_file = utils.parse_env_file(self.env_file)
for name, value in parsed_env_file.items():
final_env[name] = str(value)
if self.env:
for name, value in self.env.items():
if not isinstance(value, string_types):
self.fail("Non-string value found for env option. Ambiguous env options must be "
"wrapped in quotes to avoid them being interpreted. Key: %s" % (name, ))
final_env[name] = str(value)
return final_env
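# Illustrative example (comment only, not executed) of the precedence
# described above: an env_file providing {'A': '1'} combined with
# env={'A': '2', 'B': '3'} yields {'A': '2', 'B': '3'}.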
def _get_network_id(self, network_name):
network_id = None
try:
for network in self.client.networks(names=[network_name]):
if network['Name'] == network_name:
network_id = network['Id']
break
except Exception as exc:
self.fail("Error getting network id for %s - %s" % (network_name, str(exc)))
return network_id
def _process_rate_bps(self, option):
"""
Format device_read_bps and device_write_bps option
"""
devices_list = []
for v in getattr(self, option):
device_dict = dict((x.title(), y) for x, y in v.items())
device_dict['Rate'] = human_to_bytes(device_dict['Rate'])
devices_list.append(device_dict)
setattr(self, option, devices_list)
def _process_rate_iops(self, option):
"""
Format device_read_iops and device_write_iops option
"""
devices_list = []
for v in getattr(self, option):
device_dict = dict((x.title(), y) for x, y in v.items())
devices_list.append(device_dict)
setattr(self, option, devices_list)
def _replace_container_names(self, mode):
"""
Parse IPC and PID modes. If they contain a container name, replace
with the container's ID.
"""
if mode is None or not mode.startswith('container:'):
return mode
container_name = mode[len('container:'):]
# Try to inspect container to see whether this is an ID or a
# name (and in the latter case, retrieve its ID)
container = self.client.get_container(container_name)
if container is None:
# If we can't find the container, issue a warning and continue with
# what the user specified.
self.client.module.warn('Cannot find a container with name or ID "{0}"'.format(container_name))
return mode
return 'container:{0}'.format(container['Id'])
class Container(DockerBaseClass):
def __init__(self, container, parameters):
super(Container, self).__init__()
self.raw = container
self.Id = None
self.container = container
if container:
self.Id = container['Id']
self.Image = container['Image']
self.log(self.container, pretty_print=True)
self.parameters = parameters
self.parameters.expected_links = None
self.parameters.expected_ports = None
self.parameters.expected_exposed = None
self.parameters.expected_volumes = None
self.parameters.expected_ulimits = None
self.parameters.expected_sysctls = None
self.parameters.expected_etc_hosts = None
self.parameters.expected_env = None
self.parameters_map = dict()
self.parameters_map['expected_links'] = 'links'
self.parameters_map['expected_ports'] = 'expected_ports'
self.parameters_map['expected_exposed'] = 'exposed_ports'
self.parameters_map['expected_volumes'] = 'volumes'
self.parameters_map['expected_ulimits'] = 'ulimits'
self.parameters_map['expected_sysctls'] = 'sysctls'
self.parameters_map['expected_etc_hosts'] = 'etc_hosts'
self.parameters_map['expected_env'] = 'env'
self.parameters_map['expected_entrypoint'] = 'entrypoint'
self.parameters_map['expected_binds'] = 'volumes'
self.parameters_map['expected_cmd'] = 'command'
self.parameters_map['expected_devices'] = 'devices'
self.parameters_map['expected_healthcheck'] = 'healthcheck'
def fail(self, msg):
self.parameters.client.fail(msg)
@property
def exists(self):
return bool(self.container)
@property
def running(self):
if self.container and self.container.get('State'):
if self.container['State'].get('Running') and not self.container['State'].get('Ghost', False):
return True
return False
@property
def paused(self):
if self.container and self.container.get('State'):
return self.container['State'].get('Paused', False)
return False
def _compare(self, a, b, compare):
'''
Compare values a and b as described in compare.
'''
return compare_generic(a, b, compare['comparison'], compare['type'])
def has_different_configuration(self, image):
'''
Diff parameters vs existing container config. Returns tuple: (True | False, List of differences)
'''
self.log('Starting has_different_configuration')
self.parameters.expected_entrypoint = self._get_expected_entrypoint()
self.parameters.expected_links = self._get_expected_links()
self.parameters.expected_ports = self._get_expected_ports()
self.parameters.expected_exposed = self._get_expected_exposed(image)
self.parameters.expected_volumes = self._get_expected_volumes(image)
self.parameters.expected_binds = self._get_expected_binds(image)
self.parameters.expected_ulimits = self._get_expected_ulimits(self.parameters.ulimits)
self.parameters.expected_sysctls = self._get_expected_sysctls(self.parameters.sysctls)
self.parameters.expected_etc_hosts = self._convert_simple_dict_to_list('etc_hosts')
self.parameters.expected_env = self._get_expected_env(image)
self.parameters.expected_cmd = self._get_expected_cmd()
self.parameters.expected_devices = self._get_expected_devices()
self.parameters.expected_healthcheck = self._get_expected_healthcheck()
if not self.container.get('HostConfig'):
self.fail("has_different_configuration: Error parsing container properties. HostConfig missing.")
if not self.container.get('Config'):
self.fail("has_different_configuration: Error parsing container properties. Config missing.")
if not self.container.get('NetworkSettings'):
self.fail("has_different_configuration: Error parsing container properties. NetworkSettings missing.")
host_config = self.container['HostConfig']
log_config = host_config.get('LogConfig', dict())
restart_policy = host_config.get('RestartPolicy', dict())
config = self.container['Config']
network = self.container['NetworkSettings']
# The previous version of the docker module ignored the detach state by
# assuming if the container was running, it must have been detached.
detach = not (config.get('AttachStderr') and config.get('AttachStdout'))
# "ExposedPorts": null returns None type & causes AttributeError - PR #5517
if config.get('ExposedPorts') is not None:
expected_exposed = [self._normalize_port(p) for p in config.get('ExposedPorts', dict()).keys()]
else:
expected_exposed = []
# Map parameters to container inspect results
config_mapping = dict(
expected_cmd=config.get('Cmd'),
domainname=config.get('Domainname'),
hostname=config.get('Hostname'),
user=config.get('User'),
detach=detach,
init=host_config.get('Init'),
interactive=config.get('OpenStdin'),
capabilities=host_config.get('CapAdd'),
cap_drop=host_config.get('CapDrop'),
expected_devices=host_config.get('Devices'),
dns_servers=host_config.get('Dns'),
dns_opts=host_config.get('DnsOptions'),
dns_search_domains=host_config.get('DnsSearch'),
expected_env=(config.get('Env') or []),
expected_entrypoint=config.get('Entrypoint'),
expected_etc_hosts=host_config['ExtraHosts'],
expected_exposed=expected_exposed,
groups=host_config.get('GroupAdd'),
ipc_mode=host_config.get("IpcMode"),
labels=config.get('Labels'),
expected_links=host_config.get('Links'),
mac_address=network.get('MacAddress'),
memory_swappiness=host_config.get('MemorySwappiness'),
network_mode=host_config.get('NetworkMode'),
userns_mode=host_config.get('UsernsMode'),
oom_killer=host_config.get('OomKillDisable'),
oom_score_adj=host_config.get('OomScoreAdj'),
pid_mode=host_config.get('PidMode'),
privileged=host_config.get('Privileged'),
expected_ports=host_config.get('PortBindings'),
read_only=host_config.get('ReadonlyRootfs'),
restart_policy=restart_policy.get('Name'),
runtime=host_config.get('Runtime'),
shm_size=host_config.get('ShmSize'),
security_opts=host_config.get("SecurityOpt"),
stop_signal=config.get("StopSignal"),
tmpfs=host_config.get('Tmpfs'),
tty=config.get('Tty'),
expected_ulimits=host_config.get('Ulimits'),
expected_sysctls=host_config.get('Sysctls'),
uts=host_config.get('UTSMode'),
expected_volumes=config.get('Volumes'),
expected_binds=host_config.get('Binds'),
volume_driver=host_config.get('VolumeDriver'),
volumes_from=host_config.get('VolumesFrom'),
working_dir=config.get('WorkingDir'),
publish_all_ports=host_config.get('PublishAllPorts'),
expected_healthcheck=config.get('Healthcheck'),
disable_healthcheck=(not config.get('Healthcheck') or config.get('Healthcheck').get('Test') == ['NONE']),
device_read_bps=host_config.get('BlkioDeviceReadBps'),
device_write_bps=host_config.get('BlkioDeviceWriteBps'),
device_read_iops=host_config.get('BlkioDeviceReadIOps'),
device_write_iops=host_config.get('BlkioDeviceWriteIOps'),
pids_limit=host_config.get('PidsLimit'),
)
# Options which don't make sense without their accompanying option
if self.parameters.restart_policy:
config_mapping['restart_retries'] = restart_policy.get('MaximumRetryCount')
if self.parameters.log_driver:
config_mapping['log_driver'] = log_config.get('Type')
config_mapping['log_options'] = log_config.get('Config')
if self.parameters.client.option_minimal_versions['auto_remove']['supported']:
# auto_remove is only supported in Docker SDK for Python >= 2.0.0; unfortunately
# it has a default value, that's why we have to jump through the hoops here
config_mapping['auto_remove'] = host_config.get('AutoRemove')
if self.parameters.client.option_minimal_versions['stop_timeout']['supported']:
# stop_timeout is only supported in Docker SDK for Python >= 2.1. Note that
# stop_timeout has a hybrid role, in that it used to be something only used
# for stopping containers, and is now also used as a container property.
# That's why it needs special handling here.
config_mapping['stop_timeout'] = config.get('StopTimeout')
if self.parameters.client.docker_api_version < LooseVersion('1.22'):
# For docker API < 1.22, update_container() is not supported. Thus
# we need to handle all limits which are usually handled by
# update_container() as configuration changes which require a container
# restart.
config_mapping.update(dict(
blkio_weight=host_config.get('BlkioWeight'),
cpu_period=host_config.get('CpuPeriod'),
cpu_quota=host_config.get('CpuQuota'),
cpu_shares=host_config.get('CpuShares'),
cpuset_cpus=host_config.get('CpusetCpus'),
cpuset_mems=host_config.get('CpusetMems'),
kernel_memory=host_config.get("KernelMemory"),
memory=host_config.get('Memory'),
memory_reservation=host_config.get('MemoryReservation'),
memory_swap=host_config.get('MemorySwap'),
))
differences = DifferenceTracker()
for key, value in config_mapping.items():
minimal_version = self.parameters.client.option_minimal_versions.get(key, {})
if not minimal_version.get('supported', True):
continue
compare = self.parameters.client.comparisons[self.parameters_map.get(key, key)]
self.log('check differences %s %s vs %s (%s)' % (key, getattr(self.parameters, key), str(value), compare))
if getattr(self.parameters, key, None) is not None:
match = self._compare(getattr(self.parameters, key), value, compare)
if not match:
# no match. record the differences
p = getattr(self.parameters, key)
c = value
if compare['type'] == 'set':
# Since the order does not matter, sort so that the diff output is better.
if p is not None:
p = sorted(p)
if c is not None:
c = sorted(c)
elif compare['type'] == 'set(dict)':
# Since the order does not matter, sort so that the diff output is better.
# We sort the list of dictionaries by using the sorted items of a dict as its key.
if p is not None:
p = sorted(p, key=lambda x: sorted(x.items()))
if c is not None:
c = sorted(c, key=lambda x: sorted(x.items()))
differences.add(key, parameter=p, active=c)
has_differences = not differences.empty
return has_differences, differences
def has_different_resource_limits(self):
'''
Diff parameters and container resource limits
'''
if not self.container.get('HostConfig'):
self.fail("has_different_resource_limits: Error parsing container properties. HostConfig missing.")
if self.parameters.client.docker_api_version < LooseVersion('1.22'):
# update_container() call not supported
return False, []
host_config = self.container['HostConfig']
config_mapping = dict(
blkio_weight=host_config.get('BlkioWeight'),
cpu_period=host_config.get('CpuPeriod'),
cpu_quota=host_config.get('CpuQuota'),
cpu_shares=host_config.get('CpuShares'),
cpuset_cpus=host_config.get('CpusetCpus'),
cpuset_mems=host_config.get('CpusetMems'),
kernel_memory=host_config.get("KernelMemory"),
memory=host_config.get('Memory'),
memory_reservation=host_config.get('MemoryReservation'),
memory_swap=host_config.get('MemorySwap'),
)
differences = DifferenceTracker()
for key, value in config_mapping.items():
if getattr(self.parameters, key, None):
compare = self.parameters.client.comparisons[self.parameters_map.get(key, key)]
match = self._compare(getattr(self.parameters, key), value, compare)
if not match:
# no match. record the differences
differences.add(key, parameter=getattr(self.parameters, key), active=value)
different = not differences.empty
return different, differences
def has_network_differences(self):
'''
Check if the container is connected to requested networks with expected options: links, aliases, ipv4, ipv6
'''
different = False
differences = []
if not self.parameters.networks:
return different, differences
if not self.container.get('NetworkSettings'):
self.fail("has_network_differences: Error parsing container properties. NetworkSettings missing.")
connected_networks = self.container['NetworkSettings']['Networks']
for network in self.parameters.networks:
if connected_networks.get(network['name'], None) is None:
different = True
differences.append(dict(
parameter=network,
container=None
))
else:
diff = False
if network.get('ipv4_address') and network['ipv4_address'] != connected_networks[network['name']].get('IPAddress'):
diff = True
if network.get('ipv6_address') and network['ipv6_address'] != connected_networks[network['name']].get('GlobalIPv6Address'):
diff = True
if network.get('aliases'):
if not compare_generic(network['aliases'], connected_networks[network['name']].get('Aliases'), 'allow_more_present', 'set'):
diff = True
if network.get('links'):
expected_links = []
for link, alias in network['links']:
expected_links.append("%s:%s" % (link, alias))
if not compare_generic(expected_links, connected_networks[network['name']].get('Links'), 'allow_more_present', 'set'):
diff = True
if diff:
different = True
differences.append(dict(
parameter=network,
container=dict(
name=network['name'],
ipv4_address=connected_networks[network['name']].get('IPAddress'),
ipv6_address=connected_networks[network['name']].get('GlobalIPv6Address'),
aliases=connected_networks[network['name']].get('Aliases'),
links=connected_networks[network['name']].get('Links')
)
))
return different, differences
def has_extra_networks(self):
'''
Check if the container is connected to non-requested networks
'''
extra_networks = []
extra = False
if not self.container.get('NetworkSettings'):
self.fail("has_extra_networks: Error parsing container properties. NetworkSettings missing.")
connected_networks = self.container['NetworkSettings'].get('Networks')
if connected_networks:
for network, network_config in connected_networks.items():
keep = False
if self.parameters.networks:
for expected_network in self.parameters.networks:
if expected_network['name'] == network:
keep = True
if not keep:
extra = True
extra_networks.append(dict(name=network, id=network_config['NetworkID']))
return extra, extra_networks
def _get_expected_devices(self):
if not self.parameters.devices:
return None
expected_devices = []
for device in self.parameters.devices:
parts = device.split(':')
if len(parts) == 1:
expected_devices.append(
dict(
CgroupPermissions='rwm',
PathInContainer=parts[0],
PathOnHost=parts[0]
))
elif len(parts) == 2:
expected_devices.append(
dict(
CgroupPermissions='rwm',
PathInContainer=parts[1],
PathOnHost=parts[0]
)
)
else:
expected_devices.append(
dict(
CgroupPermissions=parts[2],
PathInContainer=parts[1],
PathOnHost=parts[0]
))
return expected_devices
def _get_expected_entrypoint(self):
if not self.parameters.entrypoint:
return None
return shlex.split(self.parameters.entrypoint)
def _get_expected_ports(self):
if not self.parameters.published_ports:
return None
expected_bound_ports = {}
for container_port, config in self.parameters.published_ports.items():
if isinstance(container_port, int):
container_port = "%s/tcp" % container_port
if len(config) == 1:
if isinstance(config[0], int):
expected_bound_ports[container_port] = [{'HostIp': "0.0.0.0", 'HostPort': config[0]}]
else:
expected_bound_ports[container_port] = [{'HostIp': config[0], 'HostPort': ""}]
elif isinstance(config[0], tuple):
expected_bound_ports[container_port] = []
for host_ip, host_port in config:
expected_bound_ports[container_port].append({'HostIp': host_ip, 'HostPort': str(host_port)})
else:
expected_bound_ports[container_port] = [{'HostIp': config[0], 'HostPort': str(config[1])}]
return expected_bound_ports
def _get_expected_links(self):
if self.parameters.links is None:
return None
self.log('parameter links:')
self.log(self.parameters.links, pretty_print=True)
exp_links = []
for link, alias in self.parameters.links:
exp_links.append("/%s:%s/%s" % (link, ('/' + self.parameters.name), alias))
return exp_links
def _get_expected_binds(self, image):
self.log('_get_expected_binds')
image_vols = []
if image:
image_vols = self._get_image_binds(image[self.parameters.client.image_inspect_source].get('Volumes'))
param_vols = []
if self.parameters.volumes:
for vol in self.parameters.volumes:
host = None
if ':' in vol:
if len(vol.split(':')) == 3:
host, container, mode = vol.split(':')
if not is_volume_permissions(mode):
self.fail('Found invalid volumes mode: {0}'.format(mode))
if len(vol.split(':')) == 2:
parts = vol.split(':')
if not is_volume_permissions(parts[1]):
host, container, mode = vol.split(':') + ['rw']
if host:
param_vols.append("%s:%s:%s" % (host, container, mode))
result = list(set(image_vols + param_vols))
self.log("expected_binds:")
self.log(result, pretty_print=True)
return result
def _get_image_binds(self, volumes):
'''
Convert array of binds to array of strings with format host_path:container_path:mode
:param volumes: array of bind dicts
:return: array of strings
'''
results = []
if isinstance(volumes, dict):
results += self._get_bind_from_dict(volumes)
elif isinstance(volumes, list):
for vol in volumes:
results += self._get_bind_from_dict(vol)
return results
@staticmethod
def _get_bind_from_dict(volume_dict):
results = []
if volume_dict:
for host_path, config in volume_dict.items():
if isinstance(config, dict) and config.get('bind'):
container_path = config.get('bind')
mode = config.get('mode', 'rw')
results.append("%s:%s:%s" % (host_path, container_path, mode))
return results
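# Illustrative example (comment only, not executed) of _get_bind_from_dict:
# {'/data': {'bind': '/mnt/data', 'mode': 'ro'}} yields
# ['/data:/mnt/data:ro'].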
def _get_expected_volumes(self, image):
self.log('_get_expected_volumes')
expected_vols = dict()
if image and image[self.parameters.client.image_inspect_source].get('Volumes'):
expected_vols.update(image[self.parameters.client.image_inspect_source].get('Volumes'))
if self.parameters.volumes:
for vol in self.parameters.volumes:
container = None
if ':' in vol:
if len(vol.split(':')) == 3:
host, container, mode = vol.split(':')
if not is_volume_permissions(mode):
self.fail('Found invalid volumes mode: {0}'.format(mode))
if len(vol.split(':')) == 2:
parts = vol.split(':')
if not is_volume_permissions(parts[1]):
host, container, mode = vol.split(':') + ['rw']
new_vol = dict()
if container:
new_vol[container] = dict()
else:
new_vol[vol] = dict()
expected_vols.update(new_vol)
if not expected_vols:
expected_vols = None
self.log("expected_volumes:")
self.log(expected_vols, pretty_print=True)
return expected_vols
def _get_expected_env(self, image):
self.log('_get_expected_env')
expected_env = dict()
if image and image[self.parameters.client.image_inspect_source].get('Env'):
for env_var in image[self.parameters.client.image_inspect_source]['Env']:
parts = env_var.split('=', 1)
expected_env[parts[0]] = parts[1]
if self.parameters.env:
expected_env.update(self.parameters.env)
param_env = []
for key, value in expected_env.items():
param_env.append("%s=%s" % (key, value))
return param_env
def _get_expected_exposed(self, image):
self.log('_get_expected_exposed')
image_ports = []
if image:
image_exposed_ports = image[self.parameters.client.image_inspect_source].get('ExposedPorts') or {}
image_ports = [self._normalize_port(p) for p in image_exposed_ports.keys()]
param_ports = []
if self.parameters.ports:
param_ports = [str(p[0]) + '/' + p[1] for p in self.parameters.ports]
result = list(set(image_ports + param_ports))
self.log(result, pretty_print=True)
return result
def _get_expected_ulimits(self, config_ulimits):
self.log('_get_expected_ulimits')
if config_ulimits is None:
return None
results = []
for limit in config_ulimits:
results.append(dict(
Name=limit.name,
Soft=limit.soft,
Hard=limit.hard
))
return results
def _get_expected_sysctls(self, config_sysctls):
self.log('_get_expected_sysctls')
if config_sysctls is None:
return None
result = dict()
for key, value in config_sysctls.items():
result[key] = str(value)
return result
def _get_expected_cmd(self):
self.log('_get_expected_cmd')
if not self.parameters.command:
return None
return shlex.split(self.parameters.command)
def _convert_simple_dict_to_list(self, param_name, join_with=':'):
if getattr(self.parameters, param_name, None) is None:
return None
results = []
for key, value in getattr(self.parameters, param_name).items():
results.append("%s%s%s" % (key, join_with, value))
return results
def _normalize_port(self, port):
if '/' not in port:
return port + '/tcp'
return port
def _get_expected_healthcheck(self):
self.log('_get_expected_healthcheck')
expected_healthcheck = dict()
if self.parameters.healthcheck:
expected_healthcheck.update([(k.title().replace("_", ""), v)
for k, v in self.parameters.healthcheck.items()])
return expected_healthcheck
class ContainerManager(DockerBaseClass):
'''
Perform container management tasks
'''
def __init__(self, client):
super(ContainerManager, self).__init__()
if client.module.params.get('log_options') and not client.module.params.get('log_driver'):
client.module.warn('log_options is ignored when log_driver is not specified')
if client.module.params.get('healthcheck') and not client.module.params.get('healthcheck').get('test'):
client.module.warn('healthcheck is ignored when test is not specified')
if client.module.params.get('restart_retries') is not None and not client.module.params.get('restart_policy'):
client.module.warn('restart_retries is ignored when restart_policy is not specified')
self.client = client
self.parameters = TaskParameters(client)
self.check_mode = self.client.check_mode
self.results = {'changed': False, 'actions': []}
self.diff = {}
self.diff_tracker = DifferenceTracker()
self.facts = {}
state = self.parameters.state
if state in ('stopped', 'started', 'present'):
self.present(state)
elif state == 'absent':
self.absent()
if not self.check_mode and not self.parameters.debug:
self.results.pop('actions')
if self.client.module._diff or self.parameters.debug:
self.diff['before'], self.diff['after'] = self.diff_tracker.get_before_after()
self.results['diff'] = self.diff
if self.facts:
self.results['ansible_facts'] = {'docker_container': self.facts}
self.results['container'] = self.facts
def present(self, state):
container = self._get_container(self.parameters.name)
was_running = container.running
was_paused = container.paused
container_created = False
# If the image parameter was passed, handle the image version
# comparison. Otherwise, if the container already exists and needs to
# be restarted, fall back to the existing container's image ID.
image = self._get_image()
self.log(image, pretty_print=True)
if not container.exists:
# New container
self.log('No container found')
if not self.parameters.image:
self.fail('Cannot create container when image is not specified!')
self.diff_tracker.add('exists', parameter=True, active=False)
new_container = self.container_create(self.parameters.image, self.parameters.create_parameters)
if new_container:
container = new_container
container_created = True
else:
# Existing container
different, differences = container.has_different_configuration(image)
image_different = False
if self.parameters.comparisons['image']['comparison'] == 'strict':
image_different = self._image_is_different(image, container)
if image_different or different or self.parameters.recreate:
self.diff_tracker.merge(differences)
self.diff['differences'] = differences.get_legacy_docker_container_diffs()
if image_different:
self.diff['image_different'] = True
self.log("differences")
self.log(differences.get_legacy_docker_container_diffs(), pretty_print=True)
image_to_use = self.parameters.image
if not image_to_use and container and container.Image:
image_to_use = container.Image
if not image_to_use:
self.fail('Cannot recreate container when image is not specified or cannot be extracted from current container!')
if container.running:
self.container_stop(container.Id)
self.container_remove(container.Id)
new_container = self.container_create(image_to_use, self.parameters.create_parameters)
if new_container:
container = new_container
container_created = True
if container and container.exists:
container = self.update_limits(container)
container = self.update_networks(container, container_created)
if state == 'started' and not container.running:
self.diff_tracker.add('running', parameter=True, active=was_running)
container = self.container_start(container.Id)
elif state == 'started' and self.parameters.restart:
self.diff_tracker.add('running', parameter=True, active=was_running)
self.diff_tracker.add('restarted', parameter=True, active=False)
container = self.container_restart(container.Id)
elif state == 'stopped' and container.running:
self.diff_tracker.add('running', parameter=False, active=was_running)
self.container_stop(container.Id)
container = self._get_container(container.Id)
if state == 'started' and container.paused != self.parameters.paused:
self.diff_tracker.add('paused', parameter=self.parameters.paused, active=was_paused)
if not self.check_mode:
try:
if self.parameters.paused:
self.client.pause(container=container.Id)
else:
self.client.unpause(container=container.Id)
except Exception as exc:
self.fail("Error %s container %s: %s" % (
"pausing" if self.parameters.paused else "unpausing", container.Id, str(exc)
))
container = self._get_container(container.Id)
self.results['changed'] = True
self.results['actions'].append(dict(set_paused=self.parameters.paused))
self.facts = container.raw
def absent(self):
container = self._get_container(self.parameters.name)
if container.exists:
if container.running:
self.diff_tracker.add('running', parameter=False, active=True)
self.container_stop(container.Id)
self.diff_tracker.add('exists', parameter=False, active=True)
self.container_remove(container.Id)
def fail(self, msg, **kwargs):
self.client.fail(msg, **kwargs)
def _output_logs(self, msg):
self.client.module.log(msg=msg)
def _get_container(self, container):
'''
Expects container ID or Name. Returns a container object
'''
return Container(self.client.get_container(container), self.parameters)
def _get_image(self):
if not self.parameters.image:
self.log('No image specified')
return None
if is_image_name_id(self.parameters.image):
image = self.client.find_image_by_id(self.parameters.image)
else:
repository, tag = utils.parse_repository_tag(self.parameters.image)
if not tag:
tag = "latest"
image = self.client.find_image(repository, tag)
if not self.check_mode:
if not image or self.parameters.pull:
self.log("Pull the image.")
image, alreadyToLatest = self.client.pull_image(repository, tag)
if alreadyToLatest:
self.results['changed'] = False
else:
self.results['changed'] = True
self.results['actions'].append(dict(pulled_image="%s:%s" % (repository, tag)))
self.log("image")
self.log(image, pretty_print=True)
return image
def _image_is_different(self, image, container):
if image and image.get('Id'):
if container and container.Image:
if image.get('Id') != container.Image:
self.diff_tracker.add('image', parameter=image.get('Id'), active=container.Image)
return True
return False
def update_limits(self, container):
limits_differ, different_limits = container.has_different_resource_limits()
if limits_differ:
self.log("limit differences:")
self.log(different_limits.get_legacy_docker_container_diffs(), pretty_print=True)
self.diff_tracker.merge(different_limits)
if limits_differ and not self.check_mode:
self.container_update(container.Id, self.parameters.update_parameters)
return self._get_container(container.Id)
return container
def update_networks(self, container, container_created):
updated_container = container
if self.parameters.comparisons['networks']['comparison'] != 'ignore' or container_created:
has_network_differences, network_differences = container.has_network_differences()
if has_network_differences:
if self.diff.get('differences'):
self.diff['differences'].append(dict(network_differences=network_differences))
else:
self.diff['differences'] = [dict(network_differences=network_differences)]
for netdiff in network_differences:
self.diff_tracker.add(
'network.{0}'.format(netdiff['parameter']['name']),
parameter=netdiff['parameter'],
active=netdiff['container']
)
self.results['changed'] = True
updated_container = self._add_networks(container, network_differences)
if (self.parameters.comparisons['networks']['comparison'] == 'strict' and self.parameters.networks is not None) or self.parameters.purge_networks:
has_extra_networks, extra_networks = container.has_extra_networks()
if has_extra_networks:
if self.diff.get('differences'):
self.diff['differences'].append(dict(purge_networks=extra_networks))
else:
self.diff['differences'] = [dict(purge_networks=extra_networks)]
for extra_network in extra_networks:
self.diff_tracker.add(
'network.{0}'.format(extra_network['name']),
active=extra_network
)
self.results['changed'] = True
updated_container = self._purge_networks(container, extra_networks)
return updated_container
def _add_networks(self, container, differences):
for diff in differences:
# remove the container from the network, if connected
if diff.get('container'):
self.results['actions'].append(dict(removed_from_network=diff['parameter']['name']))
if not self.check_mode:
try:
self.client.disconnect_container_from_network(container.Id, diff['parameter']['id'])
except Exception as exc:
self.fail("Error disconnecting container from network %s - %s" % (diff['parameter']['name'],
str(exc)))
# connect to the network
params = dict()
for para in ('ipv4_address', 'ipv6_address', 'links', 'aliases'):
if diff['parameter'].get(para):
params[para] = diff['parameter'][para]
self.results['actions'].append(dict(added_to_network=diff['parameter']['name'], network_parameters=params))
if not self.check_mode:
try:
self.log("Connecting container to network %s" % diff['parameter']['id'])
self.log(params, pretty_print=True)
self.client.connect_container_to_network(container.Id, diff['parameter']['id'], **params)
except Exception as exc:
self.fail("Error connecting container to network %s - %s" % (diff['parameter']['name'], str(exc)))
return self._get_container(container.Id)
def _purge_networks(self, container, networks):
for network in networks:
self.results['actions'].append(dict(removed_from_network=network['name']))
if not self.check_mode:
try:
self.client.disconnect_container_from_network(container.Id, network['name'])
except Exception as exc:
self.fail("Error disconnecting container from network %s - %s" % (network['name'],
str(exc)))
return self._get_container(container.Id)
def container_create(self, image, create_parameters):
self.log("create container")
self.log("image: %s parameters:" % image)
self.log(create_parameters, pretty_print=True)
self.results['actions'].append(dict(created="Created container", create_parameters=create_parameters))
self.results['changed'] = True
new_container = None
if not self.check_mode:
try:
new_container = self.client.create_container(image, **create_parameters)
self.client.report_warnings(new_container)
except Exception as exc:
self.fail("Error creating container: %s" % str(exc))
return self._get_container(new_container['Id'])
return new_container
def container_start(self, container_id):
self.log("start container %s" % (container_id))
self.results['actions'].append(dict(started=container_id))
self.results['changed'] = True
if not self.check_mode:
try:
self.client.start(container=container_id)
except Exception as exc:
self.fail("Error starting container %s: %s" % (container_id, str(exc)))
if not self.parameters.detach:
if self.client.docker_py_version >= LooseVersion('3.0'):
status = self.client.wait(container_id)['StatusCode']
else:
status = self.client.wait(container_id)
if self.parameters.auto_remove:
output = "Cannot retrieve result as auto_remove is enabled"
if self.parameters.output_logs:
self.client.module.warn('Cannot output_logs if auto_remove is enabled!')
else:
config = self.client.inspect_container(container_id)
logging_driver = config['HostConfig']['LogConfig']['Type']
if logging_driver == 'json-file' or logging_driver == 'journald':
output = self.client.logs(container_id, stdout=True, stderr=True, stream=False, timestamps=False)
if self.parameters.output_logs:
self._output_logs(msg=output)
else:
output = "Result logged using `%s` driver" % logging_driver
if status != 0:
self.fail(output, status=status)
if self.parameters.cleanup:
self.container_remove(container_id, force=True)
insp = self._get_container(container_id)
if insp.raw:
insp.raw['Output'] = output
else:
insp.raw = dict(Output=output)
return insp
return self._get_container(container_id)
def container_remove(self, container_id, link=False, force=False):
volume_state = (not self.parameters.keep_volumes)
        self.log("remove container container:%s v:%s link:%s force:%s" % (container_id, volume_state, link, force))
self.results['actions'].append(dict(removed=container_id, volume_state=volume_state, link=link, force=force))
self.results['changed'] = True
response = None
if not self.check_mode:
count = 0
while True:
try:
response = self.client.remove_container(container_id, v=volume_state, link=link, force=force)
except NotFound as dummy:
pass
except APIError as exc:
if 'Unpause the container before stopping or killing' in exc.explanation:
# New docker daemon versions do not allow containers to be removed
# if they are paused. Make sure we don't end up in an infinite loop.
if count == 3:
self.fail("Error removing container %s (tried to unpause three times): %s" % (container_id, str(exc)))
count += 1
# Unpause
try:
self.client.unpause(container=container_id)
except Exception as exc2:
self.fail("Error unpausing container %s for removal: %s" % (container_id, str(exc2)))
# Now try again
continue
if 'removal of container ' in exc.explanation and ' is already in progress' in exc.explanation:
pass
else:
self.fail("Error removing container %s: %s" % (container_id, str(exc)))
except Exception as exc:
self.fail("Error removing container %s: %s" % (container_id, str(exc)))
# We only loop when explicitly requested by 'continue'
break
return response
def container_update(self, container_id, update_parameters):
if update_parameters:
self.log("update container %s" % (container_id))
self.log(update_parameters, pretty_print=True)
self.results['actions'].append(dict(updated=container_id, update_parameters=update_parameters))
self.results['changed'] = True
if not self.check_mode and callable(getattr(self.client, 'update_container')):
try:
result = self.client.update_container(container_id, **update_parameters)
self.client.report_warnings(result)
except Exception as exc:
self.fail("Error updating container %s: %s" % (container_id, str(exc)))
return self._get_container(container_id)
def container_kill(self, container_id):
self.results['actions'].append(dict(killed=container_id, signal=self.parameters.kill_signal))
self.results['changed'] = True
response = None
if not self.check_mode:
try:
if self.parameters.kill_signal:
response = self.client.kill(container_id, signal=self.parameters.kill_signal)
else:
response = self.client.kill(container_id)
except Exception as exc:
self.fail("Error killing container %s: %s" % (container_id, exc))
return response
def container_restart(self, container_id):
self.results['actions'].append(dict(restarted=container_id, timeout=self.parameters.stop_timeout))
self.results['changed'] = True
if not self.check_mode:
try:
if self.parameters.stop_timeout:
dummy = self.client.restart(container_id, timeout=self.parameters.stop_timeout)
else:
dummy = self.client.restart(container_id)
except Exception as exc:
self.fail("Error restarting container %s: %s" % (container_id, str(exc)))
return self._get_container(container_id)
def container_stop(self, container_id):
if self.parameters.force_kill:
self.container_kill(container_id)
return
self.results['actions'].append(dict(stopped=container_id, timeout=self.parameters.stop_timeout))
self.results['changed'] = True
response = None
if not self.check_mode:
count = 0
while True:
try:
if self.parameters.stop_timeout:
response = self.client.stop(container_id, timeout=self.parameters.stop_timeout)
else:
response = self.client.stop(container_id)
except APIError as exc:
if 'Unpause the container before stopping or killing' in exc.explanation:
                        # New docker daemon versions do not allow containers to be stopped
                        # if they are paused. Make sure we don't end up in an infinite loop.
                        if count == 3:
                            self.fail("Error stopping container %s (tried to unpause three times): %s" % (container_id, str(exc)))
count += 1
# Unpause
try:
self.client.unpause(container=container_id)
except Exception as exc2:
self.fail("Error unpausing container %s for removal: %s" % (container_id, str(exc2)))
# Now try again
continue
self.fail("Error stopping container %s: %s" % (container_id, str(exc)))
except Exception as exc:
self.fail("Error stopping container %s: %s" % (container_id, str(exc)))
# We only loop when explicitly requested by 'continue'
break
return response
def detect_ipvX_address_usage(client):
'''
Helper function to detect whether any specified network uses ipv4_address or ipv6_address
'''
for network in client.module.params.get("networks") or []:
if network.get('ipv4_address') is not None or network.get('ipv6_address') is not None:
return True
return False
class AnsibleDockerClientContainer(AnsibleDockerClient):
# A list of module options which are not docker container properties
__NON_CONTAINER_PROPERTY_OPTIONS = tuple([
'env_file', 'force_kill', 'keep_volumes', 'ignore_image', 'name', 'pull', 'purge_networks',
'recreate', 'restart', 'state', 'trust_image_content', 'networks', 'cleanup', 'kill_signal',
'output_logs', 'paused'
] + list(DOCKER_COMMON_ARGS.keys()))
def _parse_comparisons(self):
comparisons = {}
comp_aliases = {}
# Put in defaults
explicit_types = dict(
command='list',
devices='set(dict)',
dns_search_domains='list',
dns_servers='list',
env='set',
entrypoint='list',
etc_hosts='set',
networks='set(dict)',
ulimits='set(dict)',
device_read_bps='set(dict)',
device_write_bps='set(dict)',
device_read_iops='set(dict)',
device_write_iops='set(dict)',
)
all_options = set() # this is for improving user feedback when a wrong option was specified for comparison
default_values = dict(
stop_timeout='ignore',
)
for option, data in self.module.argument_spec.items():
all_options.add(option)
for alias in data.get('aliases', []):
all_options.add(alias)
# Ignore options which aren't used as container properties
if option in self.__NON_CONTAINER_PROPERTY_OPTIONS and option != 'networks':
continue
# Determine option type
if option in explicit_types:
type = explicit_types[option]
elif data['type'] == 'list':
type = 'set'
elif data['type'] == 'dict':
type = 'dict'
else:
type = 'value'
# Determine comparison type
if option in default_values:
comparison = default_values[option]
elif type in ('list', 'value'):
comparison = 'strict'
else:
comparison = 'allow_more_present'
comparisons[option] = dict(type=type, comparison=comparison, name=option)
# Keep track of aliases
comp_aliases[option] = option
for alias in data.get('aliases', []):
comp_aliases[alias] = option
# Process legacy ignore options
if self.module.params['ignore_image']:
comparisons['image']['comparison'] = 'ignore'
if self.module.params['purge_networks']:
comparisons['networks']['comparison'] = 'strict'
# Process options
if self.module.params.get('comparisons'):
# If '*' appears in comparisons, process it first
if '*' in self.module.params['comparisons']:
value = self.module.params['comparisons']['*']
if value not in ('strict', 'ignore'):
self.fail("The wildcard can only be used with comparison modes 'strict' and 'ignore'!")
for option, v in comparisons.items():
if option == 'networks':
# `networks` is special: only update if
# some value is actually specified
if self.module.params['networks'] is None:
continue
v['comparison'] = value
# Now process all other comparisons.
comp_aliases_used = {}
for key, value in self.module.params['comparisons'].items():
if key == '*':
continue
# Find main key
key_main = comp_aliases.get(key)
if key_main is None:
                    if key in all_options:
                        self.fail("The module option '%s' cannot be specified in the comparisons dict, "
                                  "since it does not correspond to the container's state!" % key)
self.fail("Unknown module option '%s' in comparisons dict!" % key)
if key_main in comp_aliases_used:
self.fail("Both '%s' and '%s' (aliases of %s) are specified in comparisons dict!" % (key, comp_aliases_used[key_main], key_main))
comp_aliases_used[key_main] = key
# Check value and update accordingly
if value in ('strict', 'ignore'):
comparisons[key_main]['comparison'] = value
elif value == 'allow_more_present':
if comparisons[key_main]['type'] == 'value':
self.fail("Option '%s' is a value and not a set/list/dict, so its comparison cannot be %s" % (key, value))
comparisons[key_main]['comparison'] = value
else:
self.fail("Unknown comparison mode '%s'!" % value)
# Add implicit options
comparisons['publish_all_ports'] = dict(type='value', comparison='strict', name='published_ports')
comparisons['expected_ports'] = dict(type='dict', comparison=comparisons['published_ports']['comparison'], name='expected_ports')
comparisons['disable_healthcheck'] = dict(type='value',
comparison='ignore' if comparisons['healthcheck']['comparison'] == 'ignore' else 'strict',
name='disable_healthcheck')
# Check legacy values
if self.module.params['ignore_image'] and comparisons['image']['comparison'] != 'ignore':
self.module.warn('The ignore_image option has been overridden by the comparisons option!')
if self.module.params['purge_networks'] and comparisons['networks']['comparison'] != 'strict':
self.module.warn('The purge_networks option has been overridden by the comparisons option!')
self.comparisons = comparisons
def _get_additional_minimal_versions(self):
stop_timeout_supported = self.docker_api_version >= LooseVersion('1.25')
stop_timeout_needed_for_update = self.module.params.get("stop_timeout") is not None and self.module.params.get('state') != 'absent'
if stop_timeout_supported:
stop_timeout_supported = self.docker_py_version >= LooseVersion('2.1')
if stop_timeout_needed_for_update and not stop_timeout_supported:
# We warn (instead of fail) since in older versions, stop_timeout was not used
# to update the container's configuration, but only when stopping a container.
self.module.warn("Docker SDK for Python's version is %s. Minimum version required is 2.1 to update "
"the container's stop_timeout configuration. "
"If you use the 'docker-py' module, you have to switch to the 'docker' Python package." % (docker_version,))
else:
if stop_timeout_needed_for_update and not stop_timeout_supported:
# We warn (instead of fail) since in older versions, stop_timeout was not used
# to update the container's configuration, but only when stopping a container.
self.module.warn("Docker API version is %s. Minimum version required is 1.25 to set or "
"update the container's stop_timeout configuration." % (self.docker_api_version_str,))
self.option_minimal_versions['stop_timeout']['supported'] = stop_timeout_supported
def __init__(self, **kwargs):
option_minimal_versions = dict(
# internal options
log_config=dict(),
publish_all_ports=dict(),
ports=dict(),
volume_binds=dict(),
name=dict(),
# normal options
device_read_bps=dict(docker_py_version='1.9.0', docker_api_version='1.22'),
device_read_iops=dict(docker_py_version='1.9.0', docker_api_version='1.22'),
device_write_bps=dict(docker_py_version='1.9.0', docker_api_version='1.22'),
device_write_iops=dict(docker_py_version='1.9.0', docker_api_version='1.22'),
dns_opts=dict(docker_api_version='1.21', docker_py_version='1.10.0'),
ipc_mode=dict(docker_api_version='1.25'),
mac_address=dict(docker_api_version='1.25'),
oom_score_adj=dict(docker_api_version='1.22'),
shm_size=dict(docker_api_version='1.22'),
stop_signal=dict(docker_api_version='1.21'),
tmpfs=dict(docker_api_version='1.22'),
volume_driver=dict(docker_api_version='1.21'),
memory_reservation=dict(docker_api_version='1.21'),
kernel_memory=dict(docker_api_version='1.21'),
auto_remove=dict(docker_py_version='2.1.0', docker_api_version='1.25'),
healthcheck=dict(docker_py_version='2.0.0', docker_api_version='1.24'),
init=dict(docker_py_version='2.2.0', docker_api_version='1.25'),
runtime=dict(docker_py_version='2.4.0', docker_api_version='1.25'),
sysctls=dict(docker_py_version='1.10.0', docker_api_version='1.24'),
userns_mode=dict(docker_py_version='1.10.0', docker_api_version='1.23'),
uts=dict(docker_py_version='3.5.0', docker_api_version='1.25'),
pids_limit=dict(docker_py_version='1.10.0', docker_api_version='1.23'),
# specials
ipvX_address_supported=dict(docker_py_version='1.9.0', detect_usage=detect_ipvX_address_usage,
usage_msg='ipv4_address or ipv6_address in networks'),
stop_timeout=dict(), # see _get_additional_minimal_versions()
)
super(AnsibleDockerClientContainer, self).__init__(
option_minimal_versions=option_minimal_versions,
option_minimal_versions_ignore_params=self.__NON_CONTAINER_PROPERTY_OPTIONS,
**kwargs
)
self.image_inspect_source = 'Config'
if self.docker_api_version < LooseVersion('1.21'):
self.image_inspect_source = 'ContainerConfig'
self._get_additional_minimal_versions()
self._parse_comparisons()
def main():
argument_spec = dict(
auto_remove=dict(type='bool', default=False),
blkio_weight=dict(type='int'),
capabilities=dict(type='list', elements='str'),
cap_drop=dict(type='list', elements='str'),
cleanup=dict(type='bool', default=False),
command=dict(type='raw'),
comparisons=dict(type='dict'),
cpu_period=dict(type='int'),
cpu_quota=dict(type='int'),
cpuset_cpus=dict(type='str'),
cpuset_mems=dict(type='str'),
cpu_shares=dict(type='int'),
detach=dict(type='bool', default=True),
devices=dict(type='list', elements='str'),
device_read_bps=dict(type='list', elements='dict', options=dict(
path=dict(required=True, type='str'),
rate=dict(required=True, type='str'),
)),
device_write_bps=dict(type='list', elements='dict', options=dict(
path=dict(required=True, type='str'),
rate=dict(required=True, type='str'),
)),
device_read_iops=dict(type='list', elements='dict', options=dict(
path=dict(required=True, type='str'),
rate=dict(required=True, type='int'),
)),
device_write_iops=dict(type='list', elements='dict', options=dict(
path=dict(required=True, type='str'),
rate=dict(required=True, type='int'),
)),
dns_servers=dict(type='list', elements='str'),
dns_opts=dict(type='list', elements='str'),
dns_search_domains=dict(type='list', elements='str'),
domainname=dict(type='str'),
entrypoint=dict(type='list', elements='str'),
env=dict(type='dict'),
env_file=dict(type='path'),
etc_hosts=dict(type='dict'),
exposed_ports=dict(type='list', elements='str', aliases=['exposed', 'expose']),
force_kill=dict(type='bool', default=False, aliases=['forcekill']),
groups=dict(type='list', elements='str'),
healthcheck=dict(type='dict', options=dict(
test=dict(type='raw'),
interval=dict(type='str'),
timeout=dict(type='str'),
start_period=dict(type='str'),
retries=dict(type='int'),
)),
hostname=dict(type='str'),
ignore_image=dict(type='bool', default=False),
image=dict(type='str'),
init=dict(type='bool', default=False),
interactive=dict(type='bool', default=False),
ipc_mode=dict(type='str'),
keep_volumes=dict(type='bool', default=True),
kernel_memory=dict(type='str'),
kill_signal=dict(type='str'),
labels=dict(type='dict'),
links=dict(type='list', elements='str'),
log_driver=dict(type='str'),
log_options=dict(type='dict', aliases=['log_opt']),
mac_address=dict(type='str'),
memory=dict(type='str', default='0'),
memory_reservation=dict(type='str'),
memory_swap=dict(type='str'),
memory_swappiness=dict(type='int'),
name=dict(type='str', required=True),
network_mode=dict(type='str'),
networks=dict(type='list', elements='dict', options=dict(
name=dict(type='str', required=True),
ipv4_address=dict(type='str'),
ipv6_address=dict(type='str'),
aliases=dict(type='list', elements='str'),
links=dict(type='list', elements='str'),
)),
networks_cli_compatible=dict(type='bool'),
oom_killer=dict(type='bool'),
oom_score_adj=dict(type='int'),
output_logs=dict(type='bool', default=False),
paused=dict(type='bool', default=False),
pid_mode=dict(type='str'),
pids_limit=dict(type='int'),
privileged=dict(type='bool', default=False),
published_ports=dict(type='list', elements='str', aliases=['ports']),
pull=dict(type='bool', default=False),
purge_networks=dict(type='bool', default=False),
read_only=dict(type='bool', default=False),
recreate=dict(type='bool', default=False),
restart=dict(type='bool', default=False),
restart_policy=dict(type='str', choices=['no', 'on-failure', 'always', 'unless-stopped']),
restart_retries=dict(type='int'),
runtime=dict(type='str'),
security_opts=dict(type='list', elements='str'),
shm_size=dict(type='str'),
state=dict(type='str', default='started', choices=['absent', 'present', 'started', 'stopped']),
stop_signal=dict(type='str'),
stop_timeout=dict(type='int'),
sysctls=dict(type='dict'),
tmpfs=dict(type='list', elements='str'),
trust_image_content=dict(type='bool', default=False),
tty=dict(type='bool', default=False),
ulimits=dict(type='list', elements='str'),
user=dict(type='str'),
userns_mode=dict(type='str'),
uts=dict(type='str'),
volume_driver=dict(type='str'),
volumes=dict(type='list', elements='str'),
volumes_from=dict(type='list', elements='str'),
working_dir=dict(type='str'),
)
required_if = [
('state', 'present', ['image'])
]
client = AnsibleDockerClientContainer(
argument_spec=argument_spec,
required_if=required_if,
supports_check_mode=True,
min_docker_api_version='1.20',
)
if client.module.params['networks_cli_compatible'] is None and client.module.params['networks']:
client.module.deprecate(
'Please note that docker_container handles networks slightly different than docker CLI. '
'If you specify networks, the default network will still be attached as the first network. '
'(You can specify purge_networks to remove all networks not explicitly listed.) '
'This behavior will change in Ansible 2.12. You can change the behavior now by setting '
'the new `networks_cli_compatible` option to `yes`, and remove this warning by setting '
'it to `no`',
version='2.12'
)
try:
cm = ContainerManager(client)
client.module.exit_json(**sanitize_result(cm.results))
except DockerException as e:
client.fail('An unexpected docker error occurred: {0}'.format(e), exception=traceback.format_exc())
except RequestException as e:
client.fail('An unexpected requests error occurred when docker-py tried to talk to the docker daemon: {0}'.format(e), exception=traceback.format_exc())
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,691 |
docker_container: address already in use with ipv4+ipv6 ip-bound port mappings
|
##### SUMMARY
A Docker container cannot be started with IP-bound port mappings when both IPv4 and IPv6 addresses are used:
With #46596, I expected

```yaml
docker_container:
  name: "test123"
  image: "ubuntu:latest"
  tty: yes
  interactive: yes
  networks:
    - name: bridge
  purge_networks: yes
  pull: yes
  ports:
    - "127.0.0.1:53:53/tcp"
    - "127.0.0.1:53:53/udp"
    - "[::1]:53:53/tcp"
    - "[::1]:53:53/udp"
```

to behave like

```
docker run --rm -ti -p "127.0.0.1:53:53/tcp" -p "127.0.0.1:53:53/udp" -p "[::1]:53:53/tcp" -p "[::1]:53:53/udp" ubuntu:latest

CONTAINER ID   IMAGE           COMMAND       CREATED         STATUS        PORTS                                                                        NAMES
3e7204cabfeb   ubuntu:latest   "/bin/bash"   2 seconds ago   Up 1 second   127.0.0.1:53->53/tcp, 127.0.0.1:53->53/udp, ::1:53->53/tcp, ::1:53->53/udp   awesome_banach
```
However, the deployment fails with

```
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Error starting container a26b2b9b2497195af25770878fd2c084e984e3d2c70bef602261cf76aad1a0bb: 500 Server Error: Internal Server Error (\"driver failed programming external connectivity on endpoint test123 (c41303e4e84c59d838166594918137135dfbd22e0bce311e3302f88135874d87): Error starting userland proxy: listen tcp 0.0.0.0:53: bind: address already in use\")"}
```

Note that it says `0.0.0.0:53`, which may be related to #40258. Removing the two IPv6 mappings works as expected.
From what I can see, the mapping is correctly parsed in
`ansible.modules.cloud.docker.docker_container.TaskParameters._parse_publish_ports`; I suspect the error is in how the result is passed on to the docker-py module.
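The suspected failure mode can be illustrated with a plain-Python sketch (the function name and structure below are my own for illustration, not the module's actual code): when the same container port is published on several host IPs, the bindings must be grouped into a *list* per container port, since Docker's `HostConfig.PortBindings` maps each container port to a list of host bindings. If a later binding simply overwrites an earlier one in a flat dict, only one binding (or a default `0.0.0.0` bind) survives, which would produce exactly the "address already in use" error above.

```python
from collections import defaultdict

def group_port_bindings(published_ports):
    """Group '<ip>:<host_port>:<container_port>/<proto>' strings so that each
    container port maps to a list of (host_ip, host_port) bindings -- the
    shape that docker-py's port_bindings ultimately needs."""
    bindings = defaultdict(list)
    for spec in published_ports:
        # Split off an optional bracketed IPv6 address, e.g. "[::1]:53:53/tcp"
        if spec.startswith('['):
            host_ip, rest = spec[1:].split(']:', 1)
        else:
            host_ip, rest = spec.split(':', 1)
        host_port, container_port = rest.split(':', 1)
        bindings[container_port].append((host_ip, int(host_port)))
    return dict(bindings)

ports = [
    "127.0.0.1:53:53/tcp",
    "127.0.0.1:53:53/udp",
    "[::1]:53:53/tcp",
    "[::1]:53:53/udp",
]
print(group_port_bindings(ports))
```

With this grouping, both the IPv4 and the IPv6 binding for `53/tcp` end up in the same list instead of one clobbering the other.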
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
docker_container
##### ANSIBLE VERSION
```
ansible 2.8.3
config file = /home/johann/.ansible.cfg
configured module search path = ['/home/johann/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.3 (default, Apr 3 2019, 05:39:12) [GCC 8.3.0]
```
##### CONFIGURATION
```
ANSIBLE_SSH_ARGS(/home/johann/.ansible.cfg) = -o ControlMaster=auto -o ControlPersist=60s -o ForwardAgent=yes -o ControlPath="/home/johann/.ansible/ansible-ssh-%h-%p-%r"
DEFAULT_HOST_LIST(/home/johann/.ansible.cfg) = ['/home/johann/dev/repos/ercpe-ansible/hosts']
DEFAULT_TRANSPORT(/home/johann/.ansible.cfg) = ssh
DEFAULT_VAULT_PASSWORD_FILE(/home/johann/.ansible.cfg) = /home/johann/bin/ansible-vault-pass
```
##### OS / ENVIRONMENT
Tested on Ubuntu 19.04 (disco)
##### STEPS TO REPRODUCE
See Summary
##### EXPECTED RESULTS
See Summary
##### ACTUAL RESULTS
See Summary
|
https://github.com/ansible/ansible/issues/59691
|
https://github.com/ansible/ansible/pull/59715
|
f94772f807fed7fd6329faae25aa600dd9f030cf
|
a7573102bcbc7e88b8bf6de639be2696f1a5ad43
| 2019-07-28T14:29:47Z |
python
| 2019-08-02T15:10:39Z |
test/integration/targets/docker_container/tasks/tests/options.yml
|
---
- name: Registering container name
set_fact:
cname: "{{ cname_prefix ~ '-options' }}"
cname_h1: "{{ cname_prefix ~ '-options-h1' }}"
cname_h2: "{{ cname_prefix ~ '-options-h2' }}"
cname_h3: "{{ cname_prefix ~ '-options-h3' }}"
- name: Registering container name
set_fact:
cnames: "{{ cnames + [cname, cname_h1, cname_h2, cname_h3] }}"
####################################################################
## auto_remove #####################################################
####################################################################
- name: auto_remove
docker_container:
image: alpine:3.8
command: '/bin/sh -c "echo"'
name: "{{ cname }}"
state: started
auto_remove: yes
register: auto_remove_1
ignore_errors: yes
- name: Give container 1 second to be sure it terminated
pause:
seconds: 1
- name: auto_remove (verify)
docker_container:
name: "{{ cname }}"
state: absent
register: auto_remove_2
ignore_errors: yes
- assert:
that:
- auto_remove_1 is changed
- auto_remove_2 is not changed
when: docker_py_version is version('2.1.0', '>=')
- assert:
that:
- auto_remove_1 is failed
- "('version is ' ~ docker_py_version ~ ' ') in auto_remove_1.msg"
- "'Minimum version required is 2.1.0 ' in auto_remove_1.msg"
when: docker_py_version is version('2.1.0', '<')
####################################################################
## blkio_weight ####################################################
####################################################################
- name: blkio_weight
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
blkio_weight: 123
register: blkio_weight_1
- name: blkio_weight (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
blkio_weight: 123
register: blkio_weight_2
- name: blkio_weight (change)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
blkio_weight: 234
force_kill: yes
register: blkio_weight_3
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- blkio_weight_1 is changed
- "blkio_weight_2 is not changed or 'Docker warning: Your kernel does not support Block I/O weight or the cgroup is not mounted. Weight discarded.' in blkio_weight_2.warnings"
- blkio_weight_3 is changed
####################################################################
## cap_drop, capabilities ##########################################
####################################################################
- name: capabilities, cap_drop
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
capabilities:
- sys_time
cap_drop:
- all
register: capabilities_1
- name: capabilities, cap_drop (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
capabilities:
- sys_time
cap_drop:
- all
register: capabilities_2
- name: capabilities, cap_drop (less)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
capabilities: []
cap_drop:
- all
register: capabilities_3
- name: capabilities, cap_drop (changed)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
capabilities:
- setgid
cap_drop:
- all
force_kill: yes
register: capabilities_4
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- capabilities_1 is changed
- capabilities_2 is not changed
- capabilities_3 is not changed
- capabilities_4 is changed
####################################################################
## command #########################################################
####################################################################
- name: command
docker_container:
image: alpine:3.8
command: '/bin/sh -v -c "sleep 10m"'
name: "{{ cname }}"
state: started
register: command_1
- name: command (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -v -c "sleep 10m"'
name: "{{ cname }}"
state: started
register: command_2
- name: command (less parameters)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
force_kill: yes
register: command_3
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- command_1 is changed
- command_2 is not changed
- command_3 is changed
####################################################################
## cpu_period ######################################################
####################################################################
- name: cpu_period
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
cpu_period: 90000
state: started
register: cpu_period_1
- name: cpu_period (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
cpu_period: 90000
state: started
register: cpu_period_2
- name: cpu_period (change)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
cpu_period: 50000
state: started
force_kill: yes
register: cpu_period_3
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- cpu_period_1 is changed
- cpu_period_2 is not changed
- cpu_period_3 is changed
####################################################################
## cpu_quota #######################################################
####################################################################
- name: cpu_quota
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
cpu_quota: 150000
state: started
register: cpu_quota_1
- name: cpu_quota (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
cpu_quota: 150000
state: started
register: cpu_quota_2
- name: cpu_quota (change)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
cpu_quota: 50000
state: started
force_kill: yes
register: cpu_quota_3
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- cpu_quota_1 is changed
- cpu_quota_2 is not changed
- cpu_quota_3 is changed
####################################################################
## cpu_shares ######################################################
####################################################################
- name: cpu_shares
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
cpu_shares: 900
state: started
register: cpu_shares_1
- name: cpu_shares (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
cpu_shares: 900
state: started
register: cpu_shares_2
- name: cpu_shares (change)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
cpu_shares: 1100
state: started
force_kill: yes
register: cpu_shares_3
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- cpu_shares_1 is changed
- cpu_shares_2 is not changed
- cpu_shares_3 is changed
####################################################################
## cpuset_cpus #####################################################
####################################################################
- name: cpuset_cpus
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
cpuset_cpus: "0"
state: started
register: cpuset_cpus_1
- name: cpuset_cpus (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
cpuset_cpus: "0"
state: started
register: cpuset_cpus_2
- name: cpuset_cpus (change)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
cpuset_cpus: "1"
state: started
force_kill: yes
# This will fail if the system the test is run on doesn't have
# multiple CPUs/cores available.
ignore_errors: yes
register: cpuset_cpus_3
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- cpuset_cpus_1 is changed
- cpuset_cpus_2 is not changed
- cpuset_cpus_3 is failed or cpuset_cpus_3 is changed
####################################################################
## cpuset_mems #####################################################
####################################################################
- name: cpuset_mems
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
cpuset_mems: "0"
state: started
register: cpuset_mems_1
- name: cpuset_mems (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
cpuset_mems: "0"
state: started
register: cpuset_mems_2
- name: cpuset_mems (change)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
cpuset_mems: "1"
state: started
force_kill: yes
# This will fail if the system the test is run on doesn't have
# multiple MEMs available.
ignore_errors: yes
register: cpuset_mems_3
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- cpuset_mems_1 is changed
- cpuset_mems_2 is not changed
- cpuset_mems_3 is failed or cpuset_mems_3 is changed
####################################################################
## debug ###########################################################
####################################################################
- name: debug (create)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: present
debug: yes
register: debug_1
- name: debug (start)
docker_container:
name: "{{ cname }}"
state: started
debug: yes
register: debug_2
- name: debug (stop)
docker_container:
image: alpine:3.8
name: "{{ cname }}"
state: stopped
force_kill: yes
debug: yes
register: debug_3
- name: debug (absent)
docker_container:
name: "{{ cname }}"
state: absent
debug: yes
force_kill: yes
register: debug_4
- assert:
that:
- debug_1 is changed
- debug_2 is changed
- debug_3 is changed
- debug_4 is changed
####################################################################
## detach, cleanup #################################################
####################################################################
- name: detach without cleanup
docker_container:
name: "{{ cname }}"
image: hello-world
detach: no
register: detach_no_cleanup
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
register: detach_no_cleanup_cleanup
diff: no
- name: detach with cleanup
docker_container:
name: "{{ cname }}"
image: hello-world
detach: no
cleanup: yes
register: detach_cleanup
- name: cleanup (unnecessary)
docker_container:
name: "{{ cname }}"
state: absent
register: detach_cleanup_cleanup
diff: no
- name: detach with auto_remove and cleanup
docker_container:
name: "{{ cname }}"
image: hello-world
detach: no
auto_remove: yes
cleanup: yes
register: detach_auto_remove
ignore_errors: yes
- name: cleanup (unnecessary)
docker_container:
name: "{{ cname }}"
state: absent
register: detach_auto_remove_cleanup
diff: no
- assert:
that:
# NOTE that 'Output' sometimes fails to contain the correct output
# of hello-world. We don't know why this happens, but it happens
# often enough to be annoying. That's why we disable this for now,
# and simply test that 'Output' is contained in the result.
- "'Output' in detach_no_cleanup.container"
# - "'Hello from Docker!' in detach_no_cleanup.container.Output"
- detach_no_cleanup_cleanup is changed
- "'Output' in detach_cleanup.container"
# - "'Hello from Docker!' in detach_cleanup.container.Output"
- detach_cleanup_cleanup is not changed
- assert:
that:
- "'Cannot retrieve result as auto_remove is enabled' == detach_auto_remove.container.Output"
- detach_auto_remove_cleanup is not changed
when: docker_py_version is version('2.1.0', '>=')
####################################################################
## devices #########################################################
####################################################################
- name: devices
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
devices:
- "/dev/random:/dev/virt-random:rwm"
- "/dev/urandom:/dev/virt-urandom:rwm"
register: devices_1
- name: devices (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
devices:
- "/dev/urandom:/dev/virt-urandom:rwm"
- "/dev/random:/dev/virt-random:rwm"
register: devices_2
- name: devices (less)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
devices:
- "/dev/random:/dev/virt-random:rwm"
register: devices_3
- name: devices (changed)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
devices:
- "/dev/random:/dev/virt-random:rwm"
- "/dev/null:/dev/virt-null:rwm"
force_kill: yes
register: devices_4
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- devices_1 is changed
- devices_2 is not changed
- devices_3 is not changed
- devices_4 is changed
####################################################################
## device_read_bps #################################################
####################################################################
- name: device_read_bps
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
device_read_bps:
- path: /dev/random
rate: 20M
- path: /dev/urandom
rate: 10K
register: device_read_bps_1
ignore_errors: yes
- name: device_read_bps (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
device_read_bps:
- path: /dev/urandom
rate: 10K
- path: /dev/random
rate: 20M
register: device_read_bps_2
ignore_errors: yes
- name: device_read_bps (lesser entries)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
device_read_bps:
- path: /dev/random
rate: 20M
register: device_read_bps_3
ignore_errors: yes
- name: device_read_bps (changed)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
device_read_bps:
- path: /dev/random
rate: 10M
- path: /dev/urandom
rate: 5K
force_kill: yes
register: device_read_bps_4
ignore_errors: yes
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- device_read_bps_1 is changed
- device_read_bps_2 is not changed
- device_read_bps_3 is not changed
- device_read_bps_4 is changed
when: docker_py_version is version('1.9.0', '>=')
- assert:
that:
- device_read_bps_1 is failed
- "('version is ' ~ docker_py_version ~ ' ') in device_read_bps_1.msg"
- "'Minimum version required is 1.9.0 ' in device_read_bps_1.msg"
when: docker_py_version is version('1.9.0', '<')
####################################################################
## device_read_iops ################################################
####################################################################
- name: device_read_iops
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
device_read_iops:
- path: /dev/random
rate: 10
- path: /dev/urandom
rate: 20
register: device_read_iops_1
ignore_errors: yes
- name: device_read_iops (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
device_read_iops:
- path: /dev/urandom
rate: "20"
- path: /dev/random
rate: 10
register: device_read_iops_2
ignore_errors: yes
- name: device_read_iops (less)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
device_read_iops:
- path: /dev/random
rate: 10
register: device_read_iops_3
ignore_errors: yes
- name: device_read_iops (changed)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
device_read_iops:
- path: /dev/random
rate: 30
- path: /dev/urandom
rate: 50
force_kill: yes
register: device_read_iops_4
ignore_errors: yes
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- device_read_iops_1 is changed
- device_read_iops_2 is not changed
- device_read_iops_3 is not changed
- device_read_iops_4 is changed
when: docker_py_version is version('1.9.0', '>=')
- assert:
that:
- device_read_iops_1 is failed
- "('version is ' ~ docker_py_version ~ ' ') in device_read_iops_1.msg"
- "'Minimum version required is 1.9.0 ' in device_read_iops_1.msg"
when: docker_py_version is version('1.9.0', '<')
####################################################################
## device_write_bps and device_write_iops ##########################
####################################################################
- name: device_write_bps and device_write_iops
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
device_write_bps:
- path: /dev/random
rate: 10M
device_write_iops:
- path: /dev/urandom
rate: 30
register: device_write_limit_1
ignore_errors: yes
- name: device_write_bps and device_write_iops (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
device_write_bps:
- path: /dev/random
rate: 10M
device_write_iops:
- path: /dev/urandom
rate: 30
register: device_write_limit_2
ignore_errors: yes
- name: device_write_bps and device_write_iops (changed)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
device_write_bps:
- path: /dev/random
rate: 20K
device_write_iops:
- path: /dev/urandom
rate: 100
force_kill: yes
register: device_write_limit_3
ignore_errors: yes
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- device_write_limit_1 is changed
- device_write_limit_2 is not changed
- device_write_limit_3 is changed
when: docker_py_version is version('1.9.0', '>=')
- assert:
that:
- device_write_limit_1 is failed
- "('version is ' ~ docker_py_version ~ ' ') in device_write_limit_1.msg"
- "'Minimum version required is 1.9.0 ' in device_write_limit_1.msg"
when: docker_py_version is version('1.9.0', '<')
####################################################################
## dns_opts ########################################################
####################################################################
- name: dns_opts
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
dns_opts:
- "timeout:10"
- rotate
register: dns_opts_1
ignore_errors: yes
- name: dns_opts (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
dns_opts:
- rotate
- "timeout:10"
register: dns_opts_2
ignore_errors: yes
- name: dns_opts (less resolv.conf options)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
dns_opts:
- "timeout:10"
register: dns_opts_3
ignore_errors: yes
- name: dns_opts (more resolv.conf options)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
dns_opts:
- "timeout:10"
- no-check-names
force_kill: yes
register: dns_opts_4
ignore_errors: yes
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- dns_opts_1 is changed
- dns_opts_2 is not changed
- dns_opts_3 is not changed
- dns_opts_4 is changed
when: docker_py_version is version('1.10.0', '>=')
- assert:
that:
- dns_opts_1 is failed
- "('version is ' ~ docker_py_version ~ ' ') in dns_opts_1.msg"
- "'Minimum version required is 1.10.0 ' in dns_opts_1.msg"
when: docker_py_version is version('1.10.0', '<')
####################################################################
## dns_search_domains ##############################################
####################################################################
- name: dns_search_domains
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
dns_search_domains:
- example.com
- example.org
register: dns_search_domains_1
- name: dns_search_domains (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
dns_search_domains:
- example.com
- example.org
register: dns_search_domains_2
- name: dns_search_domains (different order)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
dns_search_domains:
- example.org
- example.com
force_kill: yes
register: dns_search_domains_3
- name: dns_search_domains (changed elements)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
dns_search_domains:
- ansible.com
- example.com
force_kill: yes
register: dns_search_domains_4
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- dns_search_domains_1 is changed
- dns_search_domains_2 is not changed
- dns_search_domains_3 is changed
- dns_search_domains_4 is changed
####################################################################
## dns_servers #####################################################
####################################################################
- name: dns_servers
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
dns_servers:
- 1.1.1.1
- 8.8.8.8
register: dns_servers_1
- name: dns_servers (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
dns_servers:
- 1.1.1.1
- 8.8.8.8
register: dns_servers_2
- name: dns_servers (changed order)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
dns_servers:
- 8.8.8.8
- 1.1.1.1
force_kill: yes
register: dns_servers_3
- name: dns_servers (changed elements)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
dns_servers:
- 8.8.8.8
- 9.9.9.9
force_kill: yes
register: dns_servers_4
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- dns_servers_1 is changed
- dns_servers_2 is not changed
- dns_servers_3 is changed
- dns_servers_4 is changed
####################################################################
## domainname ######################################################
####################################################################
- name: domainname
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
domainname: example.com
state: started
register: domainname_1
- name: domainname (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
domainname: example.com
state: started
register: domainname_2
- name: domainname (change)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
domainname: example.org
state: started
force_kill: yes
register: domainname_3
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- domainname_1 is changed
- domainname_2 is not changed
- domainname_3 is changed
####################################################################
## entrypoint ######################################################
####################################################################
- name: entrypoint
docker_container:
image: alpine:3.8
entrypoint:
- /bin/sh
- "-v"
- "-c"
- "'sleep 10m'"
name: "{{ cname }}"
state: started
register: entrypoint_1
- name: entrypoint (idempotency)
docker_container:
image: alpine:3.8
entrypoint:
- /bin/sh
- "-v"
- "-c"
- "'sleep 10m'"
name: "{{ cname }}"
state: started
register: entrypoint_2
- name: entrypoint (change order, should not be idempotent)
docker_container:
image: alpine:3.8
entrypoint:
- /bin/sh
- "-c"
- "'sleep 10m'"
- "-v"
name: "{{ cname }}"
state: started
force_kill: yes
register: entrypoint_3
- name: entrypoint (less parameters)
docker_container:
image: alpine:3.8
entrypoint:
- /bin/sh
- "-c"
- "'sleep 10m'"
name: "{{ cname }}"
state: started
force_kill: yes
register: entrypoint_4
- name: entrypoint (other parameters)
docker_container:
image: alpine:3.8
entrypoint:
- /bin/sh
- "-c"
- "'sleep 5m'"
name: "{{ cname }}"
state: started
force_kill: yes
register: entrypoint_5
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- entrypoint_1 is changed
- entrypoint_2 is not changed
- entrypoint_3 is changed
- entrypoint_4 is changed
- entrypoint_5 is changed
####################################################################
## env #############################################################
####################################################################
- name: env
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
env:
TEST1: val1
TEST2: val2
TEST3: "False"
TEST4: "true"
TEST5: "yes"
register: env_1
- name: env (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
env:
TEST2: val2
TEST1: val1
TEST5: "yes"
TEST3: "False"
TEST4: "true"
register: env_2
- name: env (less environment variables)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
env:
TEST1: val1
register: env_3
- name: env (more environment variables)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
env:
TEST1: val1
TEST3: val3
force_kill: yes
register: env_4
- name: env (fail unwrapped values)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
env:
TEST1: true
force_kill: yes
register: env_5
ignore_errors: yes
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- env_1 is changed
- env_2 is not changed
- env_3 is not changed
- env_4 is changed
- env_5 is failed
- "('Non-string value found for env option.') in env_5.msg"
####################################################################
## env_file #########################################################
####################################################################
- name: env_file
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
env_file: "{{ role_path }}/files/env-file"
register: env_file_1
- name: env_file (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
env_file: "{{ role_path }}/files/env-file"
register: env_file_2
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- env_file_1 is changed
- env_file_2 is not changed
####################################################################
## etc_hosts #######################################################
####################################################################
- name: etc_hosts
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
etc_hosts:
example.com: 1.2.3.4
example.org: 4.3.2.1
register: etc_hosts_1
- name: etc_hosts (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
etc_hosts:
example.org: 4.3.2.1
example.com: 1.2.3.4
register: etc_hosts_2
- name: etc_hosts (less hosts)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
etc_hosts:
example.com: 1.2.3.4
register: etc_hosts_3
- name: etc_hosts (more hosts)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
etc_hosts:
example.com: 1.2.3.4
example.us: 1.2.3.5
force_kill: yes
register: etc_hosts_4
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- etc_hosts_1 is changed
- etc_hosts_2 is not changed
- etc_hosts_3 is not changed
- etc_hosts_4 is changed
####################################################################
## exposed_ports ###################################################
####################################################################
- name: exposed_ports
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
exposed_ports:
- "9001"
- "9002"
register: exposed_ports_1
- name: exposed_ports (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
exposed_ports:
- "9002"
- "9001"
register: exposed_ports_2
- name: exposed_ports (less ports)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
exposed_ports:
- "9002"
register: exposed_ports_3
- name: exposed_ports (more ports)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
exposed_ports:
- "9002"
- "9003"
force_kill: yes
register: exposed_ports_4
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- exposed_ports_1 is changed
- exposed_ports_2 is not changed
- exposed_ports_3 is not changed
- exposed_ports_4 is changed
####################################################################
## force_kill ######################################################
####################################################################
# TODO: - force_kill
####################################################################
## groups ##########################################################
####################################################################
- name: groups
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
groups:
- "1234"
- "5678"
register: groups_1
- name: groups (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
groups:
- "5678"
- "1234"
register: groups_2
- name: groups (less groups)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
groups:
- "1234"
register: groups_3
- name: groups (more groups)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
groups:
- "1234"
- "2345"
force_kill: yes
register: groups_4
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- groups_1 is changed
- groups_2 is not changed
- groups_3 is not changed
- groups_4 is changed
####################################################################
## healthcheck #####################################################
####################################################################
- name: healthcheck
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
healthcheck:
test:
- CMD
- sleep
- 1
timeout: 2s
interval: 0h0m2s3ms4us
retries: 2
force_kill: yes
register: healthcheck_1
ignore_errors: yes
- name: healthcheck (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
healthcheck:
test:
- CMD
- sleep
- 1
timeout: 2s
interval: 0h0m2s3ms4us
retries: 2
force_kill: yes
register: healthcheck_2
ignore_errors: yes
- name: healthcheck (changed)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
healthcheck:
test:
- CMD
- sleep
- 1
timeout: 3s
interval: 0h1m2s3ms4us
retries: 3
force_kill: yes
register: healthcheck_3
ignore_errors: yes
- name: healthcheck (no change)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
force_kill: yes
register: healthcheck_4
ignore_errors: yes
- name: healthcheck (disabled)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
healthcheck:
test:
- NONE
force_kill: yes
register: healthcheck_5
ignore_errors: yes
- name: healthcheck (disabled, idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
healthcheck:
test:
- NONE
force_kill: yes
register: healthcheck_6
ignore_errors: yes
- name: healthcheck (string in healthcheck test, changed)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
healthcheck:
test: "sleep 1"
force_kill: yes
register: healthcheck_7
ignore_errors: yes
- name: healthcheck (string in healthcheck test, idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
healthcheck:
test: "sleep 1"
force_kill: yes
register: healthcheck_8
ignore_errors: yes
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- healthcheck_1 is changed
- healthcheck_2 is not changed
- healthcheck_3 is changed
- healthcheck_4 is not changed
- healthcheck_5 is changed
- healthcheck_6 is not changed
- healthcheck_7 is changed
- healthcheck_8 is not changed
when: docker_py_version is version('2.0.0', '>=')
- assert:
that:
- healthcheck_1 is failed
- "('version is ' ~ docker_py_version ~ ' ') in healthcheck_1.msg"
- "'Minimum version required is 2.0.0 ' in healthcheck_1.msg"
when: docker_py_version is version('2.0.0', '<')
####################################################################
## hostname ########################################################
####################################################################
- name: hostname
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
hostname: me.example.com
state: started
register: hostname_1
- name: hostname (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
hostname: me.example.com
state: started
register: hostname_2
- name: hostname (change)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
hostname: me.example.org
state: started
force_kill: yes
register: hostname_3
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- hostname_1 is changed
- hostname_2 is not changed
- hostname_3 is changed
####################################################################
## init ############################################################
####################################################################
- name: init
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
init: yes
state: started
register: init_1
ignore_errors: yes
- name: init (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
init: yes
state: started
register: init_2
ignore_errors: yes
- name: init (change)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
init: no
state: started
force_kill: yes
register: init_3
ignore_errors: yes
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- init_1 is changed
- init_2 is not changed
- init_3 is changed
when: docker_py_version is version('2.2.0', '>=')
- assert:
that:
- init_1 is failed
- "('version is ' ~ docker_py_version ~ ' ') in init_1.msg"
- "'Minimum version required is 2.2.0 ' in init_1.msg"
when: docker_py_version is version('2.2.0', '<')
####################################################################
## interactive #####################################################
####################################################################
- name: interactive
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
interactive: yes
state: started
register: interactive_1
- name: interactive (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
interactive: yes
state: started
register: interactive_2
- name: interactive (change)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
interactive: no
state: started
force_kill: yes
register: interactive_3
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- interactive_1 is changed
- interactive_2 is not changed
- interactive_3 is changed
####################################################################
## image / ignore_image ############################################
####################################################################
- name: Pull hello-world image to make sure ignore_image test succeeds
# If the image isn't there, it will pull it and return 'changed'.
docker_image:
name: hello-world
pull: true
- name: image
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
register: image_1
- name: image (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
register: image_2
- name: ignore_image
docker_container:
image: hello-world
ignore_image: yes
name: "{{ cname }}"
state: started
register: ignore_image
- name: image change
docker_container:
image: hello-world
name: "{{ cname }}"
state: started
force_kill: yes
register: image_change
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- image_1 is changed
- image_2 is not changed
- ignore_image is not changed
- image_change is changed
####################################################################
## ipc_mode ########################################################
####################################################################
- name: start helpers
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ container_name }}"
state: started
ipc_mode: shareable
loop:
- "{{ cname_h1 }}"
loop_control:
loop_var: container_name
- name: ipc_mode
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
ipc_mode: "container:{{ cname_h1 }}"
# ipc_mode: shareable
register: ipc_mode_1
- name: ipc_mode (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
ipc_mode: "container:{{ cname_h1 }}"
# ipc_mode: shareable
register: ipc_mode_2
- name: ipc_mode (change)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
ipc_mode: private
force_kill: yes
register: ipc_mode_3
- name: cleanup
docker_container:
name: "{{ container_name }}"
state: absent
force_kill: yes
loop:
- "{{ cname }}"
- "{{ cname_h1 }}"
loop_control:
loop_var: container_name
diff: no
- assert:
that:
- ipc_mode_1 is changed
- ipc_mode_2 is not changed
- ipc_mode_3 is changed
####################################################################
## kernel_memory ###################################################
####################################################################
- name: kernel_memory
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
kernel_memory: 8M
state: started
register: kernel_memory_1
ignore_errors: yes
- name: kernel_memory (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
kernel_memory: 8M
state: started
register: kernel_memory_2
ignore_errors: yes
- name: kernel_memory (change)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
kernel_memory: 6M
state: started
force_kill: yes
register: kernel_memory_3
ignore_errors: yes
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
ignore_errors: yes
- assert:
that:
- kernel_memory_1 is changed
- kernel_memory_2 is not changed
- kernel_memory_3 is changed
when: kernel_memory_1 is not failed or 'kernel memory accounting disabled in this runc build' not in kernel_memory_1.msg
####################################################################
## kill_signal #####################################################
####################################################################
# TODO: - kill_signal
####################################################################
## labels ##########################################################
####################################################################
- name: labels
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
labels:
ansible.test.1: hello
ansible.test.2: world
register: labels_1
- name: labels (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
labels:
ansible.test.2: world
ansible.test.1: hello
register: labels_2
- name: labels (fewer labels)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
labels:
ansible.test.1: hello
register: labels_3
- name: labels (more labels)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
labels:
ansible.test.1: hello
ansible.test.3: ansible
force_kill: yes
register: labels_4
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- labels_1 is changed
- labels_2 is not changed
- labels_3 is not changed
- labels_4 is changed
####################################################################
## links ###########################################################
####################################################################
- name: start helpers
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ container_name }}"
state: started
loop:
- "{{ cname_h1 }}"
- "{{ cname_h2 }}"
- "{{ cname_h3 }}"
loop_control:
loop_var: container_name
- name: links
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
links:
- "{{ cname_h1 }}:test1"
- "{{ cname_h2 }}:test2"
register: links_1
- name: links (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
links:
- "{{ cname_h2 }}:test2"
- "{{ cname_h1 }}:test1"
register: links_2
- name: links (fewer links)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
links:
- "{{ cname_h1 }}:test1"
register: links_3
- name: links (more links)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
links:
- "{{ cname_h1 }}:test1"
- "{{ cname_h3 }}:test3"
force_kill: yes
register: links_4
- name: cleanup
docker_container:
name: "{{ container_name }}"
state: absent
force_kill: yes
loop:
- "{{ cname }}"
- "{{ cname_h1 }}"
- "{{ cname_h2 }}"
- "{{ cname_h3 }}"
loop_control:
loop_var: container_name
diff: no
- assert:
that:
- links_1 is changed
- links_2 is not changed
- links_3 is not changed
- links_4 is changed
####################################################################
## log_driver ######################################################
####################################################################
- name: log_driver
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
log_driver: json-file
register: log_driver_1
- name: log_driver (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
log_driver: json-file
register: log_driver_2
- name: log_driver (change)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
log_driver: syslog
force_kill: yes
register: log_driver_3
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- log_driver_1 is changed
- log_driver_2 is not changed
- log_driver_3 is changed
####################################################################
## log_options #####################################################
####################################################################
- name: log_options
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
log_driver: json-file
log_options:
labels: production_status
env: os,customer
max-file: 5
register: log_options_1
- name: log_options (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
log_driver: json-file
log_options:
env: os,customer
labels: production_status
max-file: 5
register: log_options_2
- name: log_options (fewer log options)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
log_driver: json-file
log_options:
labels: production_status
register: log_options_3
- name: log_options (more log options)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
log_driver: json-file
log_options:
labels: production_status
max-size: 10m
force_kill: yes
register: log_options_4
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- log_options_1 is changed
- log_options_2 is not changed
- "'Non-string value found for log_options option \\'max-file\\'. The value is automatically converted to \\'5\\'. If this is not correct, or you want to
avoid such warnings, please quote the value.' in log_options_2.warnings"
- log_options_3 is not changed
- log_options_4 is changed
####################################################################
## mac_address #####################################################
####################################################################
- name: mac_address
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
mac_address: 92:d0:c6:0a:29:33
state: started
register: mac_address_1
- name: mac_address (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
mac_address: 92:d0:c6:0a:29:33
state: started
register: mac_address_2
- name: mac_address (change)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
mac_address: 92:d0:c6:0a:29:44
state: started
force_kill: yes
register: mac_address_3
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- mac_address_1 is changed
- mac_address_2 is not changed
- mac_address_3 is changed
####################################################################
## memory ##########################################################
####################################################################
- name: memory
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
memory: 64M
state: started
register: memory_1
- name: memory (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
memory: 64M
state: started
register: memory_2
- name: memory (change)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
memory: 48M
state: started
force_kill: yes
register: memory_3
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- memory_1 is changed
- memory_2 is not changed
- memory_3 is changed
####################################################################
## memory_reservation ##############################################
####################################################################
- name: memory_reservation
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
memory_reservation: 64M
state: started
register: memory_reservation_1
- name: memory_reservation (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
memory_reservation: 64M
state: started
register: memory_reservation_2
- name: memory_reservation (change)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
memory_reservation: 48M
state: started
force_kill: yes
register: memory_reservation_3
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- memory_reservation_1 is changed
- memory_reservation_2 is not changed
- memory_reservation_3 is changed
####################################################################
## memory_swap #####################################################
####################################################################
- name: memory_swap
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
# Docker daemon does not accept memory_swap if memory is not specified
memory: 32M
memory_swap: 64M
state: started
debug: yes
register: memory_swap_1
- name: memory_swap (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
# Docker daemon does not accept memory_swap if memory is not specified
memory: 32M
memory_swap: 64M
state: started
debug: yes
register: memory_swap_2
- name: memory_swap (change)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
# Docker daemon does not accept memory_swap if memory is not specified
memory: 32M
memory_swap: 48M
state: started
force_kill: yes
debug: yes
register: memory_swap_3
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- memory_swap_1 is changed
# Sometimes (in particular during integration tests, e.g. when not running
# on a proper VM), memory_swap cannot be set and ends up as -1 afterwards.
- memory_swap_2 is not changed or memory_swap_2.container.HostConfig.MemorySwap == -1
- memory_swap_3 is changed
- debug: var=memory_swap_1
when: memory_swap_2 is changed
- debug: var=memory_swap_2
when: memory_swap_2 is changed
- debug: var=memory_swap_3
when: memory_swap_2 is changed
####################################################################
## memory_swappiness ###############################################
####################################################################
- name: memory_swappiness
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
memory_swappiness: 40
state: started
register: memory_swappiness_1
- name: memory_swappiness (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
memory_swappiness: 40
state: started
register: memory_swappiness_2
- name: memory_swappiness (change)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
memory_swappiness: 60
state: started
force_kill: yes
register: memory_swappiness_3
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- memory_swappiness_1 is changed
- memory_swappiness_2 is not changed
- memory_swappiness_3 is changed
####################################################################
## oom_killer ######################################################
####################################################################
- name: oom_killer
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
oom_killer: yes
state: started
register: oom_killer_1
- name: oom_killer (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
oom_killer: yes
state: started
register: oom_killer_2
- name: oom_killer (change)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
oom_killer: no
state: started
force_kill: yes
register: oom_killer_3
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- oom_killer_1 is changed
- oom_killer_2 is not changed
- oom_killer_3 is changed
####################################################################
## oom_score_adj ###################################################
####################################################################
- name: oom_score_adj
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
oom_score_adj: 5
state: started
register: oom_score_adj_1
- name: oom_score_adj (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
oom_score_adj: 5
state: started
register: oom_score_adj_2
- name: oom_score_adj (change)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
oom_score_adj: 7
state: started
force_kill: yes
register: oom_score_adj_3
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- oom_score_adj_1 is changed
- oom_score_adj_2 is not changed
- oom_score_adj_3 is changed
####################################################################
## output_logs #####################################################
####################################################################
# TODO: - output_logs
####################################################################
## paused ##########################################################
####################################################################
- name: paused
docker_container:
image: alpine:3.8
command: "/bin/sh -c 'sleep 10m'"
name: "{{ cname }}"
state: started
paused: yes
force_kill: yes
register: paused_1
- name: inspect paused
command: "docker inspect -f {% raw %}'{{.State.Status}} {{.State.Paused}}'{% endraw %} {{ cname }}"
register: paused_2
- name: paused (idempotency)
docker_container:
image: alpine:3.8
command: "/bin/sh -c 'sleep 10m'"
name: "{{ cname }}"
state: started
paused: yes
force_kill: yes
register: paused_3
- name: paused (continue)
docker_container:
image: alpine:3.8
command: "/bin/sh -c 'sleep 10m'"
name: "{{ cname }}"
state: started
paused: no
force_kill: yes
register: paused_4
- name: inspect unpaused
command: "docker inspect -f {% raw %}'{{.State.Status}} {{.State.Paused}}'{% endraw %} {{ cname }}"
register: paused_5
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- paused_1 is changed
- 'paused_2.stdout == "paused true"'
- paused_3 is not changed
- paused_4 is changed
- 'paused_5.stdout == "running false"'
####################################################################
## pid_mode ########################################################
####################################################################
- name: start helpers
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname_h1 }}"
state: started
register: pid_mode_helper
- name: pid_mode
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
pid_mode: "container:{{ pid_mode_helper.container.Id }}"
register: pid_mode_1
ignore_errors: yes
# docker-py < 2.0 does not support "arbitrary" pid_mode values
- name: pid_mode (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
pid_mode: "container:{{ cname_h1 }}"
register: pid_mode_2
ignore_errors: yes
# docker-py < 2.0 does not support "arbitrary" pid_mode values
- name: pid_mode (change)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
pid_mode: host
force_kill: yes
register: pid_mode_3
- name: cleanup
docker_container:
name: "{{ container_name }}"
state: absent
force_kill: yes
loop:
- "{{ cname }}"
- "{{ cname_h1 }}"
loop_control:
loop_var: container_name
diff: no
- assert:
that:
- pid_mode_1 is changed
- pid_mode_2 is not changed
- pid_mode_3 is changed
when: docker_py_version is version('2.0.0', '>=')
- assert:
that:
- pid_mode_1 is failed
- pid_mode_2 is failed
- pid_mode_3 is changed
when: docker_py_version is version('2.0.0', '<')
####################################################################
## pids_limit ######################################################
####################################################################
- name: pids_limit
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
pids_limit: 10
register: pids_limit_1
ignore_errors: yes
- name: pids_limit (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
pids_limit: 10
register: pids_limit_2
ignore_errors: yes
- name: pids_limit (changed)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
pids_limit: 20
force_kill: yes
register: pids_limit_3
ignore_errors: yes
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- pids_limit_1 is changed
- pids_limit_2 is not changed
- pids_limit_3 is changed
when: docker_py_version is version('1.10.0', '>=')
- assert:
that:
- pids_limit_1 is failed
- "('version is ' ~ docker_py_version ~ ' ') in pids_limit_1.msg"
- "'Minimum version required is 1.10.0 ' in pids_limit_1.msg"
when: docker_py_version is version('1.10.0', '<')
####################################################################
## privileged ######################################################
####################################################################
- name: privileged
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
privileged: yes
state: started
register: privileged_1
- name: privileged (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
privileged: yes
state: started
register: privileged_2
- name: privileged (change)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
privileged: no
state: started
force_kill: yes
register: privileged_3
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- privileged_1 is changed
- privileged_2 is not changed
- privileged_3 is changed
####################################################################
## published_ports #################################################
####################################################################
- name: published_ports
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
published_ports:
- '9001'
- '9002'
register: published_ports_1
- name: published_ports (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
published_ports:
- '9002'
- '9001'
register: published_ports_2
- name: published_ports (fewer published_ports)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
published_ports:
- '9002'
register: published_ports_3
- name: published_ports (more published_ports)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
published_ports:
- '9002'
- '9003'
force_kill: yes
register: published_ports_4
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- published_ports_1 is changed
- published_ports_2 is not changed
- published_ports_3 is not changed
- published_ports_4 is changed
####################################################################
## pull ############################################################
####################################################################
# TODO: - pull
####################################################################
## read_only #######################################################
####################################################################
- name: read_only
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
read_only: yes
state: started
register: read_only_1
- name: read_only (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
read_only: yes
state: started
register: read_only_2
- name: read_only (change)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
read_only: no
state: started
force_kill: yes
register: read_only_3
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- read_only_1 is changed
- read_only_2 is not changed
- read_only_3 is changed
####################################################################
## restart_policy ##################################################
####################################################################
- name: restart_policy
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
restart_policy: always
state: started
register: restart_policy_1
- name: restart_policy (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
restart_policy: always
state: started
register: restart_policy_2
- name: restart_policy (change)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
restart_policy: unless-stopped
state: started
force_kill: yes
register: restart_policy_3
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- restart_policy_1 is changed
- restart_policy_2 is not changed
- restart_policy_3 is changed
####################################################################
## restart_retries #################################################
####################################################################
- name: restart_retries
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
restart_policy: on-failure
restart_retries: 5
state: started
register: restart_retries_1
- name: restart_retries (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
restart_policy: on-failure
restart_retries: 5
state: started
register: restart_retries_2
- name: restart_retries (change)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
restart_policy: on-failure
restart_retries: 2
state: started
force_kill: yes
register: restart_retries_3
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- restart_retries_1 is changed
- restart_retries_2 is not changed
- restart_retries_3 is changed
####################################################################
## runtime #########################################################
####################################################################
- name: runtime
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
runtime: runc
state: started
register: runtime_1
ignore_errors: yes
- name: runtime (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
runtime: runc
state: started
register: runtime_2
ignore_errors: yes
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- runtime_1 is changed
- runtime_2 is not changed
when: docker_py_version is version('2.4.0', '>=')
- assert:
that:
- runtime_1 is failed
- "('version is ' ~ docker_py_version ~ ' ') in runtime_1.msg"
- "'Minimum version required is 2.4.0 ' in runtime_1.msg"
when: docker_py_version is version('2.4.0', '<')
####################################################################
## security_opts ###################################################
####################################################################
# In case some of the options stop working, here are some more
# options which *currently* work with all integration test targets:
# no-new-privileges
# label:disable
# label=disable
# label:level:s0:c100,c200
# label=level:s0:c100,c200
# label:type:svirt_apache_t
# label=type:svirt_apache_t
# label:user:root
# label=user:root
# seccomp:unconfined
# seccomp=unconfined
# apparmor:docker-default
# apparmor=docker-default
- name: security_opts
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
security_opts:
- "label:level:s0:c100,c200"
- "no-new-privileges"
register: security_opts_1
- name: security_opts (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
security_opts:
- "no-new-privileges"
- "label:level:s0:c100,c200"
register: security_opts_2
- name: security_opts (fewer security options)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
security_opts:
- "no-new-privileges"
register: security_opts_3
- name: security_opts (more security options)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
security_opts:
- "label:disable"
- "no-new-privileges"
force_kill: yes
register: security_opts_4
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- security_opts_1 is changed
- security_opts_2 is not changed
- security_opts_3 is not changed
- security_opts_4 is changed
####################################################################
## shm_size ########################################################
####################################################################
- name: shm_size
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
shm_size: 96M
state: started
register: shm_size_1
- name: shm_size (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
shm_size: 96M
state: started
register: shm_size_2
- name: shm_size (change)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
shm_size: 75M
state: started
force_kill: yes
register: shm_size_3
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- shm_size_1 is changed
- shm_size_2 is not changed
- shm_size_3 is changed
####################################################################
## stop_signal #####################################################
####################################################################
- name: stop_signal
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
stop_signal: "30"
state: started
register: stop_signal_1
- name: stop_signal (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
stop_signal: "30"
state: started
register: stop_signal_2
- name: stop_signal (change)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
stop_signal: "9"
state: started
force_kill: yes
register: stop_signal_3
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- stop_signal_1 is changed
- stop_signal_2 is not changed
- stop_signal_3 is changed
####################################################################
## stop_timeout ####################################################
####################################################################
- name: stop_timeout
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
stop_timeout: 2
state: started
register: stop_timeout_1
- name: stop_timeout (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
stop_timeout: 2
state: started
register: stop_timeout_2
- name: stop_timeout (no change)
# stop_timeout changes are ignored by default
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
stop_timeout: 1
state: started
register: stop_timeout_3
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- stop_timeout_1 is changed
- stop_timeout_2 is not changed
- stop_timeout_3 is not changed
####################################################################
## sysctls #########################################################
####################################################################
# In case some of the options stop working, here are some more
# options which *currently* work with all integration test targets:
# net.ipv4.conf.default.log_martians: 1
# net.ipv4.conf.default.secure_redirects: 0
# net.ipv4.conf.default.send_redirects: 0
# net.ipv4.conf.all.log_martians: 1
# net.ipv4.conf.all.accept_redirects: 0
# net.ipv4.conf.all.secure_redirects: 0
# net.ipv4.conf.all.send_redirects: 0
- name: sysctls
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
sysctls:
net.ipv4.icmp_echo_ignore_all: 1
net.ipv4.ip_forward: 1
register: sysctls_1
ignore_errors: yes
- name: sysctls (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
sysctls:
net.ipv4.ip_forward: 1
net.ipv4.icmp_echo_ignore_all: 1
register: sysctls_2
ignore_errors: yes
- name: sysctls (less sysctls)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
sysctls:
net.ipv4.icmp_echo_ignore_all: 1
register: sysctls_3
ignore_errors: yes
- name: sysctls (more sysctls)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
sysctls:
net.ipv4.icmp_echo_ignore_all: 1
net.ipv6.conf.default.accept_redirects: 0
force_kill: yes
register: sysctls_4
ignore_errors: yes
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- sysctls_1 is changed
- sysctls_2 is not changed
- sysctls_3 is not changed
- sysctls_4 is changed
when: docker_py_version is version('1.10.0', '>=')
- assert:
that:
- sysctls_1 is failed
- "('version is ' ~ docker_py_version ~ ' ') in sysctls_1.msg"
- "'Minimum version required is 1.10.0 ' in sysctls_1.msg"
when: docker_py_version is version('1.10.0', '<')
####################################################################
## tmpfs ###########################################################
####################################################################
- name: tmpfs
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
tmpfs:
- "/test1:rw,noexec,nosuid,size=65536k"
- "/test2:rw,noexec,nosuid,size=65536k"
register: tmpfs_1
- name: tmpfs (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
tmpfs:
- "/test2:rw,noexec,nosuid,size=65536k"
- "/test1:rw,noexec,nosuid,size=65536k"
register: tmpfs_2
- name: tmpfs (less tmpfs)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
tmpfs:
- "/test1:rw,noexec,nosuid,size=65536k"
register: tmpfs_3
- name: tmpfs (more tmpfs)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
tmpfs:
- "/test1:rw,noexec,nosuid,size=65536k"
- "/test3:rw,noexec,nosuid,size=65536k"
force_kill: yes
register: tmpfs_4
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- tmpfs_1 is changed
- tmpfs_2 is not changed
- tmpfs_3 is not changed
- tmpfs_4 is changed
####################################################################
## trust_image_content #############################################
####################################################################
# TODO: - trust_image_content
####################################################################
## tty #############################################################
####################################################################
- name: tty
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
tty: yes
state: started
register: tty_1
- name: tty (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
tty: yes
state: started
register: tty_2
- name: tty (change)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
tty: no
state: started
force_kill: yes
register: tty_3
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- tty_1 is changed
- tty_2 is not changed
- tty_3 is changed
####################################################################
## ulimits #########################################################
####################################################################
- name: ulimits
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
ulimits:
- "nofile:1234:1234"
- "nproc:3:6"
register: ulimits_1
- name: ulimits (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
ulimits:
- "nproc:3:6"
- "nofile:1234:1234"
register: ulimits_2
- name: ulimits (less ulimits)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
ulimits:
- "nofile:1234:1234"
register: ulimits_3
- name: ulimits (more ulimits)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
ulimits:
- "nofile:1234:1234"
- "sigpending:100:200"
force_kill: yes
register: ulimits_4
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- ulimits_1 is changed
- ulimits_2 is not changed
- ulimits_3 is not changed
- ulimits_4 is changed
####################################################################
## user ############################################################
####################################################################
- name: user
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
user: nobody
state: started
register: user_1
- name: user (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
user: nobody
state: started
register: user_2
- name: user (change)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
user: root
state: started
force_kill: yes
register: user_3
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- user_1 is changed
- user_2 is not changed
- user_3 is changed
####################################################################
## userns_mode #####################################################
####################################################################
- name: userns_mode
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
userns_mode: host
state: started
register: userns_mode_1
ignore_errors: yes
- name: userns_mode (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
userns_mode: host
state: started
register: userns_mode_2
ignore_errors: yes
- name: userns_mode (change)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
userns_mode: ""
state: started
force_kill: yes
register: userns_mode_3
ignore_errors: yes
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- userns_mode_1 is changed
- userns_mode_2 is not changed
- userns_mode_3 is changed
when: docker_py_version is version('1.10.0', '>=')
- assert:
that:
- userns_mode_1 is failed
- "('version is ' ~ docker_py_version ~ ' ') in userns_mode_1.msg"
- "'Minimum version required is 1.10.0 ' in userns_mode_1.msg"
when: docker_py_version is version('1.10.0', '<')
####################################################################
## uts #############################################################
####################################################################
- name: uts
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
uts: host
state: started
register: uts_1
ignore_errors: yes
- name: uts (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
uts: host
state: started
register: uts_2
ignore_errors: yes
- name: uts (change)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
uts: ""
state: started
force_kill: yes
register: uts_3
ignore_errors: yes
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- uts_1 is changed
- uts_2 is not changed
- uts_3 is changed
when: docker_py_version is version('3.5.0', '>=')
- assert:
that:
- uts_1 is failed
- "('version is ' ~ docker_py_version ~ ' ') in uts_1.msg"
- "'Minimum version required is 3.5.0 ' in uts_1.msg"
when: docker_py_version is version('3.5.0', '<')
####################################################################
## keep_volumes ####################################################
####################################################################
# TODO: - keep_volumes
####################################################################
## volume_driver ###################################################
####################################################################
- name: volume_driver
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
volume_driver: local
state: started
register: volume_driver_1
- name: volume_driver (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
volume_driver: local
state: started
register: volume_driver_2
- name: volume_driver (change)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
volume_driver: /
state: started
force_kill: yes
register: volume_driver_3
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- volume_driver_1 is changed
- volume_driver_2 is not changed
- volume_driver_3 is changed
####################################################################
## volumes #########################################################
####################################################################
- name: volumes
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
volumes:
- "/tmp:/tmp"
- "/:/whatever:rw,z"
register: volumes_1
- name: volumes (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
volumes:
- "/:/whatever:rw,z"
- "/tmp:/tmp"
register: volumes_2
- name: volumes (less volumes)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
volumes:
- "/tmp:/tmp"
register: volumes_3
- name: volumes (more volumes)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
volumes:
- "/tmp:/tmp"
- "/tmp:/somewhereelse:ro,Z"
force_kill: yes
register: volumes_4
- name: volumes (different modes)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
volumes:
- "/tmp:/tmp"
- "/tmp:/somewhereelse:ro"
force_kill: yes
register: volumes_5
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- volumes_1 is changed
- volumes_2 is not changed
- volumes_3 is not changed
- volumes_4 is changed
- volumes_5 is changed
####################################################################
## volumes_from ####################################################
####################################################################
- name: start helpers
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ container_name }}"
state: started
volumes:
- "{{ '/tmp:/tmp' if container_name == cname_h1 else '/:/whatever:ro' }}"
loop:
- "{{ cname_h1 }}"
- "{{ cname_h2 }}"
loop_control:
loop_var: container_name
- name: volumes_from
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
volumes_from: "{{ cname_h1 }}"
register: volumes_from_1
- name: volumes_from (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
volumes_from: "{{ cname_h1 }}"
register: volumes_from_2
- name: volumes_from (change)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
volumes_from: "{{ cname_h2 }}"
force_kill: yes
register: volumes_from_3
- name: cleanup
docker_container:
name: "{{ container_name }}"
state: absent
force_kill: yes
loop:
- "{{ cname }}"
- "{{ cname_h1 }}"
- "{{ cname_h2 }}"
loop_control:
loop_var: container_name
diff: no
- assert:
that:
- volumes_from_1 is changed
- volumes_from_2 is not changed
- volumes_from_3 is changed
####################################################################
## working_dir #####################################################
####################################################################
- name: working_dir
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
working_dir: /tmp
state: started
register: working_dir_1
- name: working_dir (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
working_dir: /tmp
state: started
register: working_dir_2
- name: working_dir (change)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
working_dir: /
state: started
force_kill: yes
register: working_dir_3
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- working_dir_1 is changed
- working_dir_2 is not changed
- working_dir_3 is changed
####################################################################
####################################################################
####################################################################
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 60,000 |
Move CLI script impls under ansible package
|
##### SUMMARY
To ensure that `ansible-test` as a first-class-citizen can always locate the correct (and unmodified) Ansible scripts for its invocation in all circumstances, we need to move them under `lib/ansible` and change the top-level bin dir to symlinks (which setuptools will copy/modify on install and hacking/env-setup will just use).
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
bin/ansible
##### ADDITIONAL INFORMATION
|
https://github.com/ansible/ansible/issues/60000
|
https://github.com/ansible/ansible/pull/60004
|
97d36881e21dd6bd714cae0a5af020d20bedfafb
|
8d1f658ce46e74f537bf1df610ddef4b2bfb035f
| 2019-08-02T16:55:21Z |
python
| 2019-08-02T18:27:02Z |
bin/ansible
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# (c) 2012, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# PYTHON_ARGCOMPLETE_OK
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
__requires__ = ['ansible']
import os
import shutil
import sys
import traceback
from ansible import context
from ansible.errors import AnsibleError, AnsibleOptionsError, AnsibleParserError
from ansible.module_utils._text import to_text
# Used for determining if the system is running a new enough python version
# and should only restrict on our documented minimum versions
_PY3_MIN = sys.version_info[:2] >= (3, 5)
_PY2_MIN = (2, 6) <= sys.version_info[:2] < (3,)
_PY_MIN = _PY3_MIN or _PY2_MIN
if not _PY_MIN:
raise SystemExit('ERROR: Ansible requires a minimum of Python2 version 2.6 or Python3 version 3.5. Current version: %s' % ''.join(sys.version.splitlines()))
class LastResort(object):
# OUTPUT OF LAST RESORT
def display(self, msg, log_only=None):
print(msg, file=sys.stderr)
def error(self, msg, wrap_text=None):
print(msg, file=sys.stderr)
if __name__ == '__main__':
display = LastResort()
try: # bad ANSIBLE_CONFIG or config options can force ugly stacktrace
import ansible.constants as C
from ansible.utils.display import Display
except AnsibleOptionsError as e:
display.error(to_text(e), wrap_text=False)
sys.exit(5)
cli = None
me = os.path.basename(sys.argv[0])
try:
display = Display()
display.debug("starting run")
sub = None
target = me.split('-')
if target[-1][0].isdigit():
# Remove any version or python version info as downstreams
# sometimes add that
target = target[:-1]
if len(target) > 1:
sub = target[1]
myclass = "%sCLI" % sub.capitalize()
elif target[0] == 'ansible':
sub = 'adhoc'
myclass = 'AdHocCLI'
else:
raise AnsibleError("Unknown Ansible alias: %s" % me)
try:
mycli = getattr(__import__("ansible.cli.%s" % sub, fromlist=[myclass]), myclass)
except ImportError as e:
# ImportError members have changed in py3
if 'msg' in dir(e):
msg = e.msg
else:
msg = e.message
if msg.endswith(' %s' % sub):
raise AnsibleError("Ansible sub-program not implemented: %s" % me)
else:
raise
try:
args = [to_text(a, errors='surrogate_or_strict') for a in sys.argv]
except UnicodeError:
display.error('Command line args are not in utf-8, unable to continue. Ansible currently only understands utf-8')
display.display(u"The full traceback was:\n\n%s" % to_text(traceback.format_exc()))
exit_code = 6
else:
cli = mycli(args)
exit_code = cli.run()
except AnsibleOptionsError as e:
cli.parser.print_help()
display.error(to_text(e), wrap_text=False)
exit_code = 5
except AnsibleParserError as e:
display.error(to_text(e), wrap_text=False)
exit_code = 4
# TQM takes care of these, but leaving comment to reserve the exit codes
# except AnsibleHostUnreachable as e:
# display.error(str(e))
# exit_code = 3
# except AnsibleHostFailed as e:
# display.error(str(e))
# exit_code = 2
except AnsibleError as e:
display.error(to_text(e), wrap_text=False)
exit_code = 1
except KeyboardInterrupt:
display.error("User interrupted execution")
exit_code = 99
except Exception as e:
if C.DEFAULT_DEBUG:
            # Show raw stacktraces in debug mode. This also allows pdb to
            # enter post-mortem mode.
raise
have_cli_options = bool(context.CLIARGS)
display.error("Unexpected Exception, this is probably a bug: %s" % to_text(e), wrap_text=False)
if not have_cli_options or have_cli_options and context.CLIARGS['verbosity'] > 2:
log_only = False
if hasattr(e, 'orig_exc'):
display.vvv('\nexception type: %s' % to_text(type(e.orig_exc)))
why = to_text(e.orig_exc)
if to_text(e) != why:
display.vvv('\noriginal msg: %s' % why)
else:
display.display("to see the full traceback, use -vvv")
log_only = True
display.display(u"the full traceback was:\n\n%s" % to_text(traceback.format_exc()), log_only=log_only)
exit_code = 250
sys.exit(exit_code)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 60,000 |
Move CLI script impls under ansible package
|
##### SUMMARY
To ensure that `ansible-test` as a first-class-citizen can always locate the correct (and unmodified) Ansible scripts for its invocation in all circumstances, we need to move them under `lib/ansible` and change the top-level bin dir to symlinks (which setuptools will copy/modify on install and hacking/env-setup will just use).
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
bin/ansible
##### ADDITIONAL INFORMATION
|
https://github.com/ansible/ansible/issues/60000
|
https://github.com/ansible/ansible/pull/60004
|
97d36881e21dd6bd714cae0a5af020d20bedfafb
|
8d1f658ce46e74f537bf1df610ddef4b2bfb035f
| 2019-08-02T16:55:21Z |
python
| 2019-08-02T18:27:02Z |
bin/ansible-connection
|
#!/usr/bin/env python
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
__requires__ = ['ansible']
import fcntl
import hashlib
import os
import signal
import socket
import sys
import time
import traceback
import errno
import json
from contextlib import contextmanager
from ansible import constants as C
from ansible.module_utils._text import to_bytes, to_text
from ansible.module_utils.six import PY3
from ansible.module_utils.six.moves import cPickle, StringIO
from ansible.module_utils.connection import Connection, ConnectionError, send_data, recv_data
from ansible.module_utils.service import fork_process
from ansible.parsing.ajson import AnsibleJSONEncoder, AnsibleJSONDecoder
from ansible.playbook.play_context import PlayContext
from ansible.plugins.loader import connection_loader
from ansible.utils.path import unfrackpath, makedirs_safe
from ansible.utils.display import Display
from ansible.utils.jsonrpc import JsonRpcServer
def read_stream(byte_stream):
size = int(byte_stream.readline().strip())
data = byte_stream.read(size)
if len(data) < size:
raise Exception("EOF found before data was complete")
data_hash = to_text(byte_stream.readline().strip())
if data_hash != hashlib.sha1(data).hexdigest():
raise Exception("Read {0} bytes, but data did not match checksum".format(size))
# restore escaped loose \r characters
data = data.replace(br'\r', b'\r')
return data
@contextmanager
def file_lock(lock_path):
"""
Uses contextmanager to create and release a file lock based on the
given path. This allows us to create locks using `with file_lock()`
to prevent deadlocks related to failure to unlock properly.
"""
lock_fd = os.open(lock_path, os.O_RDWR | os.O_CREAT, 0o600)
fcntl.lockf(lock_fd, fcntl.LOCK_EX)
yield
fcntl.lockf(lock_fd, fcntl.LOCK_UN)
os.close(lock_fd)
class ConnectionProcess(object):
'''
The connection process wraps around a Connection object that manages
the connection to a remote device that persists over the playbook
'''
def __init__(self, fd, play_context, socket_path, original_path, ansible_playbook_pid=None):
self.play_context = play_context
self.socket_path = socket_path
self.original_path = original_path
self.fd = fd
self.exception = None
self.srv = JsonRpcServer()
self.sock = None
self.connection = None
self._ansible_playbook_pid = ansible_playbook_pid
def start(self, variables):
try:
messages = list()
result = {}
messages.append(('vvvv', 'control socket path is %s' % self.socket_path))
# If this is a relative path (~ gets expanded later) then plug the
# key's path on to the directory we originally came from, so we can
# find it now that our cwd is /
if self.play_context.private_key_file and self.play_context.private_key_file[0] not in '~/':
self.play_context.private_key_file = os.path.join(self.original_path, self.play_context.private_key_file)
self.connection = connection_loader.get(self.play_context.connection, self.play_context, '/dev/null',
ansible_playbook_pid=self._ansible_playbook_pid)
self.connection.set_options(var_options=variables)
self.connection._connect()
self.connection._socket_path = self.socket_path
self.srv.register(self.connection)
messages.extend([('vvvv', msg) for msg in sys.stdout.getvalue().splitlines()])
messages.append(('vvvv', 'connection to remote device started successfully'))
self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
self.sock.bind(self.socket_path)
self.sock.listen(1)
messages.append(('vvvv', 'local domain socket listeners started successfully'))
except Exception as exc:
messages.extend(self.connection.pop_messages())
result['error'] = to_text(exc)
result['exception'] = traceback.format_exc()
finally:
result['messages'] = messages
self.fd.write(json.dumps(result, cls=AnsibleJSONEncoder))
self.fd.close()
def run(self):
try:
while self.connection.connected:
signal.signal(signal.SIGALRM, self.connect_timeout)
signal.signal(signal.SIGTERM, self.handler)
signal.alarm(self.connection.get_option('persistent_connect_timeout'))
self.exception = None
(s, addr) = self.sock.accept()
signal.alarm(0)
signal.signal(signal.SIGALRM, self.command_timeout)
while True:
data = recv_data(s)
if not data:
break
log_messages = self.connection.get_option('persistent_log_messages')
if log_messages:
display.display("jsonrpc request: %s" % data, log_only=True)
signal.alarm(self.connection.get_option('persistent_command_timeout'))
resp = self.srv.handle_request(data)
signal.alarm(0)
if log_messages:
display.display("jsonrpc response: %s" % resp, log_only=True)
send_data(s, to_bytes(resp))
s.close()
except Exception as e:
# socket.accept() will raise EINTR if the socket.close() is called
if hasattr(e, 'errno'):
if e.errno != errno.EINTR:
self.exception = traceback.format_exc()
else:
self.exception = traceback.format_exc()
finally:
# allow time for any exception message sent over the socket to be received at the other end before shutting down
time.sleep(0.1)
# when done, close the connection properly and cleanup the socket file so it can be recreated
self.shutdown()
def connect_timeout(self, signum, frame):
msg = 'persistent connection idle timeout triggered, timeout value is %s secs.\nSee the timeout setting options in the Network Debug and ' \
'Troubleshooting Guide.' % self.connection.get_option('persistent_connect_timeout')
display.display(msg, log_only=True)
raise Exception(msg)
def command_timeout(self, signum, frame):
msg = 'command timeout triggered, timeout value is %s secs.\nSee the timeout setting options in the Network Debug and Troubleshooting Guide.'\
% self.connection.get_option('persistent_command_timeout')
display.display(msg, log_only=True)
raise Exception(msg)
def handler(self, signum, frame):
msg = 'signal handler called with signal %s.' % signum
display.display(msg, log_only=True)
raise Exception(msg)
def shutdown(self):
""" Shuts down the local domain socket
"""
lock_path = unfrackpath("%s/.ansible_pc_lock_%s" % os.path.split(self.socket_path))
if os.path.exists(self.socket_path):
try:
if self.sock:
self.sock.close()
if self.connection:
self.connection.close()
except Exception:
pass
finally:
if os.path.exists(self.socket_path):
os.remove(self.socket_path)
setattr(self.connection, '_socket_path', None)
setattr(self.connection, '_connected', False)
if os.path.exists(lock_path):
os.remove(lock_path)
display.display('shutdown complete', log_only=True)
def main():
""" Called to initiate the connect to the remote device
"""
rc = 0
result = {}
messages = list()
socket_path = None
# Need stdin as a byte stream
if PY3:
stdin = sys.stdin.buffer
else:
stdin = sys.stdin
# Note: update the below log capture code after Display.display() is refactored.
saved_stdout = sys.stdout
sys.stdout = StringIO()
try:
# read the play context data via stdin, which means unpickling it
vars_data = read_stream(stdin)
init_data = read_stream(stdin)
if PY3:
pc_data = cPickle.loads(init_data, encoding='bytes')
variables = cPickle.loads(vars_data, encoding='bytes')
else:
pc_data = cPickle.loads(init_data)
variables = cPickle.loads(vars_data)
play_context = PlayContext()
play_context.deserialize(pc_data)
display.verbosity = play_context.verbosity
except Exception as e:
rc = 1
result.update({
'error': to_text(e),
'exception': traceback.format_exc()
})
if rc == 0:
ssh = connection_loader.get('ssh', class_only=True)
ansible_playbook_pid = sys.argv[1]
cp = ssh._create_control_path(play_context.remote_addr, play_context.port, play_context.remote_user, play_context.connection, ansible_playbook_pid)
# create the persistent connection dir if need be and create the paths
# which we will be using later
tmp_path = unfrackpath(C.PERSISTENT_CONTROL_PATH_DIR)
makedirs_safe(tmp_path)
socket_path = unfrackpath(cp % dict(directory=tmp_path))
lock_path = unfrackpath("%s/.ansible_pc_lock_%s" % os.path.split(socket_path))
with file_lock(lock_path):
if not os.path.exists(socket_path):
messages.append(('vvvv', 'local domain socket does not exist, starting it'))
original_path = os.getcwd()
r, w = os.pipe()
pid = fork_process()
if pid == 0:
try:
os.close(r)
wfd = os.fdopen(w, 'w')
process = ConnectionProcess(wfd, play_context, socket_path, original_path, ansible_playbook_pid)
process.start(variables)
except Exception:
messages.append(('error', traceback.format_exc()))
rc = 1
if rc == 0:
process.run()
else:
process.shutdown()
sys.exit(rc)
else:
os.close(w)
rfd = os.fdopen(r, 'r')
data = json.loads(rfd.read(), cls=AnsibleJSONDecoder)
messages.extend(data.pop('messages'))
result.update(data)
else:
messages.append(('vvvv', 'found existing local domain socket, using it!'))
conn = Connection(socket_path)
conn.set_options(var_options=variables)
pc_data = to_text(init_data)
try:
conn.update_play_context(pc_data)
except Exception as exc:
# Only network_cli has update_play_context, so a missing method is
# not fatal, e.g. netconf
if isinstance(exc, ConnectionError) and getattr(exc, 'code', None) == -32601:
pass
else:
result.update({
'error': to_text(exc),
'exception': traceback.format_exc()
})
if os.path.exists(socket_path):
messages.extend(Connection(socket_path).pop_messages())
messages.append(('vvvv', sys.stdout.getvalue()))
result.update({
'messages': messages,
'socket_path': socket_path
})
sys.stdout = saved_stdout
if 'exception' in result:
rc = 1
sys.stderr.write(json.dumps(result, cls=AnsibleJSONEncoder))
else:
rc = 0
sys.stdout.write(json.dumps(result, cls=AnsibleJSONEncoder))
sys.exit(rc)
if __name__ == '__main__':
display = Display()
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 60,000 |
Move CLI script impls under ansible package
|
##### SUMMARY
To ensure that `ansible-test`, as a first-class citizen, can always locate the correct (and unmodified) Ansible scripts for its invocation in all circumstances, we need to move them under `lib/ansible` and turn the top-level bin dir entries into symlinks (which setuptools will copy/modify on install and hacking/env-setup will just use).
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
bin/ansible
##### ADDITIONAL INFORMATION
|
https://github.com/ansible/ansible/issues/60000
|
https://github.com/ansible/ansible/pull/60004
|
97d36881e21dd6bd714cae0a5af020d20bedfafb
|
8d1f658ce46e74f537bf1df610ddef4b2bfb035f
| 2019-08-02T16:55:21Z |
python
| 2019-08-02T18:27:02Z |
bin/ansible-connection
|
#!/usr/bin/env python
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
__requires__ = ['ansible']
import fcntl
import hashlib
import os
import signal
import socket
import sys
import time
import traceback
import errno
import json
from contextlib import contextmanager
from ansible import constants as C
from ansible.module_utils._text import to_bytes, to_text
from ansible.module_utils.six import PY3
from ansible.module_utils.six.moves import cPickle, StringIO
from ansible.module_utils.connection import Connection, ConnectionError, send_data, recv_data
from ansible.module_utils.service import fork_process
from ansible.parsing.ajson import AnsibleJSONEncoder, AnsibleJSONDecoder
from ansible.playbook.play_context import PlayContext
from ansible.plugins.loader import connection_loader
from ansible.utils.path import unfrackpath, makedirs_safe
from ansible.utils.display import Display
from ansible.utils.jsonrpc import JsonRpcServer
def read_stream(byte_stream):
size = int(byte_stream.readline().strip())
data = byte_stream.read(size)
if len(data) < size:
raise Exception("EOF found before data was complete")
data_hash = to_text(byte_stream.readline().strip())
if data_hash != hashlib.sha1(data).hexdigest():
raise Exception("Read {0} bytes, but data did not match checksum".format(size))
# restore escaped loose \r characters
data = data.replace(br'\r', b'\r')
return data
@contextmanager
def file_lock(lock_path):
"""
Uses contextmanager to create and release a file lock based on the
given path. This allows us to create locks using `with file_lock()`
to prevent deadlocks related to failure to unlock properly.
"""
lock_fd = os.open(lock_path, os.O_RDWR | os.O_CREAT, 0o600)
fcntl.lockf(lock_fd, fcntl.LOCK_EX)
yield
fcntl.lockf(lock_fd, fcntl.LOCK_UN)
os.close(lock_fd)
class ConnectionProcess(object):
'''
The connection process wraps around a Connection object that manages
the connection to a remote device that persists over the playbook run
'''
def __init__(self, fd, play_context, socket_path, original_path, ansible_playbook_pid=None):
self.play_context = play_context
self.socket_path = socket_path
self.original_path = original_path
self.fd = fd
self.exception = None
self.srv = JsonRpcServer()
self.sock = None
self.connection = None
self._ansible_playbook_pid = ansible_playbook_pid
def start(self, variables):
try:
messages = list()
result = {}
messages.append(('vvvv', 'control socket path is %s' % self.socket_path))
# If this is a relative path (~ gets expanded later) then plug the
# key's path on to the directory we originally came from, so we can
# find it now that our cwd is /
if self.play_context.private_key_file and self.play_context.private_key_file[0] not in '~/':
self.play_context.private_key_file = os.path.join(self.original_path, self.play_context.private_key_file)
self.connection = connection_loader.get(self.play_context.connection, self.play_context, '/dev/null',
ansible_playbook_pid=self._ansible_playbook_pid)
self.connection.set_options(var_options=variables)
self.connection._connect()
self.connection._socket_path = self.socket_path
self.srv.register(self.connection)
messages.extend([('vvvv', msg) for msg in sys.stdout.getvalue().splitlines()])
messages.append(('vvvv', 'connection to remote device started successfully'))
self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
self.sock.bind(self.socket_path)
self.sock.listen(1)
messages.append(('vvvv', 'local domain socket listeners started successfully'))
except Exception as exc:
messages.extend(self.connection.pop_messages())
result['error'] = to_text(exc)
result['exception'] = traceback.format_exc()
finally:
result['messages'] = messages
self.fd.write(json.dumps(result, cls=AnsibleJSONEncoder))
self.fd.close()
def run(self):
try:
while self.connection.connected:
signal.signal(signal.SIGALRM, self.connect_timeout)
signal.signal(signal.SIGTERM, self.handler)
signal.alarm(self.connection.get_option('persistent_connect_timeout'))
self.exception = None
(s, addr) = self.sock.accept()
signal.alarm(0)
signal.signal(signal.SIGALRM, self.command_timeout)
while True:
data = recv_data(s)
if not data:
break
log_messages = self.connection.get_option('persistent_log_messages')
if log_messages:
display.display("jsonrpc request: %s" % data, log_only=True)
signal.alarm(self.connection.get_option('persistent_command_timeout'))
resp = self.srv.handle_request(data)
signal.alarm(0)
if log_messages:
display.display("jsonrpc response: %s" % resp, log_only=True)
send_data(s, to_bytes(resp))
s.close()
except Exception as e:
# socket.accept() will raise EINTR if the socket.close() is called
if hasattr(e, 'errno'):
if e.errno != errno.EINTR:
self.exception = traceback.format_exc()
else:
self.exception = traceback.format_exc()
finally:
# allow time for any exception message sent over the socket to be received at the other end before shutting down
time.sleep(0.1)
# when done, close the connection properly and cleanup the socket file so it can be recreated
self.shutdown()
def connect_timeout(self, signum, frame):
msg = 'persistent connection idle timeout triggered, timeout value is %s secs.\nSee the timeout setting options in the Network Debug and ' \
'Troubleshooting Guide.' % self.connection.get_option('persistent_connect_timeout')
display.display(msg, log_only=True)
raise Exception(msg)
def command_timeout(self, signum, frame):
msg = 'command timeout triggered, timeout value is %s secs.\nSee the timeout setting options in the Network Debug and Troubleshooting Guide.'\
% self.connection.get_option('persistent_command_timeout')
display.display(msg, log_only=True)
raise Exception(msg)
def handler(self, signum, frame):
msg = 'signal handler called with signal %s.' % signum
display.display(msg, log_only=True)
raise Exception(msg)
def shutdown(self):
""" Shuts down the local domain socket
"""
lock_path = unfrackpath("%s/.ansible_pc_lock_%s" % os.path.split(self.socket_path))
if os.path.exists(self.socket_path):
try:
if self.sock:
self.sock.close()
if self.connection:
self.connection.close()
except Exception:
pass
finally:
if os.path.exists(self.socket_path):
os.remove(self.socket_path)
setattr(self.connection, '_socket_path', None)
setattr(self.connection, '_connected', False)
if os.path.exists(lock_path):
os.remove(lock_path)
display.display('shutdown complete', log_only=True)
def main():
""" Called to initiate the connect to the remote device
"""
rc = 0
result = {}
messages = list()
socket_path = None
# Need stdin as a byte stream
if PY3:
stdin = sys.stdin.buffer
else:
stdin = sys.stdin
# Note: update the below log capture code after Display.display() is refactored.
saved_stdout = sys.stdout
sys.stdout = StringIO()
try:
# read the play context data via stdin, which means unpickling it
vars_data = read_stream(stdin)
init_data = read_stream(stdin)
if PY3:
pc_data = cPickle.loads(init_data, encoding='bytes')
variables = cPickle.loads(vars_data, encoding='bytes')
else:
pc_data = cPickle.loads(init_data)
variables = cPickle.loads(vars_data)
play_context = PlayContext()
play_context.deserialize(pc_data)
display.verbosity = play_context.verbosity
except Exception as e:
rc = 1
result.update({
'error': to_text(e),
'exception': traceback.format_exc()
})
if rc == 0:
ssh = connection_loader.get('ssh', class_only=True)
ansible_playbook_pid = sys.argv[1]
cp = ssh._create_control_path(play_context.remote_addr, play_context.port, play_context.remote_user, play_context.connection, ansible_playbook_pid)
# create the persistent connection dir if need be and create the paths
# which we will be using later
tmp_path = unfrackpath(C.PERSISTENT_CONTROL_PATH_DIR)
makedirs_safe(tmp_path)
socket_path = unfrackpath(cp % dict(directory=tmp_path))
lock_path = unfrackpath("%s/.ansible_pc_lock_%s" % os.path.split(socket_path))
with file_lock(lock_path):
if not os.path.exists(socket_path):
messages.append(('vvvv', 'local domain socket does not exist, starting it'))
original_path = os.getcwd()
r, w = os.pipe()
pid = fork_process()
if pid == 0:
try:
os.close(r)
wfd = os.fdopen(w, 'w')
process = ConnectionProcess(wfd, play_context, socket_path, original_path, ansible_playbook_pid)
process.start(variables)
except Exception:
messages.append(('error', traceback.format_exc()))
rc = 1
if rc == 0:
process.run()
else:
process.shutdown()
sys.exit(rc)
else:
os.close(w)
rfd = os.fdopen(r, 'r')
data = json.loads(rfd.read(), cls=AnsibleJSONDecoder)
messages.extend(data.pop('messages'))
result.update(data)
else:
messages.append(('vvvv', 'found existing local domain socket, using it!'))
conn = Connection(socket_path)
conn.set_options(var_options=variables)
pc_data = to_text(init_data)
try:
conn.update_play_context(pc_data)
except Exception as exc:
# Only network_cli has update_play_context, so a missing method is
# not fatal, e.g. netconf
if isinstance(exc, ConnectionError) and getattr(exc, 'code', None) == -32601:
pass
else:
result.update({
'error': to_text(exc),
'exception': traceback.format_exc()
})
if os.path.exists(socket_path):
messages.extend(Connection(socket_path).pop_messages())
messages.append(('vvvv', sys.stdout.getvalue()))
result.update({
'messages': messages,
'socket_path': socket_path
})
sys.stdout = saved_stdout
if 'exception' in result:
rc = 1
sys.stderr.write(json.dumps(result, cls=AnsibleJSONEncoder))
else:
rc = 0
sys.stdout.write(json.dumps(result, cls=AnsibleJSONEncoder))
sys.exit(rc)
if __name__ == '__main__':
display = Display()
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 60,000 |
Move CLI script impls under ansible package
|
##### SUMMARY
To ensure that `ansible-test`, as a first-class citizen, can always locate the correct (and unmodified) Ansible scripts for its invocation in all circumstances, we need to move them under `lib/ansible` and turn the top-level bin dir entries into symlinks (which setuptools will copy/modify on install and hacking/env-setup will just use).
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
bin/ansible
##### ADDITIONAL INFORMATION
|
https://github.com/ansible/ansible/issues/60000
|
https://github.com/ansible/ansible/pull/60004
|
97d36881e21dd6bd714cae0a5af020d20bedfafb
|
8d1f658ce46e74f537bf1df610ddef4b2bfb035f
| 2019-08-02T16:55:21Z |
python
| 2019-08-02T18:27:02Z |
lib/ansible/cli/scripts/__init__.py
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 60,000 |
Move CLI script impls under ansible package
|
##### SUMMARY
To ensure that `ansible-test`, as a first-class citizen, can always locate the correct (and unmodified) Ansible scripts for its invocation in all circumstances, we need to move them under `lib/ansible` and turn the top-level bin dir entries into symlinks (which setuptools will copy/modify on install and hacking/env-setup will just use).
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
bin/ansible
##### ADDITIONAL INFORMATION
|
https://github.com/ansible/ansible/issues/60000
|
https://github.com/ansible/ansible/pull/60004
|
97d36881e21dd6bd714cae0a5af020d20bedfafb
|
8d1f658ce46e74f537bf1df610ddef4b2bfb035f
| 2019-08-02T16:55:21Z |
python
| 2019-08-02T18:27:02Z |
lib/ansible/cli/scripts/ansible_cli_stub.py
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 60,000 |
Move CLI script impls under ansible package
|
##### SUMMARY
To ensure that `ansible-test`, as a first-class citizen, can always locate the correct (and unmodified) Ansible scripts for its invocation in all circumstances, we need to move them under `lib/ansible` and turn the top-level bin dir entries into symlinks (which setuptools will copy/modify on install and hacking/env-setup will just use).
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
bin/ansible
##### ADDITIONAL INFORMATION
|
https://github.com/ansible/ansible/issues/60000
|
https://github.com/ansible/ansible/pull/60004
|
97d36881e21dd6bd714cae0a5af020d20bedfafb
|
8d1f658ce46e74f537bf1df610ddef4b2bfb035f
| 2019-08-02T16:55:21Z |
python
| 2019-08-02T18:27:02Z |
lib/ansible/cli/scripts/ansible_connection_cli_stub.py
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 60,000 |
Move CLI script impls under ansible package
|
##### SUMMARY
To ensure that `ansible-test`, as a first-class citizen, can always locate the correct (and unmodified) Ansible scripts for its invocation in all circumstances, we need to move them under `lib/ansible` and turn the top-level bin dir entries into symlinks (which setuptools will copy/modify on install and hacking/env-setup will just use).
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
bin/ansible
##### ADDITIONAL INFORMATION
|
https://github.com/ansible/ansible/issues/60000
|
https://github.com/ansible/ansible/pull/60004
|
97d36881e21dd6bd714cae0a5af020d20bedfafb
|
8d1f658ce46e74f537bf1df610ddef4b2bfb035f
| 2019-08-02T16:55:21Z |
python
| 2019-08-02T18:27:02Z |
test/sanity/ignore.txt
|
contrib/inventory/abiquo.py future-import-boilerplate
contrib/inventory/abiquo.py metaclass-boilerplate
contrib/inventory/apache-libcloud.py future-import-boilerplate
contrib/inventory/apache-libcloud.py metaclass-boilerplate
contrib/inventory/apstra_aos.py future-import-boilerplate
contrib/inventory/apstra_aos.py metaclass-boilerplate
contrib/inventory/azure_rm.py future-import-boilerplate
contrib/inventory/azure_rm.py metaclass-boilerplate
contrib/inventory/brook.py future-import-boilerplate
contrib/inventory/brook.py metaclass-boilerplate
contrib/inventory/cloudforms.py future-import-boilerplate
contrib/inventory/cloudforms.py metaclass-boilerplate
contrib/inventory/cloudstack.py future-import-boilerplate
contrib/inventory/cloudstack.py metaclass-boilerplate
contrib/inventory/cobbler.py future-import-boilerplate
contrib/inventory/cobbler.py metaclass-boilerplate
contrib/inventory/collins.py future-import-boilerplate
contrib/inventory/collins.py metaclass-boilerplate
contrib/inventory/consul_io.py future-import-boilerplate
contrib/inventory/consul_io.py metaclass-boilerplate
contrib/inventory/digital_ocean.py future-import-boilerplate
contrib/inventory/digital_ocean.py metaclass-boilerplate
contrib/inventory/docker.py future-import-boilerplate
contrib/inventory/docker.py metaclass-boilerplate
contrib/inventory/ec2.py future-import-boilerplate
contrib/inventory/ec2.py metaclass-boilerplate
contrib/inventory/fleet.py future-import-boilerplate
contrib/inventory/fleet.py metaclass-boilerplate
contrib/inventory/foreman.py future-import-boilerplate
contrib/inventory/foreman.py metaclass-boilerplate
contrib/inventory/freeipa.py future-import-boilerplate
contrib/inventory/freeipa.py metaclass-boilerplate
contrib/inventory/gce.py future-import-boilerplate
contrib/inventory/gce.py metaclass-boilerplate
contrib/inventory/gce.py pylint:blacklisted-name
contrib/inventory/infoblox.py future-import-boilerplate
contrib/inventory/infoblox.py metaclass-boilerplate
contrib/inventory/jail.py future-import-boilerplate
contrib/inventory/jail.py metaclass-boilerplate
contrib/inventory/landscape.py future-import-boilerplate
contrib/inventory/landscape.py metaclass-boilerplate
contrib/inventory/libvirt_lxc.py future-import-boilerplate
contrib/inventory/libvirt_lxc.py metaclass-boilerplate
contrib/inventory/linode.py future-import-boilerplate
contrib/inventory/linode.py metaclass-boilerplate
contrib/inventory/lxc_inventory.py future-import-boilerplate
contrib/inventory/lxc_inventory.py metaclass-boilerplate
contrib/inventory/lxd.py future-import-boilerplate
contrib/inventory/lxd.py metaclass-boilerplate
contrib/inventory/mdt_dynamic_inventory.py future-import-boilerplate
contrib/inventory/mdt_dynamic_inventory.py metaclass-boilerplate
contrib/inventory/nagios_livestatus.py future-import-boilerplate
contrib/inventory/nagios_livestatus.py metaclass-boilerplate
contrib/inventory/nagios_ndo.py future-import-boilerplate
contrib/inventory/nagios_ndo.py metaclass-boilerplate
contrib/inventory/nsot.py future-import-boilerplate
contrib/inventory/nsot.py metaclass-boilerplate
contrib/inventory/openshift.py future-import-boilerplate
contrib/inventory/openshift.py metaclass-boilerplate
contrib/inventory/openstack_inventory.py future-import-boilerplate
contrib/inventory/openstack_inventory.py metaclass-boilerplate
contrib/inventory/openvz.py future-import-boilerplate
contrib/inventory/openvz.py metaclass-boilerplate
contrib/inventory/ovirt.py future-import-boilerplate
contrib/inventory/ovirt.py metaclass-boilerplate
contrib/inventory/ovirt4.py future-import-boilerplate
contrib/inventory/ovirt4.py metaclass-boilerplate
contrib/inventory/packet_net.py future-import-boilerplate
contrib/inventory/packet_net.py metaclass-boilerplate
contrib/inventory/proxmox.py future-import-boilerplate
contrib/inventory/proxmox.py metaclass-boilerplate
contrib/inventory/rackhd.py future-import-boilerplate
contrib/inventory/rackhd.py metaclass-boilerplate
contrib/inventory/rax.py future-import-boilerplate
contrib/inventory/rax.py metaclass-boilerplate
contrib/inventory/rudder.py future-import-boilerplate
contrib/inventory/rudder.py metaclass-boilerplate
contrib/inventory/scaleway.py future-import-boilerplate
contrib/inventory/scaleway.py metaclass-boilerplate
contrib/inventory/serf.py future-import-boilerplate
contrib/inventory/serf.py metaclass-boilerplate
contrib/inventory/softlayer.py future-import-boilerplate
contrib/inventory/softlayer.py metaclass-boilerplate
contrib/inventory/spacewalk.py future-import-boilerplate
contrib/inventory/spacewalk.py metaclass-boilerplate
contrib/inventory/ssh_config.py future-import-boilerplate
contrib/inventory/ssh_config.py metaclass-boilerplate
contrib/inventory/stacki.py future-import-boilerplate
contrib/inventory/stacki.py metaclass-boilerplate
contrib/inventory/vagrant.py future-import-boilerplate
contrib/inventory/vagrant.py metaclass-boilerplate
contrib/inventory/vbox.py future-import-boilerplate
contrib/inventory/vbox.py metaclass-boilerplate
contrib/inventory/vmware.py future-import-boilerplate
contrib/inventory/vmware.py metaclass-boilerplate
contrib/inventory/vmware_inventory.py future-import-boilerplate
contrib/inventory/vmware_inventory.py metaclass-boilerplate
contrib/inventory/zabbix.py future-import-boilerplate
contrib/inventory/zabbix.py metaclass-boilerplate
contrib/inventory/zone.py future-import-boilerplate
contrib/inventory/zone.py metaclass-boilerplate
contrib/vault/azure_vault.py future-import-boilerplate
contrib/vault/azure_vault.py metaclass-boilerplate
contrib/vault/vault-keyring-client.py future-import-boilerplate
contrib/vault/vault-keyring-client.py metaclass-boilerplate
contrib/vault/vault-keyring.py future-import-boilerplate
contrib/vault/vault-keyring.py metaclass-boilerplate
docs/bin/find-plugin-refs.py future-import-boilerplate
docs/bin/find-plugin-refs.py metaclass-boilerplate
docs/docsite/_extensions/pygments_lexer.py future-import-boilerplate
docs/docsite/_extensions/pygments_lexer.py metaclass-boilerplate
docs/docsite/_themes/sphinx_rtd_theme/__init__.py future-import-boilerplate
docs/docsite/_themes/sphinx_rtd_theme/__init__.py metaclass-boilerplate
docs/docsite/rst/conf.py future-import-boilerplate
docs/docsite/rst/conf.py metaclass-boilerplate
docs/docsite/rst/dev_guide/testing/sanity/no-smart-quotes.rst no-smart-quotes
examples/scripts/ConfigureRemotingForAnsible.ps1 pslint:PSCustomUseLiteralPath
examples/scripts/upgrade_to_ps3.ps1 pslint:PSCustomUseLiteralPath
examples/scripts/upgrade_to_ps3.ps1 pslint:PSUseApprovedVerbs
examples/scripts/uptime.py future-import-boilerplate
examples/scripts/uptime.py metaclass-boilerplate
hacking/aws_config/build_iam_policy_framework.py future-import-boilerplate
hacking/aws_config/build_iam_policy_framework.py metaclass-boilerplate
hacking/build-ansible.py shebang # only run by release engineers, Python 3.6+ required
hacking/build_library/build_ansible/announce.py compile-2.6!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/announce.py compile-2.7!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/announce.py compile-3.5!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/dump_config.py compile-2.6!skip # docs build only, 2.7+ required
hacking/build_library/build_ansible/command_plugins/dump_keywords.py compile-2.6!skip # docs build only, 2.7+ required
hacking/build_library/build_ansible/command_plugins/generate_man.py compile-2.6!skip # docs build only, 2.7+ required
hacking/build_library/build_ansible/command_plugins/plugin_formatter.py compile-2.6!skip # docs build only, 2.7+ required
hacking/build_library/build_ansible/command_plugins/porting_guide.py compile-2.6!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/porting_guide.py compile-2.7!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/porting_guide.py compile-3.5!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/release_announcement.py compile-2.6!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/release_announcement.py compile-2.7!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/release_announcement.py compile-3.5!skip # release process only, 3.6+ required
hacking/create_deprecated_issues.py future-import-boilerplate
hacking/create_deprecated_issues.py metaclass-boilerplate
hacking/fix_test_syntax.py future-import-boilerplate
hacking/fix_test_syntax.py metaclass-boilerplate
hacking/get_library.py future-import-boilerplate
hacking/get_library.py metaclass-boilerplate
hacking/report.py future-import-boilerplate
hacking/report.py metaclass-boilerplate
hacking/return_skeleton_generator.py future-import-boilerplate
hacking/return_skeleton_generator.py metaclass-boilerplate
hacking/test-module.py future-import-boilerplate
hacking/test-module.py metaclass-boilerplate
hacking/tests/gen_distribution_version_testcase.py future-import-boilerplate
hacking/tests/gen_distribution_version_testcase.py metaclass-boilerplate
lib/ansible/cli/console.py pylint:blacklisted-name
lib/ansible/compat/selectors/_selectors2.py future-import-boilerplate # ignore bundled
lib/ansible/compat/selectors/_selectors2.py metaclass-boilerplate # ignore bundled
lib/ansible/compat/selectors/_selectors2.py pylint:blacklisted-name
lib/ansible/config/base.yml no-unwanted-files
lib/ansible/config/module_defaults.yml no-unwanted-files
lib/ansible/executor/playbook_executor.py pylint:blacklisted-name
lib/ansible/executor/powershell/async_watchdog.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/powershell/async_wrapper.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/powershell/exec_wrapper.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/task_queue_manager.py pylint:blacklisted-name
lib/ansible/module_utils/_text.py future-import-boilerplate
lib/ansible/module_utils/_text.py metaclass-boilerplate
lib/ansible/module_utils/alicloud_ecs.py future-import-boilerplate
lib/ansible/module_utils/alicloud_ecs.py metaclass-boilerplate
lib/ansible/module_utils/ansible_tower.py future-import-boilerplate
lib/ansible/module_utils/ansible_tower.py metaclass-boilerplate
lib/ansible/module_utils/api.py future-import-boilerplate
lib/ansible/module_utils/api.py metaclass-boilerplate
lib/ansible/module_utils/aws/batch.py future-import-boilerplate
lib/ansible/module_utils/aws/batch.py metaclass-boilerplate
lib/ansible/module_utils/aws/cloudfront_facts.py future-import-boilerplate
lib/ansible/module_utils/aws/cloudfront_facts.py metaclass-boilerplate
lib/ansible/module_utils/aws/core.py future-import-boilerplate
lib/ansible/module_utils/aws/core.py metaclass-boilerplate
lib/ansible/module_utils/aws/direct_connect.py future-import-boilerplate
lib/ansible/module_utils/aws/direct_connect.py metaclass-boilerplate
lib/ansible/module_utils/aws/elb_utils.py future-import-boilerplate
lib/ansible/module_utils/aws/elb_utils.py metaclass-boilerplate
lib/ansible/module_utils/aws/elbv2.py future-import-boilerplate
lib/ansible/module_utils/aws/elbv2.py metaclass-boilerplate
lib/ansible/module_utils/aws/iam.py future-import-boilerplate
lib/ansible/module_utils/aws/iam.py metaclass-boilerplate
lib/ansible/module_utils/aws/rds.py future-import-boilerplate
lib/ansible/module_utils/aws/rds.py metaclass-boilerplate
lib/ansible/module_utils/aws/s3.py future-import-boilerplate
lib/ansible/module_utils/aws/s3.py metaclass-boilerplate
lib/ansible/module_utils/aws/urls.py future-import-boilerplate
lib/ansible/module_utils/aws/urls.py metaclass-boilerplate
lib/ansible/module_utils/aws/waf.py future-import-boilerplate
lib/ansible/module_utils/aws/waf.py metaclass-boilerplate
lib/ansible/module_utils/aws/waiters.py future-import-boilerplate
lib/ansible/module_utils/aws/waiters.py metaclass-boilerplate
lib/ansible/module_utils/azure_rm_common.py future-import-boilerplate
lib/ansible/module_utils/azure_rm_common.py metaclass-boilerplate
lib/ansible/module_utils/azure_rm_common_ext.py future-import-boilerplate
lib/ansible/module_utils/azure_rm_common_ext.py metaclass-boilerplate
lib/ansible/module_utils/azure_rm_common_rest.py future-import-boilerplate
lib/ansible/module_utils/azure_rm_common_rest.py metaclass-boilerplate
lib/ansible/module_utils/basic.py metaclass-boilerplate
lib/ansible/module_utils/cloud.py future-import-boilerplate
lib/ansible/module_utils/cloud.py metaclass-boilerplate
lib/ansible/module_utils/common/network.py future-import-boilerplate
lib/ansible/module_utils/common/network.py metaclass-boilerplate
lib/ansible/module_utils/common/removed.py future-import-boilerplate
lib/ansible/module_utils/common/removed.py metaclass-boilerplate
lib/ansible/module_utils/compat/ipaddress.py future-import-boilerplate
lib/ansible/module_utils/compat/ipaddress.py metaclass-boilerplate
lib/ansible/module_utils/compat/ipaddress.py no-assert
lib/ansible/module_utils/compat/ipaddress.py no-unicode-literals
lib/ansible/module_utils/connection.py future-import-boilerplate
lib/ansible/module_utils/connection.py metaclass-boilerplate
lib/ansible/module_utils/crypto.py future-import-boilerplate
lib/ansible/module_utils/crypto.py metaclass-boilerplate
lib/ansible/module_utils/database.py future-import-boilerplate
lib/ansible/module_utils/database.py metaclass-boilerplate
lib/ansible/module_utils/digital_ocean.py future-import-boilerplate
lib/ansible/module_utils/digital_ocean.py metaclass-boilerplate
lib/ansible/module_utils/dimensiondata.py future-import-boilerplate
lib/ansible/module_utils/dimensiondata.py metaclass-boilerplate
lib/ansible/module_utils/distro/__init__.py empty-init # breaks namespacing, bundled, do not override
lib/ansible/module_utils/distro/_distro.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/distro/_distro.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/distro/_distro.py no-assert
lib/ansible/module_utils/distro/_distro.py pep8!skip # bundled code we don't want to modify
lib/ansible/module_utils/docker/common.py future-import-boilerplate
lib/ansible/module_utils/docker/common.py metaclass-boilerplate
lib/ansible/module_utils/docker/swarm.py future-import-boilerplate
lib/ansible/module_utils/docker/swarm.py metaclass-boilerplate
lib/ansible/module_utils/ec2.py future-import-boilerplate
lib/ansible/module_utils/ec2.py metaclass-boilerplate
lib/ansible/module_utils/exoscale.py future-import-boilerplate
lib/ansible/module_utils/exoscale.py metaclass-boilerplate
lib/ansible/module_utils/f5_utils.py future-import-boilerplate
lib/ansible/module_utils/f5_utils.py metaclass-boilerplate
lib/ansible/module_utils/facts/__init__.py empty-init # breaks namespacing, deprecate and eventually remove
lib/ansible/module_utils/facts/network/linux.py pylint:blacklisted-name
lib/ansible/module_utils/facts/sysctl.py future-import-boilerplate
lib/ansible/module_utils/facts/sysctl.py metaclass-boilerplate
lib/ansible/module_utils/facts/utils.py future-import-boilerplate
lib/ansible/module_utils/facts/utils.py metaclass-boilerplate
lib/ansible/module_utils/firewalld.py future-import-boilerplate
lib/ansible/module_utils/firewalld.py metaclass-boilerplate
lib/ansible/module_utils/gcdns.py future-import-boilerplate
lib/ansible/module_utils/gcdns.py metaclass-boilerplate
lib/ansible/module_utils/gce.py future-import-boilerplate
lib/ansible/module_utils/gce.py metaclass-boilerplate
lib/ansible/module_utils/gcp.py future-import-boilerplate
lib/ansible/module_utils/gcp.py metaclass-boilerplate
lib/ansible/module_utils/gcp_utils.py future-import-boilerplate
lib/ansible/module_utils/gcp_utils.py metaclass-boilerplate
lib/ansible/module_utils/gitlab.py future-import-boilerplate
lib/ansible/module_utils/gitlab.py metaclass-boilerplate
lib/ansible/module_utils/hwc_utils.py future-import-boilerplate
lib/ansible/module_utils/hwc_utils.py metaclass-boilerplate
lib/ansible/module_utils/infinibox.py future-import-boilerplate
lib/ansible/module_utils/infinibox.py metaclass-boilerplate
lib/ansible/module_utils/ipa.py future-import-boilerplate
lib/ansible/module_utils/ipa.py metaclass-boilerplate
lib/ansible/module_utils/ismount.py future-import-boilerplate
lib/ansible/module_utils/ismount.py metaclass-boilerplate
lib/ansible/module_utils/json_utils.py future-import-boilerplate
lib/ansible/module_utils/json_utils.py metaclass-boilerplate
lib/ansible/module_utils/k8s/common.py metaclass-boilerplate
lib/ansible/module_utils/k8s/raw.py metaclass-boilerplate
lib/ansible/module_utils/k8s/scale.py metaclass-boilerplate
lib/ansible/module_utils/known_hosts.py future-import-boilerplate
lib/ansible/module_utils/known_hosts.py metaclass-boilerplate
lib/ansible/module_utils/kubevirt.py future-import-boilerplate
lib/ansible/module_utils/kubevirt.py metaclass-boilerplate
lib/ansible/module_utils/linode.py future-import-boilerplate
lib/ansible/module_utils/linode.py metaclass-boilerplate
lib/ansible/module_utils/lxd.py future-import-boilerplate
lib/ansible/module_utils/lxd.py metaclass-boilerplate
lib/ansible/module_utils/manageiq.py future-import-boilerplate
lib/ansible/module_utils/manageiq.py metaclass-boilerplate
lib/ansible/module_utils/memset.py future-import-boilerplate
lib/ansible/module_utils/memset.py metaclass-boilerplate
lib/ansible/module_utils/mysql.py future-import-boilerplate
lib/ansible/module_utils/mysql.py metaclass-boilerplate
lib/ansible/module_utils/net_tools/netbox/netbox_utils.py future-import-boilerplate
lib/ansible/module_utils/net_tools/nios/api.py future-import-boilerplate
lib/ansible/module_utils/net_tools/nios/api.py metaclass-boilerplate
lib/ansible/module_utils/netapp.py future-import-boilerplate
lib/ansible/module_utils/netapp.py metaclass-boilerplate
lib/ansible/module_utils/netapp_elementsw_module.py future-import-boilerplate
lib/ansible/module_utils/netapp_elementsw_module.py metaclass-boilerplate
lib/ansible/module_utils/netapp_module.py future-import-boilerplate
lib/ansible/module_utils/netapp_module.py metaclass-boilerplate
lib/ansible/module_utils/network/a10/a10.py future-import-boilerplate
lib/ansible/module_utils/network/a10/a10.py metaclass-boilerplate
lib/ansible/module_utils/network/aci/aci.py future-import-boilerplate
lib/ansible/module_utils/network/aci/aci.py metaclass-boilerplate
lib/ansible/module_utils/network/aci/mso.py future-import-boilerplate
lib/ansible/module_utils/network/aci/mso.py metaclass-boilerplate
lib/ansible/module_utils/network/aireos/aireos.py future-import-boilerplate
lib/ansible/module_utils/network/aireos/aireos.py metaclass-boilerplate
lib/ansible/module_utils/network/aos/aos.py future-import-boilerplate
lib/ansible/module_utils/network/aos/aos.py metaclass-boilerplate
lib/ansible/module_utils/network/aruba/aruba.py future-import-boilerplate
lib/ansible/module_utils/network/aruba/aruba.py metaclass-boilerplate
lib/ansible/module_utils/network/asa/asa.py future-import-boilerplate
lib/ansible/module_utils/network/asa/asa.py metaclass-boilerplate
lib/ansible/module_utils/network/avi/ansible_utils.py future-import-boilerplate
lib/ansible/module_utils/network/avi/ansible_utils.py metaclass-boilerplate
lib/ansible/module_utils/network/avi/avi.py future-import-boilerplate
lib/ansible/module_utils/network/avi/avi.py metaclass-boilerplate
lib/ansible/module_utils/network/avi/avi_api.py future-import-boilerplate
lib/ansible/module_utils/network/avi/avi_api.py metaclass-boilerplate
lib/ansible/module_utils/network/bigswitch/bigswitch.py future-import-boilerplate
lib/ansible/module_utils/network/bigswitch/bigswitch.py metaclass-boilerplate
lib/ansible/module_utils/network/checkpoint/checkpoint.py metaclass-boilerplate
lib/ansible/module_utils/network/cloudengine/ce.py future-import-boilerplate
lib/ansible/module_utils/network/cloudengine/ce.py metaclass-boilerplate
lib/ansible/module_utils/network/cnos/cnos.py future-import-boilerplate
lib/ansible/module_utils/network/cnos/cnos.py metaclass-boilerplate
lib/ansible/module_utils/network/cnos/cnos_devicerules.py future-import-boilerplate
lib/ansible/module_utils/network/cnos/cnos_devicerules.py metaclass-boilerplate
lib/ansible/module_utils/network/cnos/cnos_errorcodes.py future-import-boilerplate
lib/ansible/module_utils/network/cnos/cnos_errorcodes.py metaclass-boilerplate
lib/ansible/module_utils/network/common/cfg/base.py future-import-boilerplate
lib/ansible/module_utils/network/common/cfg/base.py metaclass-boilerplate
lib/ansible/module_utils/network/common/config.py future-import-boilerplate
lib/ansible/module_utils/network/common/config.py metaclass-boilerplate
lib/ansible/module_utils/network/common/facts/facts.py future-import-boilerplate
lib/ansible/module_utils/network/common/facts/facts.py metaclass-boilerplate
lib/ansible/module_utils/network/common/netconf.py future-import-boilerplate
lib/ansible/module_utils/network/common/netconf.py metaclass-boilerplate
lib/ansible/module_utils/network/common/network.py future-import-boilerplate
lib/ansible/module_utils/network/common/network.py metaclass-boilerplate
lib/ansible/module_utils/network/common/parsing.py future-import-boilerplate
lib/ansible/module_utils/network/common/parsing.py metaclass-boilerplate
lib/ansible/module_utils/network/common/utils.py future-import-boilerplate
lib/ansible/module_utils/network/common/utils.py metaclass-boilerplate
lib/ansible/module_utils/network/dellos10/dellos10.py future-import-boilerplate
lib/ansible/module_utils/network/dellos10/dellos10.py metaclass-boilerplate
lib/ansible/module_utils/network/dellos6/dellos6.py future-import-boilerplate
lib/ansible/module_utils/network/dellos6/dellos6.py metaclass-boilerplate
lib/ansible/module_utils/network/dellos9/dellos9.py future-import-boilerplate
lib/ansible/module_utils/network/dellos9/dellos9.py metaclass-boilerplate
lib/ansible/module_utils/network/edgeos/edgeos.py future-import-boilerplate
lib/ansible/module_utils/network/edgeos/edgeos.py metaclass-boilerplate
lib/ansible/module_utils/network/edgeswitch/edgeswitch.py future-import-boilerplate
lib/ansible/module_utils/network/edgeswitch/edgeswitch.py metaclass-boilerplate
lib/ansible/module_utils/network/edgeswitch/edgeswitch_interface.py future-import-boilerplate
lib/ansible/module_utils/network/edgeswitch/edgeswitch_interface.py metaclass-boilerplate
lib/ansible/module_utils/network/edgeswitch/edgeswitch_interface.py pylint:duplicate-string-formatting-argument
lib/ansible/module_utils/network/enos/enos.py future-import-boilerplate
lib/ansible/module_utils/network/enos/enos.py metaclass-boilerplate
lib/ansible/module_utils/network/eos/eos.py future-import-boilerplate
lib/ansible/module_utils/network/eos/eos.py metaclass-boilerplate
lib/ansible/module_utils/network/eos/providers/cli/config/bgp/address_family.py future-import-boilerplate
lib/ansible/module_utils/network/eos/providers/cli/config/bgp/address_family.py metaclass-boilerplate
lib/ansible/module_utils/network/eos/providers/cli/config/bgp/neighbors.py future-import-boilerplate
lib/ansible/module_utils/network/eos/providers/cli/config/bgp/neighbors.py metaclass-boilerplate
lib/ansible/module_utils/network/eos/providers/cli/config/bgp/process.py future-import-boilerplate
lib/ansible/module_utils/network/eos/providers/cli/config/bgp/process.py metaclass-boilerplate
lib/ansible/module_utils/network/eos/providers/module.py future-import-boilerplate
lib/ansible/module_utils/network/eos/providers/module.py metaclass-boilerplate
lib/ansible/module_utils/network/eos/providers/providers.py future-import-boilerplate
lib/ansible/module_utils/network/eos/providers/providers.py metaclass-boilerplate
lib/ansible/module_utils/network/exos/exos.py future-import-boilerplate
lib/ansible/module_utils/network/exos/exos.py metaclass-boilerplate
lib/ansible/module_utils/network/fortimanager/common.py future-import-boilerplate
lib/ansible/module_utils/network/fortimanager/common.py metaclass-boilerplate
lib/ansible/module_utils/network/fortimanager/fortimanager.py future-import-boilerplate
lib/ansible/module_utils/network/fortimanager/fortimanager.py metaclass-boilerplate
lib/ansible/module_utils/network/fortios/fortios.py future-import-boilerplate
lib/ansible/module_utils/network/fortios/fortios.py metaclass-boilerplate
lib/ansible/module_utils/network/frr/frr.py future-import-boilerplate
lib/ansible/module_utils/network/frr/frr.py metaclass-boilerplate
lib/ansible/module_utils/network/frr/providers/cli/config/base.py future-import-boilerplate
lib/ansible/module_utils/network/frr/providers/cli/config/base.py metaclass-boilerplate
lib/ansible/module_utils/network/frr/providers/cli/config/bgp/address_family.py future-import-boilerplate
lib/ansible/module_utils/network/frr/providers/cli/config/bgp/address_family.py metaclass-boilerplate
lib/ansible/module_utils/network/frr/providers/cli/config/bgp/neighbors.py future-import-boilerplate
lib/ansible/module_utils/network/frr/providers/cli/config/bgp/neighbors.py metaclass-boilerplate
lib/ansible/module_utils/network/frr/providers/cli/config/bgp/process.py future-import-boilerplate
lib/ansible/module_utils/network/frr/providers/cli/config/bgp/process.py metaclass-boilerplate
lib/ansible/module_utils/network/frr/providers/module.py future-import-boilerplate
lib/ansible/module_utils/network/frr/providers/module.py metaclass-boilerplate
lib/ansible/module_utils/network/frr/providers/providers.py future-import-boilerplate
lib/ansible/module_utils/network/frr/providers/providers.py metaclass-boilerplate
lib/ansible/module_utils/network/ftd/common.py future-import-boilerplate
lib/ansible/module_utils/network/ftd/common.py metaclass-boilerplate
lib/ansible/module_utils/network/ftd/configuration.py future-import-boilerplate
lib/ansible/module_utils/network/ftd/configuration.py metaclass-boilerplate
lib/ansible/module_utils/network/ftd/device.py future-import-boilerplate
lib/ansible/module_utils/network/ftd/device.py metaclass-boilerplate
lib/ansible/module_utils/network/ftd/fdm_swagger_client.py future-import-boilerplate
lib/ansible/module_utils/network/ftd/fdm_swagger_client.py metaclass-boilerplate
lib/ansible/module_utils/network/ftd/operation.py future-import-boilerplate
lib/ansible/module_utils/network/ftd/operation.py metaclass-boilerplate
lib/ansible/module_utils/network/ios/ios.py future-import-boilerplate
lib/ansible/module_utils/network/ios/ios.py metaclass-boilerplate
lib/ansible/module_utils/network/ios/providers/cli/config/base.py future-import-boilerplate
lib/ansible/module_utils/network/ios/providers/cli/config/base.py metaclass-boilerplate
lib/ansible/module_utils/network/ios/providers/cli/config/bgp/address_family.py future-import-boilerplate
lib/ansible/module_utils/network/ios/providers/cli/config/bgp/address_family.py metaclass-boilerplate
lib/ansible/module_utils/network/ios/providers/cli/config/bgp/neighbors.py future-import-boilerplate
lib/ansible/module_utils/network/ios/providers/cli/config/bgp/neighbors.py metaclass-boilerplate
lib/ansible/module_utils/network/ios/providers/cli/config/bgp/process.py future-import-boilerplate
lib/ansible/module_utils/network/ios/providers/cli/config/bgp/process.py metaclass-boilerplate
lib/ansible/module_utils/network/ios/providers/module.py future-import-boilerplate
lib/ansible/module_utils/network/ios/providers/module.py metaclass-boilerplate
lib/ansible/module_utils/network/ios/providers/providers.py future-import-boilerplate
lib/ansible/module_utils/network/ios/providers/providers.py metaclass-boilerplate
lib/ansible/module_utils/network/iosxr/iosxr.py future-import-boilerplate
lib/ansible/module_utils/network/iosxr/iosxr.py metaclass-boilerplate
lib/ansible/module_utils/network/iosxr/providers/cli/config/bgp/address_family.py future-import-boilerplate
lib/ansible/module_utils/network/iosxr/providers/cli/config/bgp/address_family.py metaclass-boilerplate
lib/ansible/module_utils/network/iosxr/providers/cli/config/bgp/neighbors.py future-import-boilerplate
lib/ansible/module_utils/network/iosxr/providers/cli/config/bgp/neighbors.py metaclass-boilerplate
lib/ansible/module_utils/network/iosxr/providers/cli/config/bgp/process.py future-import-boilerplate
lib/ansible/module_utils/network/iosxr/providers/cli/config/bgp/process.py metaclass-boilerplate
lib/ansible/module_utils/network/iosxr/providers/module.py future-import-boilerplate
lib/ansible/module_utils/network/iosxr/providers/module.py metaclass-boilerplate
lib/ansible/module_utils/network/iosxr/providers/providers.py future-import-boilerplate
lib/ansible/module_utils/network/iosxr/providers/providers.py metaclass-boilerplate
lib/ansible/module_utils/network/junos/argspec/facts/facts.py future-import-boilerplate
lib/ansible/module_utils/network/junos/argspec/facts/facts.py metaclass-boilerplate
lib/ansible/module_utils/network/junos/facts/facts.py future-import-boilerplate
lib/ansible/module_utils/network/junos/facts/facts.py metaclass-boilerplate
lib/ansible/module_utils/network/junos/facts/legacy/base.py future-import-boilerplate
lib/ansible/module_utils/network/junos/facts/legacy/base.py metaclass-boilerplate
lib/ansible/module_utils/network/junos/junos.py future-import-boilerplate
lib/ansible/module_utils/network/junos/junos.py metaclass-boilerplate
lib/ansible/module_utils/network/junos/utils/utils.py future-import-boilerplate
lib/ansible/module_utils/network/junos/utils/utils.py metaclass-boilerplate
lib/ansible/module_utils/network/meraki/meraki.py future-import-boilerplate
lib/ansible/module_utils/network/meraki/meraki.py metaclass-boilerplate
lib/ansible/module_utils/network/netconf/netconf.py future-import-boilerplate
lib/ansible/module_utils/network/netconf/netconf.py metaclass-boilerplate
lib/ansible/module_utils/network/netscaler/netscaler.py future-import-boilerplate
lib/ansible/module_utils/network/netscaler/netscaler.py metaclass-boilerplate
lib/ansible/module_utils/network/nos/nos.py future-import-boilerplate
lib/ansible/module_utils/network/nos/nos.py metaclass-boilerplate
lib/ansible/module_utils/network/nso/nso.py future-import-boilerplate
lib/ansible/module_utils/network/nso/nso.py metaclass-boilerplate
lib/ansible/module_utils/network/nxos/argspec/facts/facts.py future-import-boilerplate
lib/ansible/module_utils/network/nxos/argspec/facts/facts.py metaclass-boilerplate
lib/ansible/module_utils/network/nxos/facts/facts.py future-import-boilerplate
lib/ansible/module_utils/network/nxos/facts/facts.py metaclass-boilerplate
lib/ansible/module_utils/network/nxos/facts/legacy/base.py future-import-boilerplate
lib/ansible/module_utils/network/nxos/facts/legacy/base.py metaclass-boilerplate
lib/ansible/module_utils/network/nxos/nxos.py future-import-boilerplate
lib/ansible/module_utils/network/nxos/nxos.py metaclass-boilerplate
lib/ansible/module_utils/network/nxos/utils/utils.py future-import-boilerplate
lib/ansible/module_utils/network/nxos/utils/utils.py metaclass-boilerplate
lib/ansible/module_utils/network/onyx/onyx.py future-import-boilerplate
lib/ansible/module_utils/network/onyx/onyx.py metaclass-boilerplate
lib/ansible/module_utils/network/ordnance/ordnance.py future-import-boilerplate
lib/ansible/module_utils/network/ordnance/ordnance.py metaclass-boilerplate
lib/ansible/module_utils/network/restconf/restconf.py future-import-boilerplate
lib/ansible/module_utils/network/restconf/restconf.py metaclass-boilerplate
lib/ansible/module_utils/network/routeros/routeros.py future-import-boilerplate
lib/ansible/module_utils/network/routeros/routeros.py metaclass-boilerplate
lib/ansible/module_utils/network/skydive/api.py future-import-boilerplate
lib/ansible/module_utils/network/skydive/api.py metaclass-boilerplate
lib/ansible/module_utils/network/slxos/slxos.py future-import-boilerplate
lib/ansible/module_utils/network/slxos/slxos.py metaclass-boilerplate
lib/ansible/module_utils/network/sros/sros.py future-import-boilerplate
lib/ansible/module_utils/network/sros/sros.py metaclass-boilerplate
lib/ansible/module_utils/network/voss/voss.py future-import-boilerplate
lib/ansible/module_utils/network/voss/voss.py metaclass-boilerplate
lib/ansible/module_utils/network/vyos/vyos.py future-import-boilerplate
lib/ansible/module_utils/network/vyos/vyos.py metaclass-boilerplate
lib/ansible/module_utils/oneandone.py future-import-boilerplate
lib/ansible/module_utils/oneandone.py metaclass-boilerplate
lib/ansible/module_utils/oneview.py metaclass-boilerplate
lib/ansible/module_utils/opennebula.py future-import-boilerplate
lib/ansible/module_utils/opennebula.py metaclass-boilerplate
lib/ansible/module_utils/openstack.py future-import-boilerplate
lib/ansible/module_utils/openstack.py metaclass-boilerplate
lib/ansible/module_utils/oracle/oci_utils.py future-import-boilerplate
lib/ansible/module_utils/oracle/oci_utils.py metaclass-boilerplate
lib/ansible/module_utils/ovirt.py future-import-boilerplate
lib/ansible/module_utils/ovirt.py metaclass-boilerplate
lib/ansible/module_utils/parsing/convert_bool.py future-import-boilerplate
lib/ansible/module_utils/parsing/convert_bool.py metaclass-boilerplate
lib/ansible/module_utils/postgres.py future-import-boilerplate
lib/ansible/module_utils/postgres.py metaclass-boilerplate
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.ArgvParser.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.CommandUtil.psm1 pslint:PSProvideCommentHelp # need to agree on best format for comment location
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.CommandUtil.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.FileUtil.psm1 pslint:PSCustomUseLiteralPath
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.FileUtil.psm1 pslint:PSProvideCommentHelp
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1 pslint:PSCustomUseLiteralPath
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.LinkUtil.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/pure.py future-import-boilerplate
lib/ansible/module_utils/pure.py metaclass-boilerplate
lib/ansible/module_utils/pycompat24.py future-import-boilerplate
lib/ansible/module_utils/pycompat24.py metaclass-boilerplate
lib/ansible/module_utils/pycompat24.py no-get-exception
lib/ansible/module_utils/rax.py future-import-boilerplate
lib/ansible/module_utils/rax.py metaclass-boilerplate
lib/ansible/module_utils/redhat.py future-import-boilerplate
lib/ansible/module_utils/redhat.py metaclass-boilerplate
lib/ansible/module_utils/remote_management/dellemc/dellemc_idrac.py future-import-boilerplate
lib/ansible/module_utils/remote_management/intersight.py future-import-boilerplate
lib/ansible/module_utils/remote_management/intersight.py metaclass-boilerplate
lib/ansible/module_utils/remote_management/lxca/common.py future-import-boilerplate
lib/ansible/module_utils/remote_management/lxca/common.py metaclass-boilerplate
lib/ansible/module_utils/remote_management/ucs.py future-import-boilerplate
lib/ansible/module_utils/remote_management/ucs.py metaclass-boilerplate
lib/ansible/module_utils/scaleway.py future-import-boilerplate
lib/ansible/module_utils/scaleway.py metaclass-boilerplate
lib/ansible/module_utils/service.py future-import-boilerplate
lib/ansible/module_utils/service.py metaclass-boilerplate
lib/ansible/module_utils/six/__init__.py empty-init # breaks namespacing, bundled, do not override
lib/ansible/module_utils/six/__init__.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/six/__init__.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/six/__init__.py no-basestring
lib/ansible/module_utils/six/__init__.py no-dict-iteritems
lib/ansible/module_utils/six/__init__.py no-dict-iterkeys
lib/ansible/module_utils/six/__init__.py no-dict-itervalues
lib/ansible/module_utils/six/__init__.py replace-urlopen
lib/ansible/module_utils/splitter.py future-import-boilerplate
lib/ansible/module_utils/splitter.py metaclass-boilerplate
lib/ansible/module_utils/storage/emc/emc_vnx.py future-import-boilerplate
lib/ansible/module_utils/storage/emc/emc_vnx.py metaclass-boilerplate
lib/ansible/module_utils/storage/hpe3par/hpe3par.py future-import-boilerplate
lib/ansible/module_utils/storage/hpe3par/hpe3par.py metaclass-boilerplate
lib/ansible/module_utils/univention_umc.py future-import-boilerplate
lib/ansible/module_utils/univention_umc.py metaclass-boilerplate
lib/ansible/module_utils/urls.py future-import-boilerplate
lib/ansible/module_utils/urls.py metaclass-boilerplate
lib/ansible/module_utils/urls.py pylint:blacklisted-name
lib/ansible/module_utils/urls.py replace-urlopen
lib/ansible/module_utils/vca.py future-import-boilerplate
lib/ansible/module_utils/vca.py metaclass-boilerplate
lib/ansible/module_utils/vexata.py future-import-boilerplate
lib/ansible/module_utils/vexata.py metaclass-boilerplate
lib/ansible/module_utils/yumdnf.py future-import-boilerplate
lib/ansible/module_utils/yumdnf.py metaclass-boilerplate
lib/ansible/modules/cloud/alicloud/ali_instance.py validate-modules:E337
lib/ansible/modules/cloud/alicloud/ali_instance.py validate-modules:E338
lib/ansible/modules/cloud/alicloud/ali_instance_info.py validate-modules:E337
lib/ansible/modules/cloud/alicloud/ali_instance_info.py validate-modules:E338
lib/ansible/modules/cloud/amazon/aws_acm_info.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/aws_acm_info.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/aws_acm_info.py validate-modules:E337
lib/ansible/modules/cloud/amazon/aws_acm_info.py validate-modules:E338
lib/ansible/modules/cloud/amazon/aws_api_gateway.py validate-modules:E322
lib/ansible/modules/cloud/amazon/aws_api_gateway.py validate-modules:E337
lib/ansible/modules/cloud/amazon/aws_application_scaling_policy.py validate-modules:E322
lib/ansible/modules/cloud/amazon/aws_application_scaling_policy.py validate-modules:E326
lib/ansible/modules/cloud/amazon/aws_application_scaling_policy.py validate-modules:E337
lib/ansible/modules/cloud/amazon/aws_application_scaling_policy.py validate-modules:E340
lib/ansible/modules/cloud/amazon/aws_az_info.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/aws_az_info.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/aws_az_info.py validate-modules:E337
lib/ansible/modules/cloud/amazon/aws_batch_compute_environment.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/aws_batch_compute_environment.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/aws_batch_compute_environment.py validate-modules:E337
lib/ansible/modules/cloud/amazon/aws_batch_compute_environment.py validate-modules:E338
lib/ansible/modules/cloud/amazon/aws_batch_job_definition.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/aws_batch_job_definition.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/aws_batch_job_definition.py validate-modules:E337
lib/ansible/modules/cloud/amazon/aws_batch_job_definition.py validate-modules:E338
lib/ansible/modules/cloud/amazon/aws_batch_job_queue.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/aws_batch_job_queue.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/aws_batch_job_queue.py validate-modules:E337
lib/ansible/modules/cloud/amazon/aws_batch_job_queue.py validate-modules:E338
lib/ansible/modules/cloud/amazon/aws_caller_info.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/aws_caller_info.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/aws_codebuild.py validate-modules:E337
lib/ansible/modules/cloud/amazon/aws_codebuild.py validate-modules:E338
lib/ansible/modules/cloud/amazon/aws_codecommit.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/aws_codecommit.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/aws_codecommit.py validate-modules:E338
lib/ansible/modules/cloud/amazon/aws_codepipeline.py validate-modules:E337
lib/ansible/modules/cloud/amazon/aws_codepipeline.py validate-modules:E338
lib/ansible/modules/cloud/amazon/aws_config_aggregation_authorization.py validate-modules:E337
lib/ansible/modules/cloud/amazon/aws_config_aggregator.py validate-modules:E337
lib/ansible/modules/cloud/amazon/aws_config_delivery_channel.py validate-modules:E337
lib/ansible/modules/cloud/amazon/aws_config_recorder.py validate-modules:E337
lib/ansible/modules/cloud/amazon/aws_config_rule.py validate-modules:E337
lib/ansible/modules/cloud/amazon/aws_direct_connect_connection.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/aws_direct_connect_connection.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/aws_direct_connect_connection.py validate-modules:E338
lib/ansible/modules/cloud/amazon/aws_direct_connect_gateway.py validate-modules:E322
lib/ansible/modules/cloud/amazon/aws_direct_connect_gateway.py validate-modules:E324
lib/ansible/modules/cloud/amazon/aws_direct_connect_gateway.py validate-modules:E337
lib/ansible/modules/cloud/amazon/aws_direct_connect_gateway.py validate-modules:E338
lib/ansible/modules/cloud/amazon/aws_direct_connect_link_aggregation_group.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/aws_direct_connect_link_aggregation_group.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/aws_direct_connect_link_aggregation_group.py validate-modules:E337
lib/ansible/modules/cloud/amazon/aws_direct_connect_link_aggregation_group.py validate-modules:E338
lib/ansible/modules/cloud/amazon/aws_direct_connect_virtual_interface.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/aws_direct_connect_virtual_interface.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/aws_direct_connect_virtual_interface.py validate-modules:E337
lib/ansible/modules/cloud/amazon/aws_direct_connect_virtual_interface.py validate-modules:E338
lib/ansible/modules/cloud/amazon/aws_eks_cluster.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/aws_eks_cluster.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/aws_eks_cluster.py validate-modules:E337
lib/ansible/modules/cloud/amazon/aws_eks_cluster.py validate-modules:E338
lib/ansible/modules/cloud/amazon/aws_elasticbeanstalk_app.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/aws_elasticbeanstalk_app.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/aws_elasticbeanstalk_app.py validate-modules:E337
lib/ansible/modules/cloud/amazon/aws_elasticbeanstalk_app.py validate-modules:E338
lib/ansible/modules/cloud/amazon/aws_glue_connection.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/aws_glue_connection.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/aws_glue_connection.py validate-modules:E337
lib/ansible/modules/cloud/amazon/aws_glue_job.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/aws_glue_job.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/aws_glue_job.py validate-modules:E337
lib/ansible/modules/cloud/amazon/aws_inspector_target.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/aws_inspector_target.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/aws_inspector_target.py validate-modules:E337
lib/ansible/modules/cloud/amazon/aws_inspector_target.py validate-modules:E338
lib/ansible/modules/cloud/amazon/aws_kms.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/aws_kms.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/aws_kms.py validate-modules:E337
lib/ansible/modules/cloud/amazon/aws_kms.py validate-modules:E338
lib/ansible/modules/cloud/amazon/aws_kms_info.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/aws_kms_info.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/aws_kms_info.py validate-modules:E337
lib/ansible/modules/cloud/amazon/aws_region_info.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/aws_region_info.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/aws_region_info.py validate-modules:E337
lib/ansible/modules/cloud/amazon/aws_s3.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/aws_s3.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/aws_s3.py validate-modules:E322
lib/ansible/modules/cloud/amazon/aws_s3.py validate-modules:E324
lib/ansible/modules/cloud/amazon/aws_s3.py validate-modules:E337
lib/ansible/modules/cloud/amazon/aws_s3.py validate-modules:E338
lib/ansible/modules/cloud/amazon/aws_s3_cors.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/aws_s3_cors.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/aws_s3_cors.py validate-modules:E337
lib/ansible/modules/cloud/amazon/aws_secret.py validate-modules:E337
lib/ansible/modules/cloud/amazon/aws_secret.py validate-modules:E338
lib/ansible/modules/cloud/amazon/aws_ses_identity.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/aws_ses_identity.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/aws_ses_identity.py validate-modules:E337
lib/ansible/modules/cloud/amazon/aws_ses_identity.py validate-modules:E338
lib/ansible/modules/cloud/amazon/aws_ses_identity_policy.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/aws_ses_identity_policy.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/aws_ses_identity_policy.py validate-modules:E337
lib/ansible/modules/cloud/amazon/aws_ses_identity_policy.py validate-modules:E338
lib/ansible/modules/cloud/amazon/aws_ses_rule_set.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/aws_ses_rule_set.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/aws_ses_rule_set.py validate-modules:E337
lib/ansible/modules/cloud/amazon/aws_sgw_info.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/aws_sgw_info.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/aws_ssm_parameter_store.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/aws_ssm_parameter_store.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/aws_ssm_parameter_store.py validate-modules:E324
lib/ansible/modules/cloud/amazon/aws_ssm_parameter_store.py validate-modules:E338
lib/ansible/modules/cloud/amazon/aws_waf_condition.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/aws_waf_condition.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/aws_waf_condition.py validate-modules:E337
lib/ansible/modules/cloud/amazon/aws_waf_condition.py validate-modules:E338
lib/ansible/modules/cloud/amazon/aws_waf_info.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/aws_waf_info.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/aws_waf_info.py validate-modules:E338
lib/ansible/modules/cloud/amazon/aws_waf_rule.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/aws_waf_rule.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/aws_waf_rule.py validate-modules:E337
lib/ansible/modules/cloud/amazon/aws_waf_rule.py validate-modules:E338
lib/ansible/modules/cloud/amazon/aws_waf_web_acl.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/aws_waf_web_acl.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/aws_waf_web_acl.py validate-modules:E337
lib/ansible/modules/cloud/amazon/aws_waf_web_acl.py validate-modules:E338
lib/ansible/modules/cloud/amazon/cloudformation.py validate-modules:E324
lib/ansible/modules/cloud/amazon/cloudformation.py validate-modules:E337
lib/ansible/modules/cloud/amazon/cloudformation.py validate-modules:E338
lib/ansible/modules/cloud/amazon/cloudformation_facts.py validate-modules:E338
lib/ansible/modules/cloud/amazon/cloudformation_stack_set.py validate-modules:E322
lib/ansible/modules/cloud/amazon/cloudformation_stack_set.py validate-modules:E337
lib/ansible/modules/cloud/amazon/cloudformation_stack_set.py validate-modules:E338
lib/ansible/modules/cloud/amazon/cloudformation_stack_set.py validate-modules:E340
lib/ansible/modules/cloud/amazon/cloudfront_distribution.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/cloudfront_distribution.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/cloudfront_distribution.py validate-modules:E324
lib/ansible/modules/cloud/amazon/cloudfront_distribution.py validate-modules:E326
lib/ansible/modules/cloud/amazon/cloudfront_distribution.py validate-modules:E337
lib/ansible/modules/cloud/amazon/cloudfront_distribution.py validate-modules:E338
lib/ansible/modules/cloud/amazon/cloudfront_facts.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/cloudfront_facts.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/cloudfront_facts.py validate-modules:E323
lib/ansible/modules/cloud/amazon/cloudfront_facts.py validate-modules:E337
lib/ansible/modules/cloud/amazon/cloudfront_invalidation.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/cloudfront_invalidation.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/cloudfront_invalidation.py validate-modules:E324
lib/ansible/modules/cloud/amazon/cloudfront_invalidation.py validate-modules:E337
lib/ansible/modules/cloud/amazon/cloudfront_invalidation.py validate-modules:E338
lib/ansible/modules/cloud/amazon/cloudfront_origin_access_identity.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/cloudfront_origin_access_identity.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/cloudfront_origin_access_identity.py validate-modules:E324
lib/ansible/modules/cloud/amazon/cloudfront_origin_access_identity.py validate-modules:E338
lib/ansible/modules/cloud/amazon/cloudtrail.py validate-modules:E324
lib/ansible/modules/cloud/amazon/cloudtrail.py validate-modules:E337
lib/ansible/modules/cloud/amazon/cloudtrail.py validate-modules:E338
lib/ansible/modules/cloud/amazon/cloudwatchevent_rule.py validate-modules:E337
lib/ansible/modules/cloud/amazon/cloudwatchevent_rule.py validate-modules:E338
lib/ansible/modules/cloud/amazon/cloudwatchlogs_log_group.py validate-modules:E337
lib/ansible/modules/cloud/amazon/cloudwatchlogs_log_group.py validate-modules:E338
lib/ansible/modules/cloud/amazon/cloudwatchlogs_log_group_info.py validate-modules:E338
lib/ansible/modules/cloud/amazon/data_pipeline.py pylint:blacklisted-name
lib/ansible/modules/cloud/amazon/data_pipeline.py validate-modules:E322
lib/ansible/modules/cloud/amazon/data_pipeline.py validate-modules:E337
lib/ansible/modules/cloud/amazon/data_pipeline.py validate-modules:E338
lib/ansible/modules/cloud/amazon/dms_endpoint.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/dms_endpoint.py validate-modules:E337
lib/ansible/modules/cloud/amazon/dms_endpoint.py validate-modules:E338
lib/ansible/modules/cloud/amazon/dms_replication_subnet_group.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/dynamodb_table.py validate-modules:E337
lib/ansible/modules/cloud/amazon/dynamodb_table.py validate-modules:E338
lib/ansible/modules/cloud/amazon/dynamodb_ttl.py validate-modules:E324
lib/ansible/modules/cloud/amazon/dynamodb_ttl.py validate-modules:E338
lib/ansible/modules/cloud/amazon/ec2.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/ec2.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/ec2.py validate-modules:E322
lib/ansible/modules/cloud/amazon/ec2.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_ami.py validate-modules:E324
lib/ansible/modules/cloud/amazon/ec2_ami.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_ami.py validate-modules:E338
lib/ansible/modules/cloud/amazon/ec2_ami_copy.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/ec2_ami_copy.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/ec2_ami_copy.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_ami_copy.py validate-modules:E338
lib/ansible/modules/cloud/amazon/ec2_ami_info.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/ec2_ami_info.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/ec2_ami_info.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_asg.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/ec2_asg.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/ec2_asg.py validate-modules:E324
lib/ansible/modules/cloud/amazon/ec2_asg.py validate-modules:E326
lib/ansible/modules/cloud/amazon/ec2_asg.py validate-modules:E327
lib/ansible/modules/cloud/amazon/ec2_asg.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_asg.py validate-modules:E338
lib/ansible/modules/cloud/amazon/ec2_asg_info.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_asg_lifecycle_hook.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/ec2_asg_lifecycle_hook.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/ec2_asg_lifecycle_hook.py validate-modules:E327
lib/ansible/modules/cloud/amazon/ec2_asg_lifecycle_hook.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_asg_lifecycle_hook.py validate-modules:E338
lib/ansible/modules/cloud/amazon/ec2_customer_gateway.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/ec2_customer_gateway.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/ec2_customer_gateway.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_customer_gateway.py validate-modules:E338
lib/ansible/modules/cloud/amazon/ec2_customer_gateway_info.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/ec2_customer_gateway_info.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/ec2_customer_gateway_info.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_eip.py validate-modules:E322
lib/ansible/modules/cloud/amazon/ec2_eip.py validate-modules:E324
lib/ansible/modules/cloud/amazon/ec2_eip.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_eip.py validate-modules:E338
lib/ansible/modules/cloud/amazon/ec2_eip_info.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/ec2_eip_info.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/ec2_eip_info.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_elb.py validate-modules:E326
lib/ansible/modules/cloud/amazon/ec2_elb.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_elb.py validate-modules:E338
lib/ansible/modules/cloud/amazon/ec2_elb_info.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/ec2_elb_info.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/ec2_elb_info.py validate-modules:E323
lib/ansible/modules/cloud/amazon/ec2_elb_info.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_eni.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/ec2_eni.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/ec2_eni.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_eni.py validate-modules:E338
lib/ansible/modules/cloud/amazon/ec2_eni_info.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/ec2_eni_info.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/ec2_eni_info.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_group.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/ec2_group.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/ec2_group.py validate-modules:E322
lib/ansible/modules/cloud/amazon/ec2_group.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_group.py validate-modules:E338
lib/ansible/modules/cloud/amazon/ec2_group_info.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_instance.py validate-modules:E324
lib/ansible/modules/cloud/amazon/ec2_instance.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_instance.py validate-modules:E338
lib/ansible/modules/cloud/amazon/ec2_instance_info.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_key.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_key.py validate-modules:E338
lib/ansible/modules/cloud/amazon/ec2_launch_template.py validate-modules:E323
lib/ansible/modules/cloud/amazon/ec2_launch_template.py validate-modules:E326
lib/ansible/modules/cloud/amazon/ec2_launch_template.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_launch_template.py validate-modules:E338
lib/ansible/modules/cloud/amazon/ec2_lc.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/ec2_lc.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/ec2_lc.py validate-modules:E322
lib/ansible/modules/cloud/amazon/ec2_lc.py validate-modules:E324
lib/ansible/modules/cloud/amazon/ec2_lc.py validate-modules:E326
lib/ansible/modules/cloud/amazon/ec2_lc.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_lc.py validate-modules:E338
lib/ansible/modules/cloud/amazon/ec2_lc_find.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_lc_find.py validate-modules:E338
lib/ansible/modules/cloud/amazon/ec2_lc_info.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_lc_info.py validate-modules:E338
lib/ansible/modules/cloud/amazon/ec2_metric_alarm.py validate-modules:E324
lib/ansible/modules/cloud/amazon/ec2_metric_alarm.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_metric_alarm.py validate-modules:E338
lib/ansible/modules/cloud/amazon/ec2_placement_group.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/ec2_placement_group.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/ec2_placement_group.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_placement_group.py validate-modules:E338
lib/ansible/modules/cloud/amazon/ec2_placement_group_info.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/ec2_placement_group_info.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/ec2_placement_group_info.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_scaling_policy.py validate-modules:E324
lib/ansible/modules/cloud/amazon/ec2_scaling_policy.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_scaling_policy.py validate-modules:E338
lib/ansible/modules/cloud/amazon/ec2_snapshot.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_snapshot.py validate-modules:E338
lib/ansible/modules/cloud/amazon/ec2_snapshot_copy.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/ec2_snapshot_copy.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/ec2_snapshot_copy.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_snapshot_copy.py validate-modules:E338
lib/ansible/modules/cloud/amazon/ec2_snapshot_info.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_tag.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_tag.py validate-modules:E338
lib/ansible/modules/cloud/amazon/ec2_transit_gateway.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_transit_gateway.py validate-modules:E338
lib/ansible/modules/cloud/amazon/ec2_vol.py validate-modules:E322
lib/ansible/modules/cloud/amazon/ec2_vol.py validate-modules:E324
lib/ansible/modules/cloud/amazon/ec2_vol.py validate-modules:E326
lib/ansible/modules/cloud/amazon/ec2_vol.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_vol.py validate-modules:E338
lib/ansible/modules/cloud/amazon/ec2_vol_info.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_vpc_dhcp_option.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_vpc_dhcp_option_info.py validate-modules:E322
lib/ansible/modules/cloud/amazon/ec2_vpc_dhcp_option_info.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_vpc_egress_igw.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/ec2_vpc_egress_igw.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/ec2_vpc_egress_igw.py validate-modules:E338
lib/ansible/modules/cloud/amazon/ec2_vpc_endpoint.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_vpc_endpoint.py validate-modules:E338
lib/ansible/modules/cloud/amazon/ec2_vpc_endpoint_info.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_vpc_endpoint_info.py validate-modules:E338
lib/ansible/modules/cloud/amazon/ec2_vpc_igw.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_vpc_igw.py validate-modules:E338
lib/ansible/modules/cloud/amazon/ec2_vpc_igw_info.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_vpc_nacl.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_vpc_nacl.py validate-modules:E338
lib/ansible/modules/cloud/amazon/ec2_vpc_nacl_info.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_vpc_nat_gateway.py pylint:blacklisted-name
lib/ansible/modules/cloud/amazon/ec2_vpc_nat_gateway.py validate-modules:E324
lib/ansible/modules/cloud/amazon/ec2_vpc_nat_gateway.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_vpc_nat_gateway.py validate-modules:E338
lib/ansible/modules/cloud/amazon/ec2_vpc_nat_gateway_info.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_vpc_net.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_vpc_net.py validate-modules:E338
lib/ansible/modules/cloud/amazon/ec2_vpc_net_info.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/ec2_vpc_net_info.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/ec2_vpc_net_info.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_vpc_peer.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_vpc_peer.py validate-modules:E338
lib/ansible/modules/cloud/amazon/ec2_vpc_peering_info.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_vpc_route_table.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/ec2_vpc_route_table.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/ec2_vpc_route_table.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_vpc_route_table.py validate-modules:E338
lib/ansible/modules/cloud/amazon/ec2_vpc_route_table_info.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_vpc_subnet.py validate-modules:E317
lib/ansible/modules/cloud/amazon/ec2_vpc_subnet.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_vpc_subnet.py validate-modules:E338
lib/ansible/modules/cloud/amazon/ec2_vpc_subnet_info.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/ec2_vpc_subnet_info.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/ec2_vpc_subnet_info.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_vpc_vgw.py validate-modules:E323
lib/ansible/modules/cloud/amazon/ec2_vpc_vgw.py validate-modules:E324
lib/ansible/modules/cloud/amazon/ec2_vpc_vgw.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_vpc_vgw.py validate-modules:E338
lib/ansible/modules/cloud/amazon/ec2_vpc_vgw_info.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_vpc_vpn.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/ec2_vpc_vpn.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/ec2_vpc_vpn.py pylint:blacklisted-name
lib/ansible/modules/cloud/amazon/ec2_vpc_vpn.py validate-modules:E326
lib/ansible/modules/cloud/amazon/ec2_vpc_vpn.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_vpc_vpn_info.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/ec2_vpc_vpn_info.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/ec2_vpc_vpn_info.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_win_password.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ec2_win_password.py validate-modules:E338
lib/ansible/modules/cloud/amazon/ecs_attribute.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ecs_attribute.py validate-modules:E338
lib/ansible/modules/cloud/amazon/ecs_cluster.py validate-modules:E324
lib/ansible/modules/cloud/amazon/ecs_cluster.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ecs_cluster.py validate-modules:E338
lib/ansible/modules/cloud/amazon/ecs_ecr.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ecs_ecr.py validate-modules:E338
lib/ansible/modules/cloud/amazon/ecs_service.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/ecs_service.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/ecs_service.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ecs_service.py validate-modules:E338
lib/ansible/modules/cloud/amazon/ecs_service_facts.py validate-modules:E324
lib/ansible/modules/cloud/amazon/ecs_service_facts.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ecs_service_facts.py validate-modules:E338
lib/ansible/modules/cloud/amazon/ecs_task.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ecs_task.py validate-modules:E338
lib/ansible/modules/cloud/amazon/ecs_taskdefinition.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/ecs_taskdefinition.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/ecs_taskdefinition.py validate-modules:E337
lib/ansible/modules/cloud/amazon/ecs_taskdefinition.py validate-modules:E338
lib/ansible/modules/cloud/amazon/ecs_taskdefinition_facts.py validate-modules:E337
lib/ansible/modules/cloud/amazon/efs.py pylint:blacklisted-name
lib/ansible/modules/cloud/amazon/efs.py validate-modules:E337
lib/ansible/modules/cloud/amazon/efs_facts.py pylint:blacklisted-name
lib/ansible/modules/cloud/amazon/efs_facts.py validate-modules:E337
lib/ansible/modules/cloud/amazon/efs_facts.py validate-modules:E338
lib/ansible/modules/cloud/amazon/elasticache.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/elasticache.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/elasticache.py validate-modules:E324
lib/ansible/modules/cloud/amazon/elasticache.py validate-modules:E326
lib/ansible/modules/cloud/amazon/elasticache.py validate-modules:E337
lib/ansible/modules/cloud/amazon/elasticache.py validate-modules:E338
lib/ansible/modules/cloud/amazon/elasticache_info.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/elasticache_info.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/elasticache_info.py validate-modules:E338
lib/ansible/modules/cloud/amazon/elasticache_parameter_group.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/elasticache_parameter_group.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/elasticache_parameter_group.py validate-modules:E326
lib/ansible/modules/cloud/amazon/elasticache_parameter_group.py validate-modules:E337
lib/ansible/modules/cloud/amazon/elasticache_parameter_group.py validate-modules:E338
lib/ansible/modules/cloud/amazon/elasticache_snapshot.py validate-modules:E337
lib/ansible/modules/cloud/amazon/elasticache_subnet_group.py validate-modules:E324
lib/ansible/modules/cloud/amazon/elasticache_subnet_group.py validate-modules:E337
lib/ansible/modules/cloud/amazon/elasticache_subnet_group.py validate-modules:E338
lib/ansible/modules/cloud/amazon/elb_application_lb.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/elb_application_lb.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/elb_application_lb.py validate-modules:E322
lib/ansible/modules/cloud/amazon/elb_application_lb.py validate-modules:E324
lib/ansible/modules/cloud/amazon/elb_application_lb.py validate-modules:E337
lib/ansible/modules/cloud/amazon/elb_application_lb.py validate-modules:E338
lib/ansible/modules/cloud/amazon/elb_application_lb.py validate-modules:E340
lib/ansible/modules/cloud/amazon/elb_application_lb_info.py validate-modules:E337
lib/ansible/modules/cloud/amazon/elb_classic_lb.py validate-modules:E337
lib/ansible/modules/cloud/amazon/elb_classic_lb.py validate-modules:E338
lib/ansible/modules/cloud/amazon/elb_classic_lb_info.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/elb_classic_lb_info.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/elb_classic_lb_info.py validate-modules:E323
lib/ansible/modules/cloud/amazon/elb_classic_lb_info.py validate-modules:E337
lib/ansible/modules/cloud/amazon/elb_instance.py validate-modules:E326
lib/ansible/modules/cloud/amazon/elb_instance.py validate-modules:E337
lib/ansible/modules/cloud/amazon/elb_instance.py validate-modules:E338
lib/ansible/modules/cloud/amazon/elb_network_lb.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/elb_network_lb.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/elb_network_lb.py validate-modules:E322
lib/ansible/modules/cloud/amazon/elb_network_lb.py validate-modules:E337
lib/ansible/modules/cloud/amazon/elb_network_lb.py validate-modules:E338
lib/ansible/modules/cloud/amazon/elb_network_lb.py validate-modules:E340
lib/ansible/modules/cloud/amazon/elb_target.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/elb_target.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/elb_target.py validate-modules:E327
lib/ansible/modules/cloud/amazon/elb_target.py validate-modules:E337
lib/ansible/modules/cloud/amazon/elb_target_group.py validate-modules:E324
lib/ansible/modules/cloud/amazon/elb_target_group.py validate-modules:E326
lib/ansible/modules/cloud/amazon/elb_target_group.py validate-modules:E337
lib/ansible/modules/cloud/amazon/elb_target_group.py validate-modules:E338
lib/ansible/modules/cloud/amazon/elb_target_group_info.py validate-modules:E337
lib/ansible/modules/cloud/amazon/execute_lambda.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/execute_lambda.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/execute_lambda.py validate-modules:E324
lib/ansible/modules/cloud/amazon/execute_lambda.py validate-modules:E337
lib/ansible/modules/cloud/amazon/execute_lambda.py validate-modules:E338
lib/ansible/modules/cloud/amazon/iam.py validate-modules:E317
lib/ansible/modules/cloud/amazon/iam.py validate-modules:E326
lib/ansible/modules/cloud/amazon/iam.py validate-modules:E337
lib/ansible/modules/cloud/amazon/iam.py validate-modules:E338
lib/ansible/modules/cloud/amazon/iam_cert.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/iam_cert.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/iam_cert.py validate-modules:E338
lib/ansible/modules/cloud/amazon/iam_group.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/iam_group.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/iam_group.py validate-modules:E337
lib/ansible/modules/cloud/amazon/iam_group.py validate-modules:E338
lib/ansible/modules/cloud/amazon/iam_managed_policy.py validate-modules:E322
lib/ansible/modules/cloud/amazon/iam_managed_policy.py validate-modules:E324
lib/ansible/modules/cloud/amazon/iam_managed_policy.py validate-modules:E337
lib/ansible/modules/cloud/amazon/iam_managed_policy.py validate-modules:E338
lib/ansible/modules/cloud/amazon/iam_mfa_device_info.py validate-modules:E338
lib/ansible/modules/cloud/amazon/iam_password_policy.py validate-modules:E337
lib/ansible/modules/cloud/amazon/iam_password_policy.py validate-modules:E338
lib/ansible/modules/cloud/amazon/iam_policy.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/iam_policy.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/iam_policy.py validate-modules:E317
lib/ansible/modules/cloud/amazon/iam_policy.py validate-modules:E327
lib/ansible/modules/cloud/amazon/iam_policy.py validate-modules:E337
lib/ansible/modules/cloud/amazon/iam_policy.py validate-modules:E338
lib/ansible/modules/cloud/amazon/iam_role.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/iam_role.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/iam_role.py validate-modules:E337
lib/ansible/modules/cloud/amazon/iam_role_info.py validate-modules:E338
lib/ansible/modules/cloud/amazon/iam_server_certificate_info.py validate-modules:E337
lib/ansible/modules/cloud/amazon/iam_user.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/iam_user.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/iam_user.py validate-modules:E337
lib/ansible/modules/cloud/amazon/iam_user.py validate-modules:E338
lib/ansible/modules/cloud/amazon/kinesis_stream.py pylint:blacklisted-name
lib/ansible/modules/cloud/amazon/kinesis_stream.py validate-modules:E317
lib/ansible/modules/cloud/amazon/kinesis_stream.py validate-modules:E324
lib/ansible/modules/cloud/amazon/kinesis_stream.py validate-modules:E326
lib/ansible/modules/cloud/amazon/kinesis_stream.py validate-modules:E337
lib/ansible/modules/cloud/amazon/kinesis_stream.py validate-modules:E338
lib/ansible/modules/cloud/amazon/lambda.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/lambda.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/lambda.py validate-modules:E323
lib/ansible/modules/cloud/amazon/lambda.py validate-modules:E337
lib/ansible/modules/cloud/amazon/lambda.py validate-modules:E338
lib/ansible/modules/cloud/amazon/lambda_alias.py validate-modules:E317
lib/ansible/modules/cloud/amazon/lambda_alias.py validate-modules:E337
lib/ansible/modules/cloud/amazon/lambda_alias.py validate-modules:E338
lib/ansible/modules/cloud/amazon/lambda_event.py validate-modules:E317
lib/ansible/modules/cloud/amazon/lambda_event.py validate-modules:E337
lib/ansible/modules/cloud/amazon/lambda_event.py validate-modules:E338
lib/ansible/modules/cloud/amazon/lambda_facts.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/lambda_facts.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/lambda_facts.py validate-modules:E338
lib/ansible/modules/cloud/amazon/lambda_policy.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/lambda_policy.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/lambda_policy.py validate-modules:E337
lib/ansible/modules/cloud/amazon/lambda_policy.py validate-modules:E338
lib/ansible/modules/cloud/amazon/lightsail.py validate-modules:E337
lib/ansible/modules/cloud/amazon/rds.py validate-modules:E322
lib/ansible/modules/cloud/amazon/rds.py validate-modules:E327
lib/ansible/modules/cloud/amazon/rds.py validate-modules:E337
lib/ansible/modules/cloud/amazon/rds.py validate-modules:E338
lib/ansible/modules/cloud/amazon/rds_instance_info.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/rds_instance_info.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/rds_instance_info.py validate-modules:E337
lib/ansible/modules/cloud/amazon/rds_instance_info.py validate-modules:E338
lib/ansible/modules/cloud/amazon/rds_param_group.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/rds_param_group.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/rds_param_group.py validate-modules:E324
lib/ansible/modules/cloud/amazon/rds_param_group.py validate-modules:E337
lib/ansible/modules/cloud/amazon/rds_param_group.py validate-modules:E338
lib/ansible/modules/cloud/amazon/rds_snapshot.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/rds_snapshot.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/rds_snapshot_info.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/rds_snapshot_info.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/rds_snapshot_info.py validate-modules:E338
lib/ansible/modules/cloud/amazon/rds_subnet_group.py validate-modules:E324
lib/ansible/modules/cloud/amazon/rds_subnet_group.py validate-modules:E337
lib/ansible/modules/cloud/amazon/rds_subnet_group.py validate-modules:E338
lib/ansible/modules/cloud/amazon/redshift.py validate-modules:E322
lib/ansible/modules/cloud/amazon/redshift.py validate-modules:E326
lib/ansible/modules/cloud/amazon/redshift.py validate-modules:E337
lib/ansible/modules/cloud/amazon/redshift.py validate-modules:E338
lib/ansible/modules/cloud/amazon/redshift_cross_region_snapshots.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/redshift_cross_region_snapshots.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/redshift_cross_region_snapshots.py validate-modules:E337
lib/ansible/modules/cloud/amazon/redshift_info.py validate-modules:E337
lib/ansible/modules/cloud/amazon/redshift_subnet_group.py validate-modules:E324
lib/ansible/modules/cloud/amazon/redshift_subnet_group.py validate-modules:E337
lib/ansible/modules/cloud/amazon/redshift_subnet_group.py validate-modules:E338
lib/ansible/modules/cloud/amazon/route53.py validate-modules:E326
lib/ansible/modules/cloud/amazon/route53.py validate-modules:E327
lib/ansible/modules/cloud/amazon/route53.py validate-modules:E337
lib/ansible/modules/cloud/amazon/route53_health_check.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/route53_health_check.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/route53_health_check.py validate-modules:E324
lib/ansible/modules/cloud/amazon/route53_health_check.py validate-modules:E337
lib/ansible/modules/cloud/amazon/route53_health_check.py validate-modules:E338
lib/ansible/modules/cloud/amazon/route53_info.py validate-modules:E337
lib/ansible/modules/cloud/amazon/route53_info.py validate-modules:E338
lib/ansible/modules/cloud/amazon/route53_zone.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/route53_zone.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/route53_zone.py validate-modules:E338
lib/ansible/modules/cloud/amazon/s3_bucket.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/s3_bucket.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/s3_bucket.py validate-modules:E337
lib/ansible/modules/cloud/amazon/s3_bucket.py validate-modules:E338
lib/ansible/modules/cloud/amazon/s3_lifecycle.py validate-modules:E322
lib/ansible/modules/cloud/amazon/s3_lifecycle.py validate-modules:E337
lib/ansible/modules/cloud/amazon/s3_lifecycle.py validate-modules:E338
lib/ansible/modules/cloud/amazon/s3_logging.py validate-modules:E338
lib/ansible/modules/cloud/amazon/s3_sync.py future-import-boilerplate
lib/ansible/modules/cloud/amazon/s3_sync.py metaclass-boilerplate
lib/ansible/modules/cloud/amazon/s3_sync.py pylint:blacklisted-name
lib/ansible/modules/cloud/amazon/s3_sync.py validate-modules:E322
lib/ansible/modules/cloud/amazon/s3_sync.py validate-modules:E326
lib/ansible/modules/cloud/amazon/s3_sync.py validate-modules:E337
lib/ansible/modules/cloud/amazon/s3_sync.py validate-modules:E338
lib/ansible/modules/cloud/amazon/s3_website.py validate-modules:E324
lib/ansible/modules/cloud/amazon/s3_website.py validate-modules:E337
lib/ansible/modules/cloud/amazon/sns.py validate-modules:E337
lib/ansible/modules/cloud/amazon/sns.py validate-modules:E338
lib/ansible/modules/cloud/amazon/sns_topic.py validate-modules:E337
lib/ansible/modules/cloud/amazon/sns_topic.py validate-modules:E338
lib/ansible/modules/cloud/amazon/sqs_queue.py validate-modules:E337
lib/ansible/modules/cloud/amazon/sqs_queue.py validate-modules:E338
lib/ansible/modules/cloud/amazon/sts_assume_role.py validate-modules:E317
lib/ansible/modules/cloud/amazon/sts_assume_role.py validate-modules:E337
lib/ansible/modules/cloud/amazon/sts_assume_role.py validate-modules:E338
lib/ansible/modules/cloud/amazon/sts_session_token.py validate-modules:E337
lib/ansible/modules/cloud/amazon/sts_session_token.py validate-modules:E338
lib/ansible/modules/cloud/atomic/atomic_container.py validate-modules:E317
lib/ansible/modules/cloud/atomic/atomic_container.py validate-modules:E337
lib/ansible/modules/cloud/atomic/atomic_container.py validate-modules:E338
lib/ansible/modules/cloud/atomic/atomic_host.py validate-modules:E337
lib/ansible/modules/cloud/atomic/atomic_image.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_acs.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_aks.py validate-modules:E322
lib/ansible/modules/cloud/azure/azure_rm_aks.py validate-modules:E324
lib/ansible/modules/cloud/azure/azure_rm_aks.py validate-modules:E326
lib/ansible/modules/cloud/azure/azure_rm_aks.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_aks_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_aksversion_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_appgateway.py validate-modules:E326
lib/ansible/modules/cloud/azure/azure_rm_appgateway.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_applicationsecuritygroup.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_applicationsecuritygroup_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_appserviceplan.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_appserviceplan_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_autoscale.py validate-modules:E322
lib/ansible/modules/cloud/azure/azure_rm_autoscale.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_autoscale_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_availabilityset.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_availabilityset_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_azurefirewall.py validate-modules:E322
lib/ansible/modules/cloud/azure/azure_rm_azurefirewall.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_azurefirewall.py validate-modules:E340
lib/ansible/modules/cloud/azure/azure_rm_batchaccount.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_cdnendpoint.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_cdnendpoint_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_cdnprofile.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_cdnprofile_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_containerinstance.py validate-modules:E325
lib/ansible/modules/cloud/azure/azure_rm_containerinstance.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_containerinstance_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_containerregistry.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_containerregistry_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_cosmosdbaccount.py validate-modules:E322
lib/ansible/modules/cloud/azure/azure_rm_cosmosdbaccount.py validate-modules:E323
lib/ansible/modules/cloud/azure/azure_rm_cosmosdbaccount.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_cosmosdbaccount_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_deployment.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_deployment_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_devtestlab.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_devtestlab_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_devtestlabarmtemplate_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_devtestlabartifact_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_devtestlabartifactsource.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_devtestlabartifactsource_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_devtestlabcustomimage.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_devtestlabcustomimage_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_devtestlabenvironment.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_devtestlabenvironment_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_devtestlabpolicy.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_devtestlabpolicy_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_devtestlabschedule.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_devtestlabschedule_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_devtestlabvirtualmachine.py validate-modules:E322
lib/ansible/modules/cloud/azure/azure_rm_devtestlabvirtualmachine.py validate-modules:E323
lib/ansible/modules/cloud/azure/azure_rm_devtestlabvirtualmachine.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_devtestlabvirtualmachine_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_devtestlabvirtualnetwork.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_devtestlabvirtualnetwork_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_dnsrecordset.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_dnsrecordset.py validate-modules:E338
lib/ansible/modules/cloud/azure/azure_rm_dnsrecordset_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_dnszone.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_dnszone_facts.py validate-modules:E325
lib/ansible/modules/cloud/azure/azure_rm_dnszone_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_functionapp.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_functionapp_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_galleryimageversion.py validate-modules:E325
lib/ansible/modules/cloud/azure/azure_rm_galleryimageversion.py validate-modules:E326
lib/ansible/modules/cloud/azure/azure_rm_galleryimageversion.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_hdinsightcluster.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_hdinsightcluster_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_image.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_image_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_keyvault.py validate-modules:E326
lib/ansible/modules/cloud/azure/azure_rm_keyvault.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_keyvault_info.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_keyvaultkey.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_keyvaultsecret.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_loadbalancer.py validate-modules:E324
lib/ansible/modules/cloud/azure/azure_rm_loadbalancer.py validate-modules:E326
lib/ansible/modules/cloud/azure/azure_rm_loadbalancer.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_loadbalancer_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_loganalyticsworkspace.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_loganalyticsworkspace_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_manageddisk.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_manageddisk_facts.py validate-modules:E325
lib/ansible/modules/cloud/azure/azure_rm_mariadbconfiguration.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_mariadbconfiguration_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_mariadbdatabase.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_mariadbdatabase_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_mariadbfirewallrule.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_mariadbfirewallrule_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_mariadbserver.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_mariadbserver_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_mysqlconfiguration.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_mysqlconfiguration_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_mysqldatabase.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_mysqldatabase_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_mysqlfirewallrule.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_mysqlfirewallrule_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_mysqlserver.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_mysqlserver_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_networkinterface.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_networkinterface.py validate-modules:E338
lib/ansible/modules/cloud/azure/azure_rm_networkinterface_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_postgresqlconfiguration.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_postgresqlconfiguration_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_postgresqldatabase.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_postgresqldatabase_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_postgresqlfirewallrule.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_postgresqlfirewallrule_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_postgresqlserver.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_postgresqlserver_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_publicipaddress.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_publicipaddress_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_rediscache.py validate-modules:E324
lib/ansible/modules/cloud/azure/azure_rm_rediscache.py validate-modules:E325
lib/ansible/modules/cloud/azure/azure_rm_rediscache.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_rediscache_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_rediscachefirewallrule.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_resource.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_resource_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_resourcegroup.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_resourcegroup_info.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_roleassignment.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_roleassignment_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_roledefinition.py validate-modules:E331
lib/ansible/modules/cloud/azure/azure_rm_roledefinition.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_roledefinition.py validate-modules:E340
lib/ansible/modules/cloud/azure/azure_rm_roledefinition_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_route.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_routetable.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_routetable_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_securitygroup.py validate-modules:E322
lib/ansible/modules/cloud/azure/azure_rm_securitygroup.py validate-modules:E324
lib/ansible/modules/cloud/azure/azure_rm_securitygroup.py validate-modules:E326
lib/ansible/modules/cloud/azure/azure_rm_securitygroup.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_securitygroup.py validate-modules:E340
lib/ansible/modules/cloud/azure/azure_rm_securitygroup_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_servicebus.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_servicebus_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_servicebusqueue.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_servicebussaspolicy.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_servicebustopic.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_servicebustopicsubscription.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_sqldatabase.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_sqldatabase_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_sqlfirewallrule.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_sqlfirewallrule_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_sqlserver.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_sqlserver_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_storageaccount.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_storageaccount.py validate-modules:E338
lib/ansible/modules/cloud/azure/azure_rm_storageaccount_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_storageblob.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_subnet.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_subnet_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_trafficmanagerendpoint.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_trafficmanagerendpoint_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_trafficmanagerprofile.py validate-modules:E322
lib/ansible/modules/cloud/azure/azure_rm_trafficmanagerprofile.py validate-modules:E324
lib/ansible/modules/cloud/azure/azure_rm_trafficmanagerprofile.py validate-modules:E326
lib/ansible/modules/cloud/azure/azure_rm_trafficmanagerprofile.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_trafficmanagerprofile_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_virtualmachine.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_virtualmachine_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_virtualmachineextension.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_virtualmachineextension_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_virtualmachineimage_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_virtualmachinescaleset.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_virtualmachinescaleset_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_virtualmachinescalesetextension.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_virtualmachinescalesetextension_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_virtualmachinescalesetinstance.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_virtualmachinescalesetinstance_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_virtualnetwork.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_virtualnetwork_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_virtualnetworkgateway.py validate-modules:E324
lib/ansible/modules/cloud/azure/azure_rm_virtualnetworkgateway.py validate-modules:E326
lib/ansible/modules/cloud/azure/azure_rm_virtualnetworkgateway.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_virtualnetworkgateway.py validate-modules:E338
lib/ansible/modules/cloud/azure/azure_rm_virtualnetworkpeering.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_virtualnetworkpeering_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_webapp.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_webapp_facts.py validate-modules:E337
lib/ansible/modules/cloud/azure/azure_rm_webappslot.py validate-modules:E337
lib/ansible/modules/cloud/centurylink/clc_aa_policy.py validate-modules:E335
lib/ansible/modules/cloud/centurylink/clc_aa_policy.py validate-modules:E338
lib/ansible/modules/cloud/centurylink/clc_alert_policy.py validate-modules:E317
lib/ansible/modules/cloud/centurylink/clc_alert_policy.py validate-modules:E337
lib/ansible/modules/cloud/centurylink/clc_alert_policy.py validate-modules:E338
lib/ansible/modules/cloud/centurylink/clc_blueprint_package.py validate-modules:E335
lib/ansible/modules/cloud/centurylink/clc_blueprint_package.py validate-modules:E337
lib/ansible/modules/cloud/centurylink/clc_blueprint_package.py validate-modules:E338
lib/ansible/modules/cloud/centurylink/clc_firewall_policy.py validate-modules:E317
lib/ansible/modules/cloud/centurylink/clc_firewall_policy.py validate-modules:E324
lib/ansible/modules/cloud/centurylink/clc_firewall_policy.py validate-modules:E326
lib/ansible/modules/cloud/centurylink/clc_firewall_policy.py validate-modules:E335
lib/ansible/modules/cloud/centurylink/clc_firewall_policy.py validate-modules:E337
lib/ansible/modules/cloud/centurylink/clc_firewall_policy.py validate-modules:E338
lib/ansible/modules/cloud/centurylink/clc_group.py validate-modules:E338
lib/ansible/modules/cloud/centurylink/clc_loadbalancer.py validate-modules:E337
lib/ansible/modules/cloud/centurylink/clc_loadbalancer.py validate-modules:E338
lib/ansible/modules/cloud/centurylink/clc_modify_server.py validate-modules:E337
lib/ansible/modules/cloud/centurylink/clc_modify_server.py validate-modules:E338
lib/ansible/modules/cloud/centurylink/clc_publicip.py validate-modules:E337
lib/ansible/modules/cloud/centurylink/clc_publicip.py validate-modules:E338
lib/ansible/modules/cloud/centurylink/clc_server.py validate-modules:E337
lib/ansible/modules/cloud/centurylink/clc_server.py validate-modules:E338
lib/ansible/modules/cloud/centurylink/clc_server_snapshot.py validate-modules:E335
lib/ansible/modules/cloud/centurylink/clc_server_snapshot.py validate-modules:E337
lib/ansible/modules/cloud/centurylink/clc_server_snapshot.py validate-modules:E338
lib/ansible/modules/cloud/cloudscale/cloudscale_floating_ip.py validate-modules:E337
lib/ansible/modules/cloud/cloudscale/cloudscale_floating_ip.py validate-modules:E338
lib/ansible/modules/cloud/cloudscale/cloudscale_server.py validate-modules:E337
lib/ansible/modules/cloud/cloudscale/cloudscale_server.py validate-modules:E338
lib/ansible/modules/cloud/cloudscale/cloudscale_server_group.py validate-modules:E337
lib/ansible/modules/cloud/cloudscale/cloudscale_server_group.py validate-modules:E338
lib/ansible/modules/cloud/cloudscale/cloudscale_volume.py validate-modules:E337
lib/ansible/modules/cloud/cloudscale/cloudscale_volume.py validate-modules:E338
lib/ansible/modules/cloud/cloudstack/cs_account.py future-import-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_account.py metaclass-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_affinitygroup.py future-import-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_affinitygroup.py metaclass-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_affinitygroup.py validate-modules:E338
lib/ansible/modules/cloud/cloudstack/cs_cluster.py future-import-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_cluster.py metaclass-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_configuration.py future-import-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_configuration.py metaclass-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_domain.py future-import-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_domain.py metaclass-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_facts.py future-import-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_facts.py metaclass-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_firewall.py future-import-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_firewall.py metaclass-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_host.py future-import-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_host.py metaclass-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_image_store.py future-import-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_image_store.py metaclass-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_instance_nic.py future-import-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_instance_nic.py metaclass-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_instance_nic_secondaryip.py future-import-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_instance_nic_secondaryip.py metaclass-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_instancegroup.py future-import-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_instancegroup.py metaclass-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_ip_address.py future-import-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_ip_address.py metaclass-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_iso.py future-import-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_iso.py metaclass-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_loadbalancer_rule.py future-import-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_loadbalancer_rule.py metaclass-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_loadbalancer_rule_member.py future-import-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_loadbalancer_rule_member.py metaclass-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_network.py future-import-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_network.py metaclass-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_network.py validate-modules:E338
lib/ansible/modules/cloud/cloudstack/cs_network_acl.py future-import-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_network_acl.py metaclass-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_network_acl_rule.py future-import-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_network_acl_rule.py metaclass-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_network_offering.py future-import-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_network_offering.py metaclass-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_physical_network.py future-import-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_physical_network.py metaclass-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_pod.py future-import-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_pod.py metaclass-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_project.py future-import-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_project.py metaclass-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_region.py future-import-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_region.py metaclass-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_resourcelimit.py future-import-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_resourcelimit.py metaclass-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_role.py future-import-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_role.py metaclass-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_role.py validate-modules:E338
lib/ansible/modules/cloud/cloudstack/cs_role_permission.py future-import-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_role_permission.py metaclass-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_router.py future-import-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_router.py metaclass-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_sshkeypair.py future-import-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_sshkeypair.py metaclass-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_staticnat.py future-import-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_staticnat.py metaclass-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_storage_pool.py future-import-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_storage_pool.py metaclass-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_template.py future-import-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_template.py metaclass-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_traffic_type.py future-import-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_traffic_type.py metaclass-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_user.py future-import-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_user.py metaclass-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_vlan_ip_range.py future-import-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_vlan_ip_range.py metaclass-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_vmsnapshot.py future-import-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_vmsnapshot.py metaclass-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_volume.py future-import-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_volume.py metaclass-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_vpc_offering.py future-import-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_vpc_offering.py metaclass-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_vpn_gateway.py future-import-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_vpn_gateway.py metaclass-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_zone.py future-import-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_zone.py metaclass-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_zone_facts.py future-import-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_zone_facts.py metaclass-boilerplate
lib/ansible/modules/cloud/cloudstack/cs_zone_facts.py validate-modules:E338
lib/ansible/modules/cloud/digital_ocean/_digital_ocean.py validate-modules:E322
lib/ansible/modules/cloud/digital_ocean/_digital_ocean.py validate-modules:E337
lib/ansible/modules/cloud/digital_ocean/_digital_ocean.py validate-modules:E338
lib/ansible/modules/cloud/digital_ocean/digital_ocean_block_storage.py validate-modules:E337
lib/ansible/modules/cloud/digital_ocean/digital_ocean_block_storage.py validate-modules:E338
lib/ansible/modules/cloud/digital_ocean/digital_ocean_certificate.py validate-modules:E337
lib/ansible/modules/cloud/digital_ocean/digital_ocean_certificate.py validate-modules:E338
lib/ansible/modules/cloud/digital_ocean/digital_ocean_certificate_info.py validate-modules:E337
lib/ansible/modules/cloud/digital_ocean/digital_ocean_domain.py validate-modules:E337
lib/ansible/modules/cloud/digital_ocean/digital_ocean_domain.py validate-modules:E338
lib/ansible/modules/cloud/digital_ocean/digital_ocean_domain_info.py validate-modules:E337
lib/ansible/modules/cloud/digital_ocean/digital_ocean_droplet.py validate-modules:E337
lib/ansible/modules/cloud/digital_ocean/digital_ocean_droplet.py validate-modules:E338
lib/ansible/modules/cloud/digital_ocean/digital_ocean_firewall_info.py validate-modules:E337
lib/ansible/modules/cloud/digital_ocean/digital_ocean_floating_ip.py validate-modules:E322
lib/ansible/modules/cloud/digital_ocean/digital_ocean_floating_ip.py validate-modules:E324
lib/ansible/modules/cloud/digital_ocean/digital_ocean_floating_ip.py validate-modules:E337
lib/ansible/modules/cloud/digital_ocean/digital_ocean_floating_ip.py validate-modules:E338
lib/ansible/modules/cloud/digital_ocean/digital_ocean_image_info.py validate-modules:E337
lib/ansible/modules/cloud/digital_ocean/digital_ocean_load_balancer_info.py validate-modules:E337
lib/ansible/modules/cloud/digital_ocean/digital_ocean_snapshot_info.py validate-modules:E337
lib/ansible/modules/cloud/digital_ocean/digital_ocean_sshkey.py validate-modules:E322
lib/ansible/modules/cloud/digital_ocean/digital_ocean_sshkey.py validate-modules:E324
lib/ansible/modules/cloud/digital_ocean/digital_ocean_sshkey.py validate-modules:E337
lib/ansible/modules/cloud/digital_ocean/digital_ocean_sshkey.py validate-modules:E338
lib/ansible/modules/cloud/digital_ocean/digital_ocean_tag.py validate-modules:E337
lib/ansible/modules/cloud/digital_ocean/digital_ocean_tag.py validate-modules:E338
lib/ansible/modules/cloud/digital_ocean/digital_ocean_tag_info.py validate-modules:E337
lib/ansible/modules/cloud/digital_ocean/digital_ocean_volume_info.py validate-modules:E337
lib/ansible/modules/cloud/dimensiondata/dimensiondata_network.py validate-modules:E326
lib/ansible/modules/cloud/dimensiondata/dimensiondata_network.py validate-modules:E337
lib/ansible/modules/cloud/dimensiondata/dimensiondata_network.py validate-modules:E338
lib/ansible/modules/cloud/dimensiondata/dimensiondata_vlan.py validate-modules:E326
lib/ansible/modules/cloud/dimensiondata/dimensiondata_vlan.py validate-modules:E337
lib/ansible/modules/cloud/dimensiondata/dimensiondata_vlan.py validate-modules:E338
lib/ansible/modules/cloud/docker/docker_container.py use-argspec-type-path # uses colon-separated paths, can't use type=path
lib/ansible/modules/cloud/docker/docker_swarm_service.py validate-modules:E326
lib/ansible/modules/cloud/google/_gcdns_record.py validate-modules:E337
lib/ansible/modules/cloud/google/_gcdns_zone.py validate-modules:E337
lib/ansible/modules/cloud/google/_gce.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/_gce.py validate-modules:E326
lib/ansible/modules/cloud/google/_gce.py validate-modules:E337
lib/ansible/modules/cloud/google/_gce.py validate-modules:E338
lib/ansible/modules/cloud/google/_gcp_backend_service.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/_gcp_backend_service.py validate-modules:E322
lib/ansible/modules/cloud/google/_gcp_backend_service.py validate-modules:E324
lib/ansible/modules/cloud/google/_gcp_backend_service.py validate-modules:E326
lib/ansible/modules/cloud/google/_gcp_backend_service.py validate-modules:E337
lib/ansible/modules/cloud/google/_gcp_backend_service.py validate-modules:E338
lib/ansible/modules/cloud/google/_gcp_forwarding_rule.py validate-modules:E322
lib/ansible/modules/cloud/google/_gcp_forwarding_rule.py validate-modules:E324
lib/ansible/modules/cloud/google/_gcp_forwarding_rule.py validate-modules:E326
lib/ansible/modules/cloud/google/_gcp_forwarding_rule.py validate-modules:E337
lib/ansible/modules/cloud/google/_gcp_forwarding_rule.py validate-modules:E338
lib/ansible/modules/cloud/google/_gcp_healthcheck.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/_gcp_healthcheck.py validate-modules:E324
lib/ansible/modules/cloud/google/_gcp_healthcheck.py validate-modules:E326
lib/ansible/modules/cloud/google/_gcp_healthcheck.py validate-modules:E337
lib/ansible/modules/cloud/google/_gcp_healthcheck.py validate-modules:E338
lib/ansible/modules/cloud/google/_gcp_target_proxy.py validate-modules:E322
lib/ansible/modules/cloud/google/_gcp_target_proxy.py validate-modules:E326
lib/ansible/modules/cloud/google/_gcp_target_proxy.py validate-modules:E337
lib/ansible/modules/cloud/google/_gcp_target_proxy.py validate-modules:E338
lib/ansible/modules/cloud/google/_gcp_url_map.py validate-modules:E322
lib/ansible/modules/cloud/google/_gcp_url_map.py validate-modules:E324
lib/ansible/modules/cloud/google/_gcp_url_map.py validate-modules:E326
lib/ansible/modules/cloud/google/_gcp_url_map.py validate-modules:E337
lib/ansible/modules/cloud/google/_gcp_url_map.py validate-modules:E338
lib/ansible/modules/cloud/google/_gcspanner.py validate-modules:E322
lib/ansible/modules/cloud/google/_gcspanner.py validate-modules:E337
lib/ansible/modules/cloud/google/gc_storage.py validate-modules:E322
lib/ansible/modules/cloud/google/gc_storage.py validate-modules:E324
lib/ansible/modules/cloud/google/gc_storage.py validate-modules:E326
lib/ansible/modules/cloud/google/gc_storage.py validate-modules:E337
lib/ansible/modules/cloud/google/gc_storage.py validate-modules:E338
lib/ansible/modules/cloud/google/gce_eip.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/gce_eip.py validate-modules:E322
lib/ansible/modules/cloud/google/gce_eip.py validate-modules:E337
lib/ansible/modules/cloud/google/gce_eip.py validate-modules:E338
lib/ansible/modules/cloud/google/gce_img.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/gce_img.py validate-modules:E337
lib/ansible/modules/cloud/google/gce_img.py validate-modules:E338
lib/ansible/modules/cloud/google/gce_instance_template.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/gce_instance_template.py validate-modules:E322
lib/ansible/modules/cloud/google/gce_instance_template.py validate-modules:E324
lib/ansible/modules/cloud/google/gce_instance_template.py validate-modules:E326
lib/ansible/modules/cloud/google/gce_instance_template.py validate-modules:E337
lib/ansible/modules/cloud/google/gce_instance_template.py validate-modules:E338
lib/ansible/modules/cloud/google/gce_labels.py validate-modules:E322
lib/ansible/modules/cloud/google/gce_labels.py validate-modules:E324
lib/ansible/modules/cloud/google/gce_labels.py validate-modules:E326
lib/ansible/modules/cloud/google/gce_labels.py validate-modules:E337
lib/ansible/modules/cloud/google/gce_labels.py validate-modules:E338
lib/ansible/modules/cloud/google/gce_lb.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/gce_lb.py validate-modules:E323
lib/ansible/modules/cloud/google/gce_lb.py validate-modules:E326
lib/ansible/modules/cloud/google/gce_lb.py validate-modules:E337
lib/ansible/modules/cloud/google/gce_lb.py validate-modules:E338
lib/ansible/modules/cloud/google/gce_mig.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/gce_mig.py validate-modules:E322
lib/ansible/modules/cloud/google/gce_mig.py validate-modules:E337
lib/ansible/modules/cloud/google/gce_mig.py validate-modules:E338
lib/ansible/modules/cloud/google/gce_net.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/gce_net.py validate-modules:E323
lib/ansible/modules/cloud/google/gce_net.py validate-modules:E326
lib/ansible/modules/cloud/google/gce_net.py validate-modules:E337
lib/ansible/modules/cloud/google/gce_net.py validate-modules:E338
lib/ansible/modules/cloud/google/gce_pd.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/gce_pd.py validate-modules:E322
lib/ansible/modules/cloud/google/gce_pd.py validate-modules:E326
lib/ansible/modules/cloud/google/gce_pd.py validate-modules:E337
lib/ansible/modules/cloud/google/gce_pd.py validate-modules:E338
lib/ansible/modules/cloud/google/gce_snapshot.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/gce_snapshot.py validate-modules:E324
lib/ansible/modules/cloud/google/gce_snapshot.py validate-modules:E337
lib/ansible/modules/cloud/google/gce_snapshot.py validate-modules:E338
lib/ansible/modules/cloud/google/gce_tag.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/gce_tag.py validate-modules:E337
lib/ansible/modules/cloud/google/gcp_bigquery_table.py validate-modules:E324
lib/ansible/modules/cloud/google/gcp_compute_address_facts.py validate-modules:E337
lib/ansible/modules/cloud/google/gcp_compute_backend_bucket_facts.py validate-modules:E337
lib/ansible/modules/cloud/google/gcp_compute_backend_service_facts.py validate-modules:E337
lib/ansible/modules/cloud/google/gcp_compute_disk_facts.py validate-modules:E337
lib/ansible/modules/cloud/google/gcp_compute_firewall_facts.py validate-modules:E337
lib/ansible/modules/cloud/google/gcp_compute_forwarding_rule_facts.py validate-modules:E337
lib/ansible/modules/cloud/google/gcp_compute_global_address_facts.py validate-modules:E337
lib/ansible/modules/cloud/google/gcp_compute_global_forwarding_rule_facts.py validate-modules:E337
lib/ansible/modules/cloud/google/gcp_compute_health_check_facts.py validate-modules:E337
lib/ansible/modules/cloud/google/gcp_compute_http_health_check_facts.py validate-modules:E337
lib/ansible/modules/cloud/google/gcp_compute_https_health_check_facts.py validate-modules:E337
lib/ansible/modules/cloud/google/gcp_compute_image_facts.py validate-modules:E337
lib/ansible/modules/cloud/google/gcp_compute_instance_facts.py validate-modules:E337
lib/ansible/modules/cloud/google/gcp_compute_instance_group_facts.py validate-modules:E337
lib/ansible/modules/cloud/google/gcp_compute_instance_group_manager_facts.py validate-modules:E337
lib/ansible/modules/cloud/google/gcp_compute_instance_template_facts.py validate-modules:E337
lib/ansible/modules/cloud/google/gcp_compute_interconnect_attachment_facts.py validate-modules:E337
lib/ansible/modules/cloud/google/gcp_compute_network_facts.py validate-modules:E337
lib/ansible/modules/cloud/google/gcp_compute_region_disk_facts.py validate-modules:E337
lib/ansible/modules/cloud/google/gcp_compute_route_facts.py validate-modules:E337
lib/ansible/modules/cloud/google/gcp_compute_router_facts.py validate-modules:E337
lib/ansible/modules/cloud/google/gcp_compute_ssl_certificate_facts.py validate-modules:E337
lib/ansible/modules/cloud/google/gcp_compute_ssl_policy_facts.py validate-modules:E337
lib/ansible/modules/cloud/google/gcp_compute_subnetwork_facts.py validate-modules:E337
lib/ansible/modules/cloud/google/gcp_compute_target_http_proxy_facts.py validate-modules:E337
lib/ansible/modules/cloud/google/gcp_compute_target_https_proxy_facts.py validate-modules:E337
lib/ansible/modules/cloud/google/gcp_compute_target_pool_facts.py validate-modules:E337
lib/ansible/modules/cloud/google/gcp_compute_target_ssl_proxy_facts.py validate-modules:E337
lib/ansible/modules/cloud/google/gcp_compute_target_tcp_proxy_facts.py validate-modules:E337
lib/ansible/modules/cloud/google/gcp_compute_target_vpn_gateway_facts.py validate-modules:E337
lib/ansible/modules/cloud/google/gcp_compute_url_map_facts.py validate-modules:E337
lib/ansible/modules/cloud/google/gcp_compute_vpn_tunnel_facts.py validate-modules:E337
lib/ansible/modules/cloud/google/gcp_dns_managed_zone_facts.py validate-modules:E337
lib/ansible/modules/cloud/google/gcpubsub.py validate-modules:E322
lib/ansible/modules/cloud/google/gcpubsub.py validate-modules:E323
lib/ansible/modules/cloud/google/gcpubsub.py validate-modules:E337
lib/ansible/modules/cloud/google/gcpubsub_info.py validate-modules:E322
lib/ansible/modules/cloud/google/gcpubsub_info.py validate-modules:E324
lib/ansible/modules/cloud/google/gcpubsub_info.py validate-modules:E326
lib/ansible/modules/cloud/google/gcpubsub_info.py validate-modules:E338
lib/ansible/modules/cloud/hcloud/hcloud_image_facts.py validate-modules:E338
lib/ansible/modules/cloud/heroku/heroku_collaborator.py validate-modules:E337
lib/ansible/modules/cloud/kubevirt/kubevirt_cdi_upload.py validate-modules:E338
lib/ansible/modules/cloud/kubevirt/kubevirt_preset.py validate-modules:E337
lib/ansible/modules/cloud/kubevirt/kubevirt_preset.py validate-modules:E338
lib/ansible/modules/cloud/kubevirt/kubevirt_pvc.py validate-modules:E337
lib/ansible/modules/cloud/kubevirt/kubevirt_pvc.py validate-modules:E338
lib/ansible/modules/cloud/kubevirt/kubevirt_rs.py validate-modules:E337
lib/ansible/modules/cloud/kubevirt/kubevirt_rs.py validate-modules:E338
lib/ansible/modules/cloud/kubevirt/kubevirt_template.py validate-modules:E338
lib/ansible/modules/cloud/kubevirt/kubevirt_vm.py validate-modules:E337
lib/ansible/modules/cloud/kubevirt/kubevirt_vm.py validate-modules:E338
lib/ansible/modules/cloud/linode/linode.py validate-modules:E322
lib/ansible/modules/cloud/linode/linode.py validate-modules:E324
lib/ansible/modules/cloud/linode/linode.py validate-modules:E337
lib/ansible/modules/cloud/linode/linode_v4.py validate-modules:E337
lib/ansible/modules/cloud/lxc/lxc_container.py pylint:blacklisted-name
lib/ansible/modules/cloud/lxc/lxc_container.py use-argspec-type-path
lib/ansible/modules/cloud/lxc/lxc_container.py validate-modules:E210
lib/ansible/modules/cloud/lxc/lxc_container.py validate-modules:E324
lib/ansible/modules/cloud/lxc/lxc_container.py validate-modules:E326
lib/ansible/modules/cloud/lxc/lxc_container.py validate-modules:E337
lib/ansible/modules/cloud/lxc/lxc_container.py validate-modules:E338
lib/ansible/modules/cloud/lxd/lxd_container.py validate-modules:E322
lib/ansible/modules/cloud/lxd/lxd_container.py validate-modules:E324
lib/ansible/modules/cloud/lxd/lxd_container.py validate-modules:E337
lib/ansible/modules/cloud/lxd/lxd_container.py validate-modules:E338
lib/ansible/modules/cloud/lxd/lxd_profile.py validate-modules:E324
lib/ansible/modules/cloud/lxd/lxd_profile.py validate-modules:E337
lib/ansible/modules/cloud/lxd/lxd_profile.py validate-modules:E338
lib/ansible/modules/cloud/memset/memset_dns_reload.py validate-modules:E337
lib/ansible/modules/cloud/memset/memset_memstore_info.py validate-modules:E337
lib/ansible/modules/cloud/memset/memset_server_info.py validate-modules:E337
lib/ansible/modules/cloud/memset/memset_zone.py validate-modules:E337
lib/ansible/modules/cloud/memset/memset_zone_domain.py validate-modules:E337
lib/ansible/modules/cloud/memset/memset_zone_record.py validate-modules:E337
lib/ansible/modules/cloud/misc/cloud_init_data_facts.py validate-modules:E338
lib/ansible/modules/cloud/misc/helm.py validate-modules:E337
lib/ansible/modules/cloud/misc/helm.py validate-modules:E338
lib/ansible/modules/cloud/misc/ovirt.py validate-modules:E322
lib/ansible/modules/cloud/misc/ovirt.py validate-modules:E326
lib/ansible/modules/cloud/misc/ovirt.py validate-modules:E337
lib/ansible/modules/cloud/misc/proxmox.py validate-modules:E337
lib/ansible/modules/cloud/misc/proxmox.py validate-modules:E338
lib/ansible/modules/cloud/misc/proxmox_kvm.py validate-modules:E322
lib/ansible/modules/cloud/misc/proxmox_kvm.py validate-modules:E324
lib/ansible/modules/cloud/misc/proxmox_kvm.py validate-modules:E337
lib/ansible/modules/cloud/misc/proxmox_kvm.py validate-modules:E338
lib/ansible/modules/cloud/misc/proxmox_template.py validate-modules:E323
lib/ansible/modules/cloud/misc/proxmox_template.py validate-modules:E337
lib/ansible/modules/cloud/misc/proxmox_template.py validate-modules:E338
lib/ansible/modules/cloud/misc/terraform.py validate-modules:E324
lib/ansible/modules/cloud/misc/terraform.py validate-modules:E337
lib/ansible/modules/cloud/misc/terraform.py validate-modules:E338
lib/ansible/modules/cloud/misc/virt.py validate-modules:E322
lib/ansible/modules/cloud/misc/virt.py validate-modules:E326
lib/ansible/modules/cloud/misc/virt.py validate-modules:E337
lib/ansible/modules/cloud/misc/virt_net.py validate-modules:E338
lib/ansible/modules/cloud/misc/virt_pool.py validate-modules:E326
lib/ansible/modules/cloud/misc/virt_pool.py validate-modules:E338
lib/ansible/modules/cloud/oneandone/oneandone_firewall_policy.py validate-modules:E337
lib/ansible/modules/cloud/oneandone/oneandone_load_balancer.py validate-modules:E324
lib/ansible/modules/cloud/oneandone/oneandone_load_balancer.py validate-modules:E337
lib/ansible/modules/cloud/oneandone/oneandone_load_balancer.py validate-modules:E338
lib/ansible/modules/cloud/oneandone/oneandone_monitoring_policy.py validate-modules:E337
lib/ansible/modules/cloud/oneandone/oneandone_private_network.py validate-modules:E326
lib/ansible/modules/cloud/oneandone/oneandone_private_network.py validate-modules:E337
lib/ansible/modules/cloud/oneandone/oneandone_private_network.py validate-modules:E338
lib/ansible/modules/cloud/oneandone/oneandone_public_ip.py validate-modules:E324
lib/ansible/modules/cloud/oneandone/oneandone_public_ip.py validate-modules:E326
lib/ansible/modules/cloud/oneandone/oneandone_public_ip.py validate-modules:E337
lib/ansible/modules/cloud/oneandone/oneandone_public_ip.py validate-modules:E338
lib/ansible/modules/cloud/oneandone/oneandone_server.py validate-modules:E326
lib/ansible/modules/cloud/oneandone/oneandone_server.py validate-modules:E337
lib/ansible/modules/cloud/oneandone/oneandone_server.py validate-modules:E338
lib/ansible/modules/cloud/opennebula/one_host.py validate-modules:E337
lib/ansible/modules/cloud/opennebula/one_host.py validate-modules:E338
lib/ansible/modules/cloud/opennebula/one_image.py validate-modules:E337
lib/ansible/modules/cloud/opennebula/one_image_info.py validate-modules:E337
lib/ansible/modules/cloud/opennebula/one_service.py validate-modules:E337
lib/ansible/modules/cloud/opennebula/one_vm.py validate-modules:E337
lib/ansible/modules/cloud/openstack/os_auth.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_client_config.py validate-modules:E337
lib/ansible/modules/cloud/openstack/os_coe_cluster.py validate-modules:E337
lib/ansible/modules/cloud/openstack/os_coe_cluster.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_coe_cluster_template.py validate-modules:E337
lib/ansible/modules/cloud/openstack/os_coe_cluster_template.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_flavor_facts.py validate-modules:E324
lib/ansible/modules/cloud/openstack/os_flavor_facts.py validate-modules:E335
lib/ansible/modules/cloud/openstack/os_flavor_facts.py validate-modules:E337
lib/ansible/modules/cloud/openstack/os_flavor_facts.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_floating_ip.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_group.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_image.py validate-modules:E324
lib/ansible/modules/cloud/openstack/os_image.py validate-modules:E326
lib/ansible/modules/cloud/openstack/os_image.py validate-modules:E337
lib/ansible/modules/cloud/openstack/os_image.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_image_facts.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_ironic.py validate-modules:E322
lib/ansible/modules/cloud/openstack/os_ironic.py validate-modules:E323
lib/ansible/modules/cloud/openstack/os_ironic.py validate-modules:E326
lib/ansible/modules/cloud/openstack/os_ironic.py validate-modules:E337
lib/ansible/modules/cloud/openstack/os_ironic.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_ironic_inspect.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_ironic_node.py validate-modules:E322
lib/ansible/modules/cloud/openstack/os_ironic_node.py validate-modules:E324
lib/ansible/modules/cloud/openstack/os_ironic_node.py validate-modules:E326
lib/ansible/modules/cloud/openstack/os_ironic_node.py validate-modules:E335
lib/ansible/modules/cloud/openstack/os_ironic_node.py validate-modules:E337
lib/ansible/modules/cloud/openstack/os_ironic_node.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_keypair.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_keystone_domain.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_keystone_domain_facts.py validate-modules:E337
lib/ansible/modules/cloud/openstack/os_keystone_domain_facts.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_keystone_endpoint.py validate-modules:E322
lib/ansible/modules/cloud/openstack/os_keystone_endpoint.py validate-modules:E326
lib/ansible/modules/cloud/openstack/os_keystone_endpoint.py validate-modules:E337
lib/ansible/modules/cloud/openstack/os_keystone_endpoint.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_keystone_role.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_keystone_service.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_listener.py validate-modules:E337
lib/ansible/modules/cloud/openstack/os_listener.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_loadbalancer.py validate-modules:E337
lib/ansible/modules/cloud/openstack/os_loadbalancer.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_member.py validate-modules:E337
lib/ansible/modules/cloud/openstack/os_member.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_network.py validate-modules:E337
lib/ansible/modules/cloud/openstack/os_network.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_networks_facts.py validate-modules:E337
lib/ansible/modules/cloud/openstack/os_networks_facts.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_nova_flavor.py validate-modules:E337
lib/ansible/modules/cloud/openstack/os_nova_flavor.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_nova_host_aggregate.py validate-modules:E337
lib/ansible/modules/cloud/openstack/os_nova_host_aggregate.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_object.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_pool.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_port.py validate-modules:E337
lib/ansible/modules/cloud/openstack/os_port.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_port_facts.py validate-modules:E337
lib/ansible/modules/cloud/openstack/os_port_facts.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_project.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_project_access.py validate-modules:E337
lib/ansible/modules/cloud/openstack/os_project_access.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_project_facts.py validate-modules:E337
lib/ansible/modules/cloud/openstack/os_project_facts.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_quota.py validate-modules:E322
lib/ansible/modules/cloud/openstack/os_quota.py validate-modules:E323
lib/ansible/modules/cloud/openstack/os_quota.py validate-modules:E326
lib/ansible/modules/cloud/openstack/os_quota.py validate-modules:E337
lib/ansible/modules/cloud/openstack/os_quota.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_recordset.py validate-modules:E337
lib/ansible/modules/cloud/openstack/os_recordset.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_router.py validate-modules:E337
lib/ansible/modules/cloud/openstack/os_router.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_security_group.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_security_group_rule.py validate-modules:E337
lib/ansible/modules/cloud/openstack/os_security_group_rule.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_server.py validate-modules:E322
lib/ansible/modules/cloud/openstack/os_server.py validate-modules:E324
lib/ansible/modules/cloud/openstack/os_server.py validate-modules:E337
lib/ansible/modules/cloud/openstack/os_server.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_server_action.py validate-modules:E324
lib/ansible/modules/cloud/openstack/os_server_action.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_server_facts.py validate-modules:E337
lib/ansible/modules/cloud/openstack/os_server_facts.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_server_group.py validate-modules:E337
lib/ansible/modules/cloud/openstack/os_server_group.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_server_metadata.py validate-modules:E337
lib/ansible/modules/cloud/openstack/os_server_metadata.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_server_volume.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_stack.py validate-modules:E324
lib/ansible/modules/cloud/openstack/os_stack.py validate-modules:E337
lib/ansible/modules/cloud/openstack/os_stack.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_subnet.py validate-modules:E326
lib/ansible/modules/cloud/openstack/os_subnet.py validate-modules:E337
lib/ansible/modules/cloud/openstack/os_subnet.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_subnets_facts.py validate-modules:E337
lib/ansible/modules/cloud/openstack/os_subnets_facts.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_user.py validate-modules:E337
lib/ansible/modules/cloud/openstack/os_user.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_user_facts.py validate-modules:E337
lib/ansible/modules/cloud/openstack/os_user_facts.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_user_group.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_user_role.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_volume.py validate-modules:E322
lib/ansible/modules/cloud/openstack/os_volume.py validate-modules:E337
lib/ansible/modules/cloud/openstack/os_volume.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_volume_snapshot.py validate-modules:E338
lib/ansible/modules/cloud/openstack/os_zone.py validate-modules:E326
lib/ansible/modules/cloud/openstack/os_zone.py validate-modules:E337
lib/ansible/modules/cloud/openstack/os_zone.py validate-modules:E338
lib/ansible/modules/cloud/oracle/oci_vcn.py validate-modules:E337
lib/ansible/modules/cloud/ovh/ovh_ip_failover.py validate-modules:E337
lib/ansible/modules/cloud/ovh/ovh_ip_failover.py validate-modules:E338
lib/ansible/modules/cloud/ovh/ovh_ip_loadbalancing_backend.py validate-modules:E337
lib/ansible/modules/cloud/ovh/ovh_ip_loadbalancing_backend.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_affinity_group.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_affinity_group.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_affinity_group.py validate-modules:E337
lib/ansible/modules/cloud/ovirt/ovirt_affinity_label.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_affinity_label.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_affinity_label.py validate-modules:E317
lib/ansible/modules/cloud/ovirt/ovirt_affinity_label.py validate-modules:E337
lib/ansible/modules/cloud/ovirt/ovirt_affinity_label.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_affinity_label_facts.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_affinity_label_facts.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_affinity_label_facts.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_auth.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_auth.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_auth.py validate-modules:E322
lib/ansible/modules/cloud/ovirt/ovirt_auth.py validate-modules:E324
lib/ansible/modules/cloud/ovirt/ovirt_auth.py validate-modules:E337
lib/ansible/modules/cloud/ovirt/ovirt_auth.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_cluster.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_cluster.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_cluster.py validate-modules:E317
lib/ansible/modules/cloud/ovirt/ovirt_cluster.py validate-modules:E322
lib/ansible/modules/cloud/ovirt/ovirt_cluster.py validate-modules:E326
lib/ansible/modules/cloud/ovirt/ovirt_cluster.py validate-modules:E337
lib/ansible/modules/cloud/ovirt/ovirt_cluster.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_cluster_facts.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_cluster_facts.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_cluster_facts.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_datacenter.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_datacenter.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_datacenter.py validate-modules:E317
lib/ansible/modules/cloud/ovirt/ovirt_datacenter.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_datacenter_facts.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_datacenter_facts.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_datacenter_facts.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_disk.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_disk.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_disk.py validate-modules:E322
lib/ansible/modules/cloud/ovirt/ovirt_disk.py validate-modules:E324
lib/ansible/modules/cloud/ovirt/ovirt_disk.py validate-modules:E326
lib/ansible/modules/cloud/ovirt/ovirt_disk.py validate-modules:E337
lib/ansible/modules/cloud/ovirt/ovirt_disk.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_disk_facts.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_disk_facts.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_disk_facts.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_external_provider.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_external_provider.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_external_provider.py validate-modules:E317
lib/ansible/modules/cloud/ovirt/ovirt_external_provider.py validate-modules:E322
lib/ansible/modules/cloud/ovirt/ovirt_external_provider.py validate-modules:E324
lib/ansible/modules/cloud/ovirt/ovirt_external_provider.py validate-modules:E337
lib/ansible/modules/cloud/ovirt/ovirt_external_provider.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_external_provider_facts.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_external_provider_facts.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_external_provider_facts.py validate-modules:E317
lib/ansible/modules/cloud/ovirt/ovirt_external_provider_facts.py validate-modules:E322
lib/ansible/modules/cloud/ovirt/ovirt_external_provider_facts.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_group.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_group.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_group.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_group_facts.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_group_facts.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_group_facts.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_host.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_host.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_host.py validate-modules:E335
lib/ansible/modules/cloud/ovirt/ovirt_host.py validate-modules:E337
lib/ansible/modules/cloud/ovirt/ovirt_host.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_host_facts.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_host_facts.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_host_facts.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_host_network.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_host_network.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_host_network.py validate-modules:E337
lib/ansible/modules/cloud/ovirt/ovirt_host_network.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_host_pm.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_host_pm.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_host_pm.py validate-modules:E317
lib/ansible/modules/cloud/ovirt/ovirt_host_pm.py validate-modules:E337
lib/ansible/modules/cloud/ovirt/ovirt_host_pm.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_host_storage_facts.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_host_storage_facts.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_host_storage_facts.py validate-modules:E337
lib/ansible/modules/cloud/ovirt/ovirt_host_storage_facts.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_instance_type.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_instance_type.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_instance_type.py validate-modules:E337
lib/ansible/modules/cloud/ovirt/ovirt_job.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_job.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_job.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_mac_pool.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_mac_pool.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_mac_pool.py validate-modules:E337
lib/ansible/modules/cloud/ovirt/ovirt_mac_pool.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_network.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_network.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_network.py validate-modules:E337
lib/ansible/modules/cloud/ovirt/ovirt_network.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_network_facts.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_network_facts.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_network_facts.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_nic.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_nic.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_nic.py validate-modules:E337
lib/ansible/modules/cloud/ovirt/ovirt_nic.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_nic_facts.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_nic_facts.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_nic_facts.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_permission.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_permission.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_permission.py validate-modules:E337
lib/ansible/modules/cloud/ovirt/ovirt_permission_facts.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_permission_facts.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_permission_facts.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_quota.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_quota.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_quota.py validate-modules:E337
lib/ansible/modules/cloud/ovirt/ovirt_quota.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_quota_facts.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_quota_facts.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_quota_facts.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_role.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_role.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_role.py validate-modules:E337
lib/ansible/modules/cloud/ovirt/ovirt_role.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_scheduling_policy_facts.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_scheduling_policy_facts.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_scheduling_policy_facts.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_snapshot.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_snapshot.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_snapshot.py validate-modules:E337
lib/ansible/modules/cloud/ovirt/ovirt_snapshot.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_snapshot_facts.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_snapshot_facts.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_snapshot_facts.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_storage_connection.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_storage_connection.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_storage_connection.py validate-modules:E337
lib/ansible/modules/cloud/ovirt/ovirt_storage_connection.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_storage_domain.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_storage_domain.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_storage_domain.py validate-modules:E337
lib/ansible/modules/cloud/ovirt/ovirt_storage_domain.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_storage_domain_facts.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_storage_domain_facts.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_storage_domain_facts.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_storage_template_facts.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_storage_template_facts.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_storage_template_facts.py validate-modules:E337
lib/ansible/modules/cloud/ovirt/ovirt_storage_template_facts.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_storage_vm_facts.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_storage_vm_facts.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_storage_vm_facts.py validate-modules:E337
lib/ansible/modules/cloud/ovirt/ovirt_storage_vm_facts.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_tag.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_tag.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_tag.py validate-modules:E337
lib/ansible/modules/cloud/ovirt/ovirt_tag.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_tag_facts.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_tag_facts.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_tag_facts.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_template.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_template.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_template.py validate-modules:E337
lib/ansible/modules/cloud/ovirt/ovirt_template.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_template_facts.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_template_facts.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_template_facts.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_user.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_user.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_user.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_user_facts.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_user_facts.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_user_facts.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_vm.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_vm.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_vm.py validate-modules:E337
lib/ansible/modules/cloud/ovirt/ovirt_vm.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_vm_facts.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_vm_facts.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_vm_facts.py validate-modules:E337
lib/ansible/modules/cloud/ovirt/ovirt_vm_facts.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_vmpool.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_vmpool.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_vmpool.py validate-modules:E337
lib/ansible/modules/cloud/ovirt/ovirt_vmpool.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_vmpool_facts.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_vmpool_facts.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_vmpool_facts.py validate-modules:E338
lib/ansible/modules/cloud/ovirt/ovirt_vnic_profile.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_vnic_profile.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_vnic_profile.py validate-modules:E337
lib/ansible/modules/cloud/packet/packet_device.py validate-modules:E337
lib/ansible/modules/cloud/packet/packet_device.py validate-modules:E338
lib/ansible/modules/cloud/packet/packet_sshkey.py validate-modules:E322
lib/ansible/modules/cloud/packet/packet_sshkey.py validate-modules:E337
lib/ansible/modules/cloud/packet/packet_sshkey.py validate-modules:E338
lib/ansible/modules/cloud/podman/podman_image.py validate-modules:E322
lib/ansible/modules/cloud/podman/podman_image.py validate-modules:E325
lib/ansible/modules/cloud/podman/podman_image.py validate-modules:E337
lib/ansible/modules/cloud/podman/podman_image_info.py validate-modules:E337
lib/ansible/modules/cloud/profitbricks/profitbricks.py validate-modules:E322
lib/ansible/modules/cloud/profitbricks/profitbricks.py validate-modules:E324
lib/ansible/modules/cloud/profitbricks/profitbricks.py validate-modules:E326
lib/ansible/modules/cloud/profitbricks/profitbricks.py validate-modules:E337
lib/ansible/modules/cloud/profitbricks/profitbricks.py validate-modules:E338
lib/ansible/modules/cloud/profitbricks/profitbricks_datacenter.py validate-modules:E326
lib/ansible/modules/cloud/profitbricks/profitbricks_datacenter.py validate-modules:E337
lib/ansible/modules/cloud/profitbricks/profitbricks_datacenter.py validate-modules:E338
lib/ansible/modules/cloud/profitbricks/profitbricks_nic.py validate-modules:E324
lib/ansible/modules/cloud/profitbricks/profitbricks_nic.py validate-modules:E326
lib/ansible/modules/cloud/profitbricks/profitbricks_nic.py validate-modules:E337
lib/ansible/modules/cloud/profitbricks/profitbricks_nic.py validate-modules:E338
lib/ansible/modules/cloud/profitbricks/profitbricks_volume.py validate-modules:E322
lib/ansible/modules/cloud/profitbricks/profitbricks_volume.py validate-modules:E326
lib/ansible/modules/cloud/profitbricks/profitbricks_volume.py validate-modules:E337
lib/ansible/modules/cloud/profitbricks/profitbricks_volume.py validate-modules:E338
lib/ansible/modules/cloud/profitbricks/profitbricks_volume_attachments.py validate-modules:E326
lib/ansible/modules/cloud/profitbricks/profitbricks_volume_attachments.py validate-modules:E337
lib/ansible/modules/cloud/profitbricks/profitbricks_volume_attachments.py validate-modules:E338
lib/ansible/modules/cloud/pubnub/pubnub_blocks.py validate-modules:E324
lib/ansible/modules/cloud/pubnub/pubnub_blocks.py validate-modules:E337
lib/ansible/modules/cloud/rackspace/rax.py use-argspec-type-path # fix needed
lib/ansible/modules/cloud/rackspace/rax.py validate-modules:E322
lib/ansible/modules/cloud/rackspace/rax.py validate-modules:E324
lib/ansible/modules/cloud/rackspace/rax.py validate-modules:E337
lib/ansible/modules/cloud/rackspace/rax.py validate-modules:E338
lib/ansible/modules/cloud/rackspace/rax_cbs.py validate-modules:E324
lib/ansible/modules/cloud/rackspace/rax_cbs.py validate-modules:E337
lib/ansible/modules/cloud/rackspace/rax_cbs.py validate-modules:E338
lib/ansible/modules/cloud/rackspace/rax_cbs_attachments.py validate-modules:E324
lib/ansible/modules/cloud/rackspace/rax_cbs_attachments.py validate-modules:E337
lib/ansible/modules/cloud/rackspace/rax_cbs_attachments.py validate-modules:E338
lib/ansible/modules/cloud/rackspace/rax_cdb.py validate-modules:E324
lib/ansible/modules/cloud/rackspace/rax_cdb.py validate-modules:E326
lib/ansible/modules/cloud/rackspace/rax_cdb.py validate-modules:E337
lib/ansible/modules/cloud/rackspace/rax_cdb.py validate-modules:E338
lib/ansible/modules/cloud/rackspace/rax_cdb_database.py validate-modules:E324
lib/ansible/modules/cloud/rackspace/rax_cdb_database.py validate-modules:E337
lib/ansible/modules/cloud/rackspace/rax_cdb_database.py validate-modules:E338
lib/ansible/modules/cloud/rackspace/rax_cdb_user.py validate-modules:E324
lib/ansible/modules/cloud/rackspace/rax_cdb_user.py validate-modules:E337
lib/ansible/modules/cloud/rackspace/rax_cdb_user.py validate-modules:E338
lib/ansible/modules/cloud/rackspace/rax_clb.py validate-modules:E324
lib/ansible/modules/cloud/rackspace/rax_clb.py validate-modules:E337
lib/ansible/modules/cloud/rackspace/rax_clb.py validate-modules:E338
lib/ansible/modules/cloud/rackspace/rax_clb_nodes.py validate-modules:E322
lib/ansible/modules/cloud/rackspace/rax_clb_nodes.py validate-modules:E324
lib/ansible/modules/cloud/rackspace/rax_clb_nodes.py validate-modules:E337
lib/ansible/modules/cloud/rackspace/rax_clb_nodes.py validate-modules:E338
lib/ansible/modules/cloud/rackspace/rax_clb_ssl.py validate-modules:E324
lib/ansible/modules/cloud/rackspace/rax_clb_ssl.py validate-modules:E337
lib/ansible/modules/cloud/rackspace/rax_clb_ssl.py validate-modules:E338
lib/ansible/modules/cloud/rackspace/rax_dns.py validate-modules:E324
lib/ansible/modules/cloud/rackspace/rax_dns.py validate-modules:E337
lib/ansible/modules/cloud/rackspace/rax_dns.py validate-modules:E338
lib/ansible/modules/cloud/rackspace/rax_dns_record.py validate-modules:E324
lib/ansible/modules/cloud/rackspace/rax_dns_record.py validate-modules:E337
lib/ansible/modules/cloud/rackspace/rax_dns_record.py validate-modules:E338
lib/ansible/modules/cloud/rackspace/rax_facts.py validate-modules:E324
lib/ansible/modules/cloud/rackspace/rax_facts.py validate-modules:E337
lib/ansible/modules/cloud/rackspace/rax_facts.py validate-modules:E338
lib/ansible/modules/cloud/rackspace/rax_files.py validate-modules:E324
lib/ansible/modules/cloud/rackspace/rax_files.py validate-modules:E326
lib/ansible/modules/cloud/rackspace/rax_files.py validate-modules:E337
lib/ansible/modules/cloud/rackspace/rax_files.py validate-modules:E338
lib/ansible/modules/cloud/rackspace/rax_files_objects.py use-argspec-type-path
lib/ansible/modules/cloud/rackspace/rax_files_objects.py validate-modules:E323
lib/ansible/modules/cloud/rackspace/rax_files_objects.py validate-modules:E324
lib/ansible/modules/cloud/rackspace/rax_files_objects.py validate-modules:E337
lib/ansible/modules/cloud/rackspace/rax_files_objects.py validate-modules:E338
lib/ansible/modules/cloud/rackspace/rax_identity.py validate-modules:E324
lib/ansible/modules/cloud/rackspace/rax_identity.py validate-modules:E326
lib/ansible/modules/cloud/rackspace/rax_identity.py validate-modules:E337
lib/ansible/modules/cloud/rackspace/rax_identity.py validate-modules:E338
lib/ansible/modules/cloud/rackspace/rax_keypair.py validate-modules:E324
lib/ansible/modules/cloud/rackspace/rax_keypair.py validate-modules:E337
lib/ansible/modules/cloud/rackspace/rax_keypair.py validate-modules:E338
lib/ansible/modules/cloud/rackspace/rax_meta.py validate-modules:E324
lib/ansible/modules/cloud/rackspace/rax_meta.py validate-modules:E337
lib/ansible/modules/cloud/rackspace/rax_meta.py validate-modules:E338
lib/ansible/modules/cloud/rackspace/rax_mon_alarm.py validate-modules:E324
lib/ansible/modules/cloud/rackspace/rax_mon_alarm.py validate-modules:E337
lib/ansible/modules/cloud/rackspace/rax_mon_alarm.py validate-modules:E338
lib/ansible/modules/cloud/rackspace/rax_mon_check.py validate-modules:E324
lib/ansible/modules/cloud/rackspace/rax_mon_check.py validate-modules:E326
lib/ansible/modules/cloud/rackspace/rax_mon_check.py validate-modules:E337
lib/ansible/modules/cloud/rackspace/rax_mon_check.py validate-modules:E338
lib/ansible/modules/cloud/rackspace/rax_mon_entity.py validate-modules:E324
lib/ansible/modules/cloud/rackspace/rax_mon_entity.py validate-modules:E337
lib/ansible/modules/cloud/rackspace/rax_mon_entity.py validate-modules:E338
lib/ansible/modules/cloud/rackspace/rax_mon_notification.py validate-modules:E324
lib/ansible/modules/cloud/rackspace/rax_mon_notification.py validate-modules:E337
lib/ansible/modules/cloud/rackspace/rax_mon_notification.py validate-modules:E338
lib/ansible/modules/cloud/rackspace/rax_mon_notification_plan.py validate-modules:E324
lib/ansible/modules/cloud/rackspace/rax_mon_notification_plan.py validate-modules:E337
lib/ansible/modules/cloud/rackspace/rax_mon_notification_plan.py validate-modules:E338
lib/ansible/modules/cloud/rackspace/rax_network.py validate-modules:E324
lib/ansible/modules/cloud/rackspace/rax_network.py validate-modules:E337
lib/ansible/modules/cloud/rackspace/rax_network.py validate-modules:E338
lib/ansible/modules/cloud/rackspace/rax_queue.py validate-modules:E324
lib/ansible/modules/cloud/rackspace/rax_queue.py validate-modules:E337
lib/ansible/modules/cloud/rackspace/rax_queue.py validate-modules:E338
lib/ansible/modules/cloud/rackspace/rax_scaling_group.py use-argspec-type-path # fix needed
lib/ansible/modules/cloud/rackspace/rax_scaling_group.py validate-modules:E324
lib/ansible/modules/cloud/rackspace/rax_scaling_group.py validate-modules:E337
lib/ansible/modules/cloud/rackspace/rax_scaling_group.py validate-modules:E338
lib/ansible/modules/cloud/rackspace/rax_scaling_policy.py validate-modules:E324
lib/ansible/modules/cloud/rackspace/rax_scaling_policy.py validate-modules:E337
lib/ansible/modules/cloud/rackspace/rax_scaling_policy.py validate-modules:E338
lib/ansible/modules/cloud/scaleway/scaleway_compute.py validate-modules:E337
lib/ansible/modules/cloud/scaleway/scaleway_compute.py validate-modules:E338
lib/ansible/modules/cloud/scaleway/scaleway_image_facts.py validate-modules:E338
lib/ansible/modules/cloud/scaleway/scaleway_ip.py validate-modules:E338
lib/ansible/modules/cloud/scaleway/scaleway_ip_facts.py validate-modules:E338
lib/ansible/modules/cloud/scaleway/scaleway_lb.py validate-modules:E337
lib/ansible/modules/cloud/scaleway/scaleway_lb.py validate-modules:E338
lib/ansible/modules/cloud/scaleway/scaleway_security_group_facts.py validate-modules:E338
lib/ansible/modules/cloud/scaleway/scaleway_security_group_rule.py validate-modules:E337
lib/ansible/modules/cloud/scaleway/scaleway_server_facts.py validate-modules:E338
lib/ansible/modules/cloud/scaleway/scaleway_snapshot_facts.py validate-modules:E338
lib/ansible/modules/cloud/scaleway/scaleway_sshkey.py validate-modules:E338
lib/ansible/modules/cloud/scaleway/scaleway_user_data.py validate-modules:E337
lib/ansible/modules/cloud/scaleway/scaleway_user_data.py validate-modules:E338
lib/ansible/modules/cloud/scaleway/scaleway_volume.py validate-modules:E337
lib/ansible/modules/cloud/scaleway/scaleway_volume.py validate-modules:E338
lib/ansible/modules/cloud/scaleway/scaleway_volume_facts.py validate-modules:E338
lib/ansible/modules/cloud/smartos/imgadm.py validate-modules:E317
lib/ansible/modules/cloud/smartos/imgadm.py validate-modules:E338
lib/ansible/modules/cloud/smartos/smartos_image_facts.py validate-modules:E338
lib/ansible/modules/cloud/smartos/vmadm.py validate-modules:E322
lib/ansible/modules/cloud/smartos/vmadm.py validate-modules:E324
lib/ansible/modules/cloud/smartos/vmadm.py validate-modules:E326
lib/ansible/modules/cloud/smartos/vmadm.py validate-modules:E337
lib/ansible/modules/cloud/softlayer/sl_vm.py validate-modules:E324
lib/ansible/modules/cloud/softlayer/sl_vm.py validate-modules:E326
lib/ansible/modules/cloud/softlayer/sl_vm.py validate-modules:E337
lib/ansible/modules/cloud/spotinst/spotinst_aws_elastigroup.py validate-modules:E322
lib/ansible/modules/cloud/spotinst/spotinst_aws_elastigroup.py validate-modules:E323
lib/ansible/modules/cloud/spotinst/spotinst_aws_elastigroup.py validate-modules:E324
lib/ansible/modules/cloud/spotinst/spotinst_aws_elastigroup.py validate-modules:E326
lib/ansible/modules/cloud/spotinst/spotinst_aws_elastigroup.py validate-modules:E337
lib/ansible/modules/cloud/spotinst/spotinst_aws_elastigroup.py validate-modules:E338
lib/ansible/modules/cloud/univention/udm_dns_record.py validate-modules:E326
lib/ansible/modules/cloud/univention/udm_dns_record.py validate-modules:E337
lib/ansible/modules/cloud/univention/udm_dns_zone.py validate-modules:E322
lib/ansible/modules/cloud/univention/udm_dns_zone.py validate-modules:E326
lib/ansible/modules/cloud/univention/udm_dns_zone.py validate-modules:E337
lib/ansible/modules/cloud/univention/udm_group.py validate-modules:E324
lib/ansible/modules/cloud/univention/udm_group.py validate-modules:E337
lib/ansible/modules/cloud/univention/udm_share.py validate-modules:E322
lib/ansible/modules/cloud/univention/udm_share.py validate-modules:E323
lib/ansible/modules/cloud/univention/udm_share.py validate-modules:E324
lib/ansible/modules/cloud/univention/udm_share.py validate-modules:E326
lib/ansible/modules/cloud/univention/udm_share.py validate-modules:E337
lib/ansible/modules/cloud/univention/udm_user.py validate-modules:E324
lib/ansible/modules/cloud/univention/udm_user.py validate-modules:E326
lib/ansible/modules/cloud/univention/udm_user.py validate-modules:E337
lib/ansible/modules/cloud/vmware/vca_fw.py validate-modules:E322
lib/ansible/modules/cloud/vmware/vca_fw.py validate-modules:E324
lib/ansible/modules/cloud/vmware/vca_fw.py validate-modules:E337
lib/ansible/modules/cloud/vmware/vca_fw.py validate-modules:E338
lib/ansible/modules/cloud/vmware/vca_nat.py validate-modules:E322
lib/ansible/modules/cloud/vmware/vca_nat.py validate-modules:E324
lib/ansible/modules/cloud/vmware/vca_nat.py validate-modules:E337
lib/ansible/modules/cloud/vmware/vca_nat.py validate-modules:E338
lib/ansible/modules/cloud/vmware/vca_vapp.py validate-modules:E322
lib/ansible/modules/cloud/vmware/vca_vapp.py validate-modules:E324
lib/ansible/modules/cloud/vmware/vca_vapp.py validate-modules:E338
lib/ansible/modules/cloud/vmware/vmware_cluster.py validate-modules:E337
lib/ansible/modules/cloud/vmware/vmware_deploy_ovf.py use-argspec-type-path
lib/ansible/modules/cloud/vmware/vmware_deploy_ovf.py validate-modules:E337
lib/ansible/modules/cloud/vmware/vmware_dvs_host.py validate-modules:E322
lib/ansible/modules/cloud/vmware/vmware_dvs_host.py validate-modules:E337
lib/ansible/modules/cloud/vmware/vmware_dvs_host.py validate-modules:E340
lib/ansible/modules/cloud/vmware/vmware_dvs_portgroup.py validate-modules:E322
lib/ansible/modules/cloud/vmware/vmware_dvs_portgroup.py validate-modules:E324
lib/ansible/modules/cloud/vmware/vmware_dvs_portgroup.py validate-modules:E326
lib/ansible/modules/cloud/vmware/vmware_dvs_portgroup.py validate-modules:E337
lib/ansible/modules/cloud/vmware/vmware_dvs_portgroup.py validate-modules:E340
lib/ansible/modules/cloud/vmware/vmware_dvswitch.py validate-modules:E322
lib/ansible/modules/cloud/vmware/vmware_dvswitch.py validate-modules:E337
lib/ansible/modules/cloud/vmware/vmware_dvswitch.py validate-modules:E340
lib/ansible/modules/cloud/vmware/vmware_dvswitch_nioc.py validate-modules:E322
lib/ansible/modules/cloud/vmware/vmware_dvswitch_nioc.py validate-modules:E324
lib/ansible/modules/cloud/vmware/vmware_dvswitch_nioc.py validate-modules:E326
lib/ansible/modules/cloud/vmware/vmware_dvswitch_nioc.py validate-modules:E337
lib/ansible/modules/cloud/vmware/vmware_dvswitch_nioc.py validate-modules:E340
lib/ansible/modules/cloud/vmware/vmware_dvswitch_uplink_pg.py validate-modules:E322
lib/ansible/modules/cloud/vmware/vmware_dvswitch_uplink_pg.py validate-modules:E324
lib/ansible/modules/cloud/vmware/vmware_dvswitch_uplink_pg.py validate-modules:E326
lib/ansible/modules/cloud/vmware/vmware_dvswitch_uplink_pg.py validate-modules:E337
lib/ansible/modules/cloud/vmware/vmware_dvswitch_uplink_pg.py validate-modules:E340
lib/ansible/modules/cloud/vmware/vmware_guest.py validate-modules:E322
lib/ansible/modules/cloud/vmware/vmware_guest.py validate-modules:E337
lib/ansible/modules/cloud/vmware/vmware_guest_custom_attributes.py validate-modules:E322
lib/ansible/modules/cloud/vmware/vmware_guest_custom_attributes.py validate-modules:E337
lib/ansible/modules/cloud/vmware/vmware_guest_custom_attributes.py validate-modules:E340
lib/ansible/modules/cloud/vmware/vmware_guest_file_operation.py validate-modules:E322
lib/ansible/modules/cloud/vmware/vmware_guest_file_operation.py validate-modules:E326
lib/ansible/modules/cloud/vmware/vmware_guest_file_operation.py validate-modules:E337
lib/ansible/modules/cloud/vmware/vmware_guest_file_operation.py validate-modules:E340
lib/ansible/modules/cloud/vmware/vmware_portgroup.py validate-modules:E322
lib/ansible/modules/cloud/vmware/vmware_portgroup.py validate-modules:E326
lib/ansible/modules/cloud/vmware/vmware_portgroup.py validate-modules:E337
lib/ansible/modules/cloud/vmware/vmware_portgroup.py validate-modules:E340
lib/ansible/modules/cloud/vmware/vmware_vcenter_settings.py validate-modules:E322
lib/ansible/modules/cloud/vmware/vmware_vcenter_settings.py validate-modules:E324
lib/ansible/modules/cloud/vmware/vmware_vcenter_settings.py validate-modules:E337
lib/ansible/modules/cloud/vmware/vmware_vcenter_settings.py validate-modules:E340
lib/ansible/modules/cloud/vmware/vmware_vcenter_statistics.py validate-modules:E322
lib/ansible/modules/cloud/vmware/vmware_vcenter_statistics.py validate-modules:E324
lib/ansible/modules/cloud/vmware/vmware_vcenter_statistics.py validate-modules:E326
lib/ansible/modules/cloud/vmware/vmware_vcenter_statistics.py validate-modules:E337
lib/ansible/modules/cloud/vmware/vmware_vcenter_statistics.py validate-modules:E340
lib/ansible/modules/cloud/vmware/vmware_vmkernel.py validate-modules:E322
lib/ansible/modules/cloud/vmware/vmware_vmkernel.py validate-modules:E324
lib/ansible/modules/cloud/vmware/vmware_vmkernel.py validate-modules:E326
lib/ansible/modules/cloud/vmware/vmware_vmkernel.py validate-modules:E337
lib/ansible/modules/cloud/vmware/vmware_vmkernel.py validate-modules:E340
lib/ansible/modules/cloud/vmware/vmware_vspan_session.py validate-modules:E322
lib/ansible/modules/cloud/vmware/vmware_vspan_session.py validate-modules:E337
lib/ansible/modules/cloud/vmware/vmware_vspan_session.py validate-modules:E340
lib/ansible/modules/cloud/vmware/vsphere_copy.py validate-modules:E322
lib/ansible/modules/cloud/vmware/vsphere_copy.py validate-modules:E338
lib/ansible/modules/cloud/vultr/vultr_block_storage.py validate-modules:E337
lib/ansible/modules/cloud/vultr/vultr_block_storage.py validate-modules:E338
lib/ansible/modules/cloud/vultr/vultr_dns_domain.py validate-modules:E338
lib/ansible/modules/cloud/vultr/vultr_dns_record.py validate-modules:E337
lib/ansible/modules/cloud/vultr/vultr_dns_record.py validate-modules:E338
lib/ansible/modules/cloud/vultr/vultr_firewall_group.py validate-modules:E338
lib/ansible/modules/cloud/vultr/vultr_firewall_rule.py validate-modules:E337
lib/ansible/modules/cloud/vultr/vultr_firewall_rule.py validate-modules:E338
lib/ansible/modules/cloud/vultr/vultr_network.py validate-modules:E338
lib/ansible/modules/cloud/webfaction/webfaction_app.py validate-modules:E338
lib/ansible/modules/cloud/webfaction/webfaction_db.py validate-modules:E338
lib/ansible/modules/cloud/webfaction/webfaction_domain.py validate-modules:E337
lib/ansible/modules/cloud/webfaction/webfaction_domain.py validate-modules:E338
lib/ansible/modules/cloud/webfaction/webfaction_mailbox.py validate-modules:E338
lib/ansible/modules/cloud/webfaction/webfaction_site.py validate-modules:E337
lib/ansible/modules/cloud/webfaction/webfaction_site.py validate-modules:E338
lib/ansible/modules/cloud/xenserver/xenserver_guest.py validate-modules:E322
lib/ansible/modules/cloud/xenserver/xenserver_guest.py validate-modules:E326
lib/ansible/modules/cloud/xenserver/xenserver_guest.py validate-modules:E337
lib/ansible/modules/cloud/xenserver/xenserver_guest.py validate-modules:E340
lib/ansible/modules/clustering/consul.py validate-modules:E322
lib/ansible/modules/clustering/consul.py validate-modules:E338
lib/ansible/modules/clustering/consul_acl.py validate-modules:E338
lib/ansible/modules/clustering/consul_kv.py validate-modules:E337
lib/ansible/modules/clustering/etcd3.py validate-modules:E326
lib/ansible/modules/clustering/etcd3.py validate-modules:E337
lib/ansible/modules/clustering/k8s/k8s.py validate-modules:E324
lib/ansible/modules/clustering/k8s/k8s.py validate-modules:E337
lib/ansible/modules/clustering/k8s/k8s.py validate-modules:E338
lib/ansible/modules/clustering/k8s/k8s_auth.py validate-modules:E337
lib/ansible/modules/clustering/k8s/k8s_auth.py validate-modules:E338
lib/ansible/modules/clustering/k8s/k8s_info.py validate-modules:E337
lib/ansible/modules/clustering/k8s/k8s_info.py validate-modules:E338
lib/ansible/modules/clustering/k8s/k8s_scale.py validate-modules:E337
lib/ansible/modules/clustering/k8s/k8s_scale.py validate-modules:E338
lib/ansible/modules/clustering/k8s/k8s_service.py validate-modules:E337
lib/ansible/modules/clustering/k8s/k8s_service.py validate-modules:E338
lib/ansible/modules/clustering/pacemaker_cluster.py validate-modules:E337
lib/ansible/modules/clustering/znode.py validate-modules:E326
lib/ansible/modules/clustering/znode.py validate-modules:E337
lib/ansible/modules/clustering/znode.py validate-modules:E338
lib/ansible/modules/commands/command.py validate-modules:E322
lib/ansible/modules/commands/command.py validate-modules:E323
lib/ansible/modules/commands/command.py validate-modules:E338
lib/ansible/modules/commands/expect.py validate-modules:E338
lib/ansible/modules/crypto/certificate_complete_chain.py use-argspec-type-path # would need something like type=list(path)
lib/ansible/modules/database/influxdb/influxdb_database.py validate-modules:E324
lib/ansible/modules/database/influxdb/influxdb_database.py validate-modules:E337
lib/ansible/modules/database/influxdb/influxdb_query.py validate-modules:E324
lib/ansible/modules/database/influxdb/influxdb_query.py validate-modules:E337
lib/ansible/modules/database/influxdb/influxdb_retention_policy.py validate-modules:E324
lib/ansible/modules/database/influxdb/influxdb_retention_policy.py validate-modules:E337
lib/ansible/modules/database/influxdb/influxdb_user.py validate-modules:E324
lib/ansible/modules/database/influxdb/influxdb_user.py validate-modules:E337
lib/ansible/modules/database/influxdb/influxdb_write.py validate-modules:E324
lib/ansible/modules/database/influxdb/influxdb_write.py validate-modules:E337
lib/ansible/modules/database/misc/elasticsearch_plugin.py validate-modules:E337
lib/ansible/modules/database/misc/elasticsearch_plugin.py validate-modules:E338
lib/ansible/modules/database/misc/kibana_plugin.py validate-modules:E337
lib/ansible/modules/database/misc/kibana_plugin.py validate-modules:E338
lib/ansible/modules/database/misc/redis.py validate-modules:E337
lib/ansible/modules/database/misc/riak.py validate-modules:E324
lib/ansible/modules/database/misc/riak.py validate-modules:E337
lib/ansible/modules/database/misc/riak.py validate-modules:E338
lib/ansible/modules/database/mongodb/mongodb_parameter.py use-argspec-type-path
lib/ansible/modules/database/mongodb/mongodb_parameter.py validate-modules:E317
lib/ansible/modules/database/mongodb/mongodb_parameter.py validate-modules:E323
lib/ansible/modules/database/mongodb/mongodb_parameter.py validate-modules:E326
lib/ansible/modules/database/mongodb/mongodb_parameter.py validate-modules:E337
lib/ansible/modules/database/mongodb/mongodb_parameter.py validate-modules:E338
lib/ansible/modules/database/mongodb/mongodb_replicaset.py use-argspec-type-path
lib/ansible/modules/database/mongodb/mongodb_shard.py use-argspec-type-path
lib/ansible/modules/database/mongodb/mongodb_shard.py validate-modules:E337
lib/ansible/modules/database/mongodb/mongodb_shard.py validate-modules:E338
lib/ansible/modules/database/mongodb/mongodb_user.py use-argspec-type-path
lib/ansible/modules/database/mongodb/mongodb_user.py validate-modules:E322
lib/ansible/modules/database/mongodb/mongodb_user.py validate-modules:E337
lib/ansible/modules/database/mongodb/mongodb_user.py validate-modules:E338
lib/ansible/modules/database/mssql/mssql_db.py validate-modules:E338
lib/ansible/modules/database/mysql/mysql_db.py validate-modules:E210
lib/ansible/modules/database/mysql/mysql_db.py validate-modules:E337
lib/ansible/modules/database/mysql/mysql_user.py validate-modules:E322
lib/ansible/modules/database/mysql/mysql_user.py validate-modules:E337
lib/ansible/modules/database/postgresql/postgresql_db.py use-argspec-type-path
lib/ansible/modules/database/postgresql/postgresql_db.py validate-modules:E210
lib/ansible/modules/database/postgresql/postgresql_db.py validate-modules:E337
lib/ansible/modules/database/postgresql/postgresql_ext.py validate-modules:E337
lib/ansible/modules/database/postgresql/postgresql_pg_hba.py validate-modules:E337
lib/ansible/modules/database/postgresql/postgresql_schema.py validate-modules:E337
lib/ansible/modules/database/postgresql/postgresql_tablespace.py validate-modules:E337
lib/ansible/modules/database/postgresql/postgresql_user.py validate-modules:E326
lib/ansible/modules/database/postgresql/postgresql_user.py validate-modules:E337
lib/ansible/modules/database/proxysql/proxysql_backend_servers.py validate-modules:E322
lib/ansible/modules/database/proxysql/proxysql_backend_servers.py validate-modules:E337
lib/ansible/modules/database/proxysql/proxysql_backend_servers.py validate-modules:E338
lib/ansible/modules/database/proxysql/proxysql_global_variables.py validate-modules:E322
lib/ansible/modules/database/proxysql/proxysql_global_variables.py validate-modules:E337
lib/ansible/modules/database/proxysql/proxysql_global_variables.py validate-modules:E338
lib/ansible/modules/database/proxysql/proxysql_manage_config.py validate-modules:E322
lib/ansible/modules/database/proxysql/proxysql_manage_config.py validate-modules:E338
lib/ansible/modules/database/proxysql/proxysql_mysql_users.py validate-modules:E322
lib/ansible/modules/database/proxysql/proxysql_mysql_users.py validate-modules:E337
lib/ansible/modules/database/proxysql/proxysql_mysql_users.py validate-modules:E338
lib/ansible/modules/database/proxysql/proxysql_query_rules.py validate-modules:E322
lib/ansible/modules/database/proxysql/proxysql_query_rules.py validate-modules:E337
lib/ansible/modules/database/proxysql/proxysql_query_rules.py validate-modules:E338
lib/ansible/modules/database/proxysql/proxysql_replication_hostgroups.py validate-modules:E322
lib/ansible/modules/database/proxysql/proxysql_replication_hostgroups.py validate-modules:E337
lib/ansible/modules/database/proxysql/proxysql_replication_hostgroups.py validate-modules:E338
lib/ansible/modules/database/proxysql/proxysql_scheduler.py validate-modules:E322
lib/ansible/modules/database/proxysql/proxysql_scheduler.py validate-modules:E337
lib/ansible/modules/database/proxysql/proxysql_scheduler.py validate-modules:E338
lib/ansible/modules/database/vertica/vertica_configuration.py validate-modules:E338
lib/ansible/modules/database/vertica/vertica_facts.py validate-modules:E338
lib/ansible/modules/database/vertica/vertica_role.py validate-modules:E322
lib/ansible/modules/database/vertica/vertica_role.py validate-modules:E338
lib/ansible/modules/database/vertica/vertica_schema.py validate-modules:E322
lib/ansible/modules/database/vertica/vertica_schema.py validate-modules:E338
lib/ansible/modules/database/vertica/vertica_user.py validate-modules:E322
lib/ansible/modules/database/vertica/vertica_user.py validate-modules:E338
lib/ansible/modules/files/acl.py validate-modules:E337
lib/ansible/modules/files/archive.py use-argspec-type-path # fix needed
lib/ansible/modules/files/assemble.py validate-modules:E323
lib/ansible/modules/files/blockinfile.py validate-modules:E324
lib/ansible/modules/files/blockinfile.py validate-modules:E326
lib/ansible/modules/files/copy.py pylint:blacklisted-name
lib/ansible/modules/files/copy.py validate-modules:E322
lib/ansible/modules/files/copy.py validate-modules:E323
lib/ansible/modules/files/copy.py validate-modules:E324
lib/ansible/modules/files/file.py validate-modules:E322
lib/ansible/modules/files/file.py validate-modules:E324
lib/ansible/modules/files/find.py use-argspec-type-path # fix needed
lib/ansible/modules/files/find.py validate-modules:E337
lib/ansible/modules/files/ini_file.py validate-modules:E323
lib/ansible/modules/files/iso_extract.py validate-modules:E324
lib/ansible/modules/files/lineinfile.py validate-modules:E323
lib/ansible/modules/files/lineinfile.py validate-modules:E324
lib/ansible/modules/files/lineinfile.py validate-modules:E326
lib/ansible/modules/files/patch.py pylint:blacklisted-name
lib/ansible/modules/files/replace.py validate-modules:E323
lib/ansible/modules/files/stat.py validate-modules:E322
lib/ansible/modules/files/stat.py validate-modules:E336
lib/ansible/modules/files/stat.py validate-modules:E337
lib/ansible/modules/files/synchronize.py pylint:blacklisted-name
lib/ansible/modules/files/synchronize.py use-argspec-type-path
lib/ansible/modules/files/synchronize.py validate-modules:E322
lib/ansible/modules/files/synchronize.py validate-modules:E323
lib/ansible/modules/files/synchronize.py validate-modules:E324
lib/ansible/modules/files/synchronize.py validate-modules:E337
lib/ansible/modules/files/unarchive.py validate-modules:E323
lib/ansible/modules/identity/cyberark/cyberark_authentication.py validate-modules:E337
lib/ansible/modules/identity/ipa/ipa_config.py validate-modules:E337
lib/ansible/modules/identity/ipa/ipa_dnsrecord.py validate-modules:E337
lib/ansible/modules/identity/ipa/ipa_dnszone.py validate-modules:E337
lib/ansible/modules/identity/ipa/ipa_group.py validate-modules:E337
lib/ansible/modules/identity/ipa/ipa_hbacrule.py validate-modules:E337
lib/ansible/modules/identity/ipa/ipa_host.py validate-modules:E337
lib/ansible/modules/identity/ipa/ipa_hostgroup.py validate-modules:E337
lib/ansible/modules/identity/ipa/ipa_role.py validate-modules:E337
lib/ansible/modules/identity/ipa/ipa_service.py validate-modules:E337
lib/ansible/modules/identity/ipa/ipa_subca.py validate-modules:E337
lib/ansible/modules/identity/ipa/ipa_sudocmd.py validate-modules:E337
lib/ansible/modules/identity/ipa/ipa_sudocmdgroup.py validate-modules:E337
lib/ansible/modules/identity/ipa/ipa_sudorule.py validate-modules:E337
lib/ansible/modules/identity/ipa/ipa_user.py validate-modules:E337
lib/ansible/modules/identity/ipa/ipa_vault.py validate-modules:E337
lib/ansible/modules/identity/keycloak/keycloak_client.py validate-modules:E324
lib/ansible/modules/identity/keycloak/keycloak_client.py validate-modules:E337
lib/ansible/modules/identity/keycloak/keycloak_client.py validate-modules:E338
lib/ansible/modules/identity/keycloak/keycloak_clienttemplate.py validate-modules:E324
lib/ansible/modules/identity/keycloak/keycloak_clienttemplate.py validate-modules:E337
lib/ansible/modules/identity/keycloak/keycloak_clienttemplate.py validate-modules:E338
lib/ansible/modules/identity/onepassword_facts.py validate-modules:E337
lib/ansible/modules/identity/opendj/opendj_backendprop.py validate-modules:E337
lib/ansible/modules/identity/opendj/opendj_backendprop.py validate-modules:E338
lib/ansible/modules/messaging/rabbitmq/rabbitmq_binding.py validate-modules:E324
lib/ansible/modules/messaging/rabbitmq/rabbitmq_binding.py validate-modules:E337
lib/ansible/modules/messaging/rabbitmq/rabbitmq_exchange.py validate-modules:E324
lib/ansible/modules/messaging/rabbitmq/rabbitmq_exchange.py validate-modules:E326
lib/ansible/modules/messaging/rabbitmq/rabbitmq_exchange.py validate-modules:E337
lib/ansible/modules/messaging/rabbitmq/rabbitmq_global_parameter.py validate-modules:E337
lib/ansible/modules/messaging/rabbitmq/rabbitmq_global_parameter.py validate-modules:E338
lib/ansible/modules/messaging/rabbitmq/rabbitmq_parameter.py validate-modules:E338
lib/ansible/modules/messaging/rabbitmq/rabbitmq_plugin.py validate-modules:E338
lib/ansible/modules/messaging/rabbitmq/rabbitmq_policy.py validate-modules:E324
lib/ansible/modules/messaging/rabbitmq/rabbitmq_policy.py validate-modules:E337
lib/ansible/modules/messaging/rabbitmq/rabbitmq_policy.py validate-modules:E338
lib/ansible/modules/messaging/rabbitmq/rabbitmq_queue.py validate-modules:E324
lib/ansible/modules/messaging/rabbitmq/rabbitmq_queue.py validate-modules:E327
lib/ansible/modules/messaging/rabbitmq/rabbitmq_queue.py validate-modules:E337
lib/ansible/modules/messaging/rabbitmq/rabbitmq_user.py validate-modules:E337
lib/ansible/modules/messaging/rabbitmq/rabbitmq_user.py validate-modules:E338
lib/ansible/modules/messaging/rabbitmq/rabbitmq_vhost.py validate-modules:E338
lib/ansible/modules/messaging/rabbitmq/rabbitmq_vhost_limits.py validate-modules:E337
lib/ansible/modules/monitoring/airbrake_deployment.py validate-modules:E324
lib/ansible/modules/monitoring/airbrake_deployment.py validate-modules:E338
lib/ansible/modules/monitoring/bigpanda.py validate-modules:E322
lib/ansible/modules/monitoring/bigpanda.py validate-modules:E324
lib/ansible/modules/monitoring/bigpanda.py validate-modules:E338
lib/ansible/modules/monitoring/circonus_annotation.py validate-modules:E327
lib/ansible/modules/monitoring/circonus_annotation.py validate-modules:E337
lib/ansible/modules/monitoring/circonus_annotation.py validate-modules:E338
lib/ansible/modules/monitoring/datadog_event.py validate-modules:E324
lib/ansible/modules/monitoring/datadog_event.py validate-modules:E327
lib/ansible/modules/monitoring/datadog_event.py validate-modules:E337
lib/ansible/modules/monitoring/datadog_event.py validate-modules:E338
lib/ansible/modules/monitoring/datadog_monitor.py validate-modules:E324
lib/ansible/modules/monitoring/datadog_monitor.py validate-modules:E337
lib/ansible/modules/monitoring/grafana_dashboard.py validate-modules:E337
lib/ansible/modules/monitoring/grafana_dashboard.py validate-modules:E338
lib/ansible/modules/monitoring/grafana_datasource.py validate-modules:E324
lib/ansible/modules/monitoring/grafana_datasource.py validate-modules:E337
lib/ansible/modules/monitoring/grafana_datasource.py validate-modules:E338
lib/ansible/modules/monitoring/grafana_plugin.py validate-modules:E337
lib/ansible/modules/monitoring/grafana_plugin.py validate-modules:E338
lib/ansible/modules/monitoring/honeybadger_deployment.py validate-modules:E338
lib/ansible/modules/monitoring/icinga2_feature.py validate-modules:E337
lib/ansible/modules/monitoring/icinga2_host.py validate-modules:E322
lib/ansible/modules/monitoring/icinga2_host.py validate-modules:E324
lib/ansible/modules/monitoring/icinga2_host.py validate-modules:E337
lib/ansible/modules/monitoring/icinga2_host.py validate-modules:E338
lib/ansible/modules/monitoring/librato_annotation.py validate-modules:E337
lib/ansible/modules/monitoring/librato_annotation.py validate-modules:E338
lib/ansible/modules/monitoring/logentries.py validate-modules:E322
lib/ansible/modules/monitoring/logentries.py validate-modules:E326
lib/ansible/modules/monitoring/logentries.py validate-modules:E337
lib/ansible/modules/monitoring/logentries.py validate-modules:E338
lib/ansible/modules/monitoring/logicmonitor.py validate-modules:E317
lib/ansible/modules/monitoring/logicmonitor.py validate-modules:E324
lib/ansible/modules/monitoring/logicmonitor.py validate-modules:E326
lib/ansible/modules/monitoring/logicmonitor.py validate-modules:E337
lib/ansible/modules/monitoring/logicmonitor.py validate-modules:E338
lib/ansible/modules/monitoring/logicmonitor_facts.py validate-modules:E317
lib/ansible/modules/monitoring/logicmonitor_facts.py validate-modules:E324
lib/ansible/modules/monitoring/logicmonitor_facts.py validate-modules:E338
lib/ansible/modules/monitoring/logstash_plugin.py validate-modules:E337
lib/ansible/modules/monitoring/logstash_plugin.py validate-modules:E338
lib/ansible/modules/monitoring/monit.py validate-modules:E337
lib/ansible/modules/monitoring/monit.py validate-modules:E338
lib/ansible/modules/monitoring/nagios.py validate-modules:E317
lib/ansible/modules/monitoring/nagios.py validate-modules:E324
lib/ansible/modules/monitoring/nagios.py validate-modules:E337
lib/ansible/modules/monitoring/nagios.py validate-modules:E338
lib/ansible/modules/monitoring/newrelic_deployment.py validate-modules:E338
lib/ansible/modules/monitoring/pagerduty.py validate-modules:E324
lib/ansible/modules/monitoring/pagerduty.py validate-modules:E337
lib/ansible/modules/monitoring/pagerduty.py validate-modules:E338
lib/ansible/modules/monitoring/pagerduty_alert.py validate-modules:E338
lib/ansible/modules/monitoring/pingdom.py validate-modules:E326
lib/ansible/modules/monitoring/pingdom.py validate-modules:E338
lib/ansible/modules/monitoring/rollbar_deployment.py validate-modules:E338
lib/ansible/modules/monitoring/sensu_check.py validate-modules:E324
lib/ansible/modules/monitoring/sensu_check.py validate-modules:E337
lib/ansible/modules/monitoring/sensu_client.py validate-modules:E324
lib/ansible/modules/monitoring/sensu_client.py validate-modules:E337
lib/ansible/modules/monitoring/sensu_handler.py validate-modules:E326
lib/ansible/modules/monitoring/sensu_handler.py validate-modules:E337
lib/ansible/modules/monitoring/sensu_silence.py validate-modules:E337
lib/ansible/modules/monitoring/sensu_silence.py validate-modules:E338
lib/ansible/modules/monitoring/sensu_subscription.py validate-modules:E337
lib/ansible/modules/monitoring/spectrum_device.py validate-modules:E337
lib/ansible/modules/monitoring/spectrum_device.py validate-modules:E338
lib/ansible/modules/monitoring/stackdriver.py validate-modules:E338
lib/ansible/modules/monitoring/statusio_maintenance.py pylint:blacklisted-name
lib/ansible/modules/monitoring/statusio_maintenance.py validate-modules:E337
lib/ansible/modules/monitoring/statusio_maintenance.py validate-modules:E338
lib/ansible/modules/monitoring/uptimerobot.py validate-modules:E338
lib/ansible/modules/monitoring/zabbix/zabbix_action.py validate-modules:E322
lib/ansible/modules/monitoring/zabbix/zabbix_action.py validate-modules:E324
lib/ansible/modules/monitoring/zabbix/zabbix_action.py validate-modules:E326
lib/ansible/modules/monitoring/zabbix/zabbix_action.py validate-modules:E327
lib/ansible/modules/monitoring/zabbix/zabbix_action.py validate-modules:E337
lib/ansible/modules/monitoring/zabbix/zabbix_action.py validate-modules:E340
lib/ansible/modules/monitoring/zabbix/zabbix_group.py validate-modules:E337
lib/ansible/modules/monitoring/zabbix/zabbix_group.py validate-modules:E338
lib/ansible/modules/monitoring/zabbix/zabbix_group_info.py validate-modules:E337
lib/ansible/modules/monitoring/zabbix/zabbix_host.py validate-modules:E337
lib/ansible/modules/monitoring/zabbix/zabbix_host.py validate-modules:E338
lib/ansible/modules/monitoring/zabbix/zabbix_host_info.py validate-modules:E337
lib/ansible/modules/monitoring/zabbix/zabbix_hostmacro.py validate-modules:E337
lib/ansible/modules/monitoring/zabbix/zabbix_hostmacro.py validate-modules:E338
lib/ansible/modules/monitoring/zabbix/zabbix_maintenance.py pylint:blacklisted-name
lib/ansible/modules/monitoring/zabbix/zabbix_maintenance.py validate-modules:E317
lib/ansible/modules/monitoring/zabbix/zabbix_maintenance.py validate-modules:E337
lib/ansible/modules/monitoring/zabbix/zabbix_maintenance.py validate-modules:E338
lib/ansible/modules/monitoring/zabbix/zabbix_map.py validate-modules:E337
lib/ansible/modules/monitoring/zabbix/zabbix_map.py validate-modules:E338
lib/ansible/modules/monitoring/zabbix/zabbix_proxy.py validate-modules:E337
lib/ansible/modules/monitoring/zabbix/zabbix_proxy.py validate-modules:E338
lib/ansible/modules/monitoring/zabbix/zabbix_screen.py validate-modules:E327
lib/ansible/modules/monitoring/zabbix/zabbix_template.py validate-modules:E337
lib/ansible/modules/monitoring/zabbix/zabbix_template.py validate-modules:E338
lib/ansible/modules/net_tools/basics/get_url.py validate-modules:E337
lib/ansible/modules/net_tools/basics/uri.py pylint:blacklisted-name
lib/ansible/modules/net_tools/basics/uri.py validate-modules:E337
lib/ansible/modules/net_tools/cloudflare_dns.py validate-modules:E337
lib/ansible/modules/net_tools/dnsmadeeasy.py validate-modules:E337
lib/ansible/modules/net_tools/dnsmadeeasy.py validate-modules:E338
lib/ansible/modules/net_tools/ip_netns.py validate-modules:E338
lib/ansible/modules/net_tools/ipinfoio_facts.py validate-modules:E337
lib/ansible/modules/net_tools/ipinfoio_facts.py validate-modules:E338
lib/ansible/modules/net_tools/ldap/ldap_attr.py validate-modules:E337
lib/ansible/modules/net_tools/ldap/ldap_entry.py validate-modules:E337
lib/ansible/modules/net_tools/ldap/ldap_entry.py validate-modules:E338
lib/ansible/modules/net_tools/ldap/ldap_passwd.py validate-modules:E338
lib/ansible/modules/net_tools/netbox/netbox_device.py validate-modules:E337
lib/ansible/modules/net_tools/netbox/netbox_device.py validate-modules:E338
lib/ansible/modules/net_tools/netbox/netbox_interface.py validate-modules:E337
lib/ansible/modules/net_tools/netbox/netbox_ip_address.py validate-modules:E337
lib/ansible/modules/net_tools/netbox/netbox_ip_address.py validate-modules:E338
lib/ansible/modules/net_tools/netbox/netbox_prefix.py validate-modules:E337
lib/ansible/modules/net_tools/netbox/netbox_prefix.py validate-modules:E338
lib/ansible/modules/net_tools/netbox/netbox_site.py validate-modules:E337
lib/ansible/modules/net_tools/netcup_dns.py validate-modules:E337
lib/ansible/modules/net_tools/netcup_dns.py validate-modules:E338
lib/ansible/modules/net_tools/nios/nios_a_record.py validate-modules:E322
lib/ansible/modules/net_tools/nios/nios_a_record.py validate-modules:E324
lib/ansible/modules/net_tools/nios/nios_a_record.py validate-modules:E337
lib/ansible/modules/net_tools/nios/nios_a_record.py validate-modules:E338
lib/ansible/modules/net_tools/nios/nios_aaaa_record.py validate-modules:E322
lib/ansible/modules/net_tools/nios/nios_aaaa_record.py validate-modules:E324
lib/ansible/modules/net_tools/nios/nios_aaaa_record.py validate-modules:E337
lib/ansible/modules/net_tools/nios/nios_aaaa_record.py validate-modules:E338
lib/ansible/modules/net_tools/nios/nios_cname_record.py validate-modules:E322
lib/ansible/modules/net_tools/nios/nios_cname_record.py validate-modules:E324
lib/ansible/modules/net_tools/nios/nios_cname_record.py validate-modules:E337
lib/ansible/modules/net_tools/nios/nios_cname_record.py validate-modules:E338
lib/ansible/modules/net_tools/nios/nios_dns_view.py validate-modules:E322
lib/ansible/modules/net_tools/nios/nios_dns_view.py validate-modules:E324
lib/ansible/modules/net_tools/nios/nios_dns_view.py validate-modules:E337
lib/ansible/modules/net_tools/nios/nios_dns_view.py validate-modules:E338
lib/ansible/modules/net_tools/nios/nios_fixed_address.py validate-modules:E322
lib/ansible/modules/net_tools/nios/nios_fixed_address.py validate-modules:E324
lib/ansible/modules/net_tools/nios/nios_fixed_address.py validate-modules:E337
lib/ansible/modules/net_tools/nios/nios_fixed_address.py validate-modules:E338
lib/ansible/modules/net_tools/nios/nios_host_record.py validate-modules:E322
lib/ansible/modules/net_tools/nios/nios_host_record.py validate-modules:E323
lib/ansible/modules/net_tools/nios/nios_host_record.py validate-modules:E324
lib/ansible/modules/net_tools/nios/nios_host_record.py validate-modules:E337
lib/ansible/modules/net_tools/nios/nios_host_record.py validate-modules:E338
lib/ansible/modules/net_tools/nios/nios_member.py validate-modules:E322
lib/ansible/modules/net_tools/nios/nios_member.py validate-modules:E324
lib/ansible/modules/net_tools/nios/nios_member.py validate-modules:E337
lib/ansible/modules/net_tools/nios/nios_member.py validate-modules:E338
lib/ansible/modules/net_tools/nios/nios_mx_record.py validate-modules:E322
lib/ansible/modules/net_tools/nios/nios_mx_record.py validate-modules:E324
lib/ansible/modules/net_tools/nios/nios_mx_record.py validate-modules:E337
lib/ansible/modules/net_tools/nios/nios_mx_record.py validate-modules:E338
lib/ansible/modules/net_tools/nios/nios_naptr_record.py validate-modules:E322
lib/ansible/modules/net_tools/nios/nios_naptr_record.py validate-modules:E324
lib/ansible/modules/net_tools/nios/nios_naptr_record.py validate-modules:E337
lib/ansible/modules/net_tools/nios/nios_naptr_record.py validate-modules:E338
lib/ansible/modules/net_tools/nios/nios_network.py validate-modules:E322
lib/ansible/modules/net_tools/nios/nios_network.py validate-modules:E324
lib/ansible/modules/net_tools/nios/nios_network.py validate-modules:E337
lib/ansible/modules/net_tools/nios/nios_network.py validate-modules:E338
lib/ansible/modules/net_tools/nios/nios_network_view.py validate-modules:E322
lib/ansible/modules/net_tools/nios/nios_network_view.py validate-modules:E324
lib/ansible/modules/net_tools/nios/nios_network_view.py validate-modules:E337
lib/ansible/modules/net_tools/nios/nios_network_view.py validate-modules:E338
lib/ansible/modules/net_tools/nios/nios_nsgroup.py validate-modules:E322
lib/ansible/modules/net_tools/nios/nios_nsgroup.py validate-modules:E324
lib/ansible/modules/net_tools/nios/nios_nsgroup.py validate-modules:E326
lib/ansible/modules/net_tools/nios/nios_nsgroup.py validate-modules:E337
lib/ansible/modules/net_tools/nios/nios_nsgroup.py validate-modules:E338
lib/ansible/modules/net_tools/nios/nios_nsgroup.py validate-modules:E340
lib/ansible/modules/net_tools/nios/nios_ptr_record.py validate-modules:E322
lib/ansible/modules/net_tools/nios/nios_ptr_record.py validate-modules:E324
lib/ansible/modules/net_tools/nios/nios_ptr_record.py validate-modules:E337
lib/ansible/modules/net_tools/nios/nios_ptr_record.py validate-modules:E338
lib/ansible/modules/net_tools/nios/nios_srv_record.py validate-modules:E322
lib/ansible/modules/net_tools/nios/nios_srv_record.py validate-modules:E324
lib/ansible/modules/net_tools/nios/nios_srv_record.py validate-modules:E337
lib/ansible/modules/net_tools/nios/nios_srv_record.py validate-modules:E338
lib/ansible/modules/net_tools/nios/nios_txt_record.py validate-modules:E322
lib/ansible/modules/net_tools/nios/nios_txt_record.py validate-modules:E324
lib/ansible/modules/net_tools/nios/nios_txt_record.py validate-modules:E337
lib/ansible/modules/net_tools/nios/nios_txt_record.py validate-modules:E338
lib/ansible/modules/net_tools/nios/nios_zone.py validate-modules:E322
lib/ansible/modules/net_tools/nios/nios_zone.py validate-modules:E324
lib/ansible/modules/net_tools/nios/nios_zone.py validate-modules:E337
lib/ansible/modules/net_tools/nios/nios_zone.py validate-modules:E338
lib/ansible/modules/net_tools/nmcli.py validate-modules:E337
lib/ansible/modules/net_tools/nsupdate.py validate-modules:E337
lib/ansible/modules/network/a10/a10_server.py validate-modules:E337
lib/ansible/modules/network/a10/a10_server_axapi3.py validate-modules:E326
lib/ansible/modules/network/a10/a10_server_axapi3.py validate-modules:E337
lib/ansible/modules/network/a10/a10_service_group.py validate-modules:E337
lib/ansible/modules/network/a10/a10_virtual_server.py validate-modules:E324
lib/ansible/modules/network/a10/a10_virtual_server.py validate-modules:E337
lib/ansible/modules/network/aci/aci_access_port_block_to_access_port.py validate-modules:E337
lib/ansible/modules/network/aci/aci_access_sub_port_block_to_access_port.py validate-modules:E337
lib/ansible/modules/network/aci/aci_bd.py validate-modules:E337
lib/ansible/modules/network/aci/aci_contract_subject.py validate-modules:E337
lib/ansible/modules/network/aci/aci_fabric_scheduler.py validate-modules:E337
lib/ansible/modules/network/aci/aci_firmware_group.py validate-modules:E337
lib/ansible/modules/network/aci/aci_firmware_group_node.py validate-modules:E337
lib/ansible/modules/network/aci/aci_firmware_policy.py validate-modules:E337
lib/ansible/modules/network/aci/aci_maintenance_group.py validate-modules:E337
lib/ansible/modules/network/aci/aci_maintenance_group_node.py validate-modules:E337
lib/ansible/modules/network/aci/aci_maintenance_policy.py validate-modules:E337
lib/ansible/modules/network/aci/mso_schema_template_anp_epg.py validate-modules:E322
lib/ansible/modules/network/aci/mso_schema_template_anp_epg.py validate-modules:E337
lib/ansible/modules/network/aci/mso_schema_template_bd.py validate-modules:E322
lib/ansible/modules/network/aci/mso_schema_template_bd.py validate-modules:E337
lib/ansible/modules/network/aci/mso_schema_template_externalepg.py validate-modules:E322
lib/ansible/modules/network/aci/mso_schema_template_externalepg.py validate-modules:E337
lib/ansible/modules/network/aci/mso_schema_template_externalepg.py validate-modules:E340
lib/ansible/modules/network/aci/mso_schema_template_l3out.py validate-modules:E322
lib/ansible/modules/network/aci/mso_schema_template_l3out.py validate-modules:E337
lib/ansible/modules/network/aci/mso_schema_template_l3out.py validate-modules:E340
lib/ansible/modules/network/aci/mso_site.py validate-modules:E337
lib/ansible/modules/network/aireos/aireos_command.py validate-modules:E324
lib/ansible/modules/network/aireos/aireos_command.py validate-modules:E337
lib/ansible/modules/network/aireos/aireos_command.py validate-modules:E338
lib/ansible/modules/network/aireos/aireos_config.py validate-modules:E324
lib/ansible/modules/network/aireos/aireos_config.py validate-modules:E337
lib/ansible/modules/network/aireos/aireos_config.py validate-modules:E338
lib/ansible/modules/network/aos/_aos_asn_pool.py future-import-boilerplate
lib/ansible/modules/network/aos/_aos_asn_pool.py metaclass-boilerplate
lib/ansible/modules/network/aos/_aos_blueprint.py future-import-boilerplate
lib/ansible/modules/network/aos/_aos_blueprint.py metaclass-boilerplate
lib/ansible/modules/network/aos/_aos_blueprint_param.py future-import-boilerplate
lib/ansible/modules/network/aos/_aos_blueprint_param.py metaclass-boilerplate
lib/ansible/modules/network/aos/_aos_blueprint_virtnet.py future-import-boilerplate
lib/ansible/modules/network/aos/_aos_blueprint_virtnet.py metaclass-boilerplate
lib/ansible/modules/network/aos/_aos_device.py future-import-boilerplate
lib/ansible/modules/network/aos/_aos_device.py metaclass-boilerplate
lib/ansible/modules/network/aos/_aos_external_router.py future-import-boilerplate
lib/ansible/modules/network/aos/_aos_external_router.py metaclass-boilerplate
lib/ansible/modules/network/aos/_aos_ip_pool.py future-import-boilerplate
lib/ansible/modules/network/aos/_aos_ip_pool.py metaclass-boilerplate
lib/ansible/modules/network/aos/_aos_logical_device.py future-import-boilerplate
lib/ansible/modules/network/aos/_aos_logical_device.py metaclass-boilerplate
lib/ansible/modules/network/aos/_aos_logical_device_map.py future-import-boilerplate
lib/ansible/modules/network/aos/_aos_logical_device_map.py metaclass-boilerplate
lib/ansible/modules/network/aos/_aos_login.py future-import-boilerplate
lib/ansible/modules/network/aos/_aos_login.py metaclass-boilerplate
lib/ansible/modules/network/aos/_aos_rack_type.py future-import-boilerplate
lib/ansible/modules/network/aos/_aos_rack_type.py metaclass-boilerplate
lib/ansible/modules/network/aos/_aos_template.py future-import-boilerplate
lib/ansible/modules/network/aos/_aos_template.py metaclass-boilerplate
lib/ansible/modules/network/aruba/aruba_command.py validate-modules:E324
lib/ansible/modules/network/aruba/aruba_command.py validate-modules:E337
lib/ansible/modules/network/aruba/aruba_command.py validate-modules:E338
lib/ansible/modules/network/aruba/aruba_config.py validate-modules:E324
lib/ansible/modules/network/aruba/aruba_config.py validate-modules:E337
lib/ansible/modules/network/aruba/aruba_config.py validate-modules:E338
lib/ansible/modules/network/asa/asa_acl.py validate-modules:E322
lib/ansible/modules/network/asa/asa_acl.py validate-modules:E324
lib/ansible/modules/network/asa/asa_acl.py validate-modules:E337
lib/ansible/modules/network/asa/asa_acl.py validate-modules:E338
lib/ansible/modules/network/asa/asa_command.py validate-modules:E322
lib/ansible/modules/network/asa/asa_command.py validate-modules:E324
lib/ansible/modules/network/asa/asa_command.py validate-modules:E337
lib/ansible/modules/network/asa/asa_command.py validate-modules:E338
lib/ansible/modules/network/asa/asa_config.py validate-modules:E322
lib/ansible/modules/network/asa/asa_config.py validate-modules:E324
lib/ansible/modules/network/asa/asa_config.py validate-modules:E335
lib/ansible/modules/network/asa/asa_config.py validate-modules:E337
lib/ansible/modules/network/asa/asa_config.py validate-modules:E338
lib/ansible/modules/network/asa/asa_og.py validate-modules:E337
lib/ansible/modules/network/asa/asa_og.py validate-modules:E338
lib/ansible/modules/network/avi/avi_actiongroupconfig.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_actiongroupconfig.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_actiongroupconfig.py validate-modules:E337
lib/ansible/modules/network/avi/avi_actiongroupconfig.py validate-modules:E338
lib/ansible/modules/network/avi/avi_alertconfig.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_alertconfig.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_alertconfig.py validate-modules:E337
lib/ansible/modules/network/avi/avi_alertconfig.py validate-modules:E338
lib/ansible/modules/network/avi/avi_alertemailconfig.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_alertemailconfig.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_alertemailconfig.py validate-modules:E337
lib/ansible/modules/network/avi/avi_alertemailconfig.py validate-modules:E338
lib/ansible/modules/network/avi/avi_alertscriptconfig.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_alertscriptconfig.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_alertscriptconfig.py validate-modules:E337
lib/ansible/modules/network/avi/avi_alertscriptconfig.py validate-modules:E338
lib/ansible/modules/network/avi/avi_alertsyslogconfig.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_alertsyslogconfig.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_alertsyslogconfig.py validate-modules:E337
lib/ansible/modules/network/avi/avi_alertsyslogconfig.py validate-modules:E338
lib/ansible/modules/network/avi/avi_analyticsprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_analyticsprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_analyticsprofile.py validate-modules:E337
lib/ansible/modules/network/avi/avi_analyticsprofile.py validate-modules:E338
lib/ansible/modules/network/avi/avi_api_session.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_api_session.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_api_session.py validate-modules:E337
lib/ansible/modules/network/avi/avi_api_session.py validate-modules:E338
lib/ansible/modules/network/avi/avi_api_version.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_api_version.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_api_version.py validate-modules:E337
lib/ansible/modules/network/avi/avi_api_version.py validate-modules:E338
lib/ansible/modules/network/avi/avi_applicationpersistenceprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_applicationpersistenceprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_applicationpersistenceprofile.py validate-modules:E337
lib/ansible/modules/network/avi/avi_applicationpersistenceprofile.py validate-modules:E338
lib/ansible/modules/network/avi/avi_applicationprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_applicationprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_applicationprofile.py validate-modules:E337
lib/ansible/modules/network/avi/avi_applicationprofile.py validate-modules:E338
lib/ansible/modules/network/avi/avi_authprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_authprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_authprofile.py validate-modules:E337
lib/ansible/modules/network/avi/avi_authprofile.py validate-modules:E338
lib/ansible/modules/network/avi/avi_autoscalelaunchconfig.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_autoscalelaunchconfig.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_autoscalelaunchconfig.py validate-modules:E337
lib/ansible/modules/network/avi/avi_autoscalelaunchconfig.py validate-modules:E338
lib/ansible/modules/network/avi/avi_backup.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_backup.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_backup.py validate-modules:E337
lib/ansible/modules/network/avi/avi_backup.py validate-modules:E338
lib/ansible/modules/network/avi/avi_backupconfiguration.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_backupconfiguration.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_backupconfiguration.py validate-modules:E337
lib/ansible/modules/network/avi/avi_backupconfiguration.py validate-modules:E338
lib/ansible/modules/network/avi/avi_certificatemanagementprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_certificatemanagementprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_certificatemanagementprofile.py validate-modules:E337
lib/ansible/modules/network/avi/avi_certificatemanagementprofile.py validate-modules:E338
lib/ansible/modules/network/avi/avi_cloud.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_cloud.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_cloud.py validate-modules:E337
lib/ansible/modules/network/avi/avi_cloud.py validate-modules:E338
lib/ansible/modules/network/avi/avi_cloudconnectoruser.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_cloudconnectoruser.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_cloudconnectoruser.py validate-modules:E337
lib/ansible/modules/network/avi/avi_cloudconnectoruser.py validate-modules:E338
lib/ansible/modules/network/avi/avi_cloudproperties.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_cloudproperties.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_cloudproperties.py validate-modules:E337
lib/ansible/modules/network/avi/avi_cloudproperties.py validate-modules:E338
lib/ansible/modules/network/avi/avi_cluster.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_cluster.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_cluster.py validate-modules:E337
lib/ansible/modules/network/avi/avi_cluster.py validate-modules:E338
lib/ansible/modules/network/avi/avi_clusterclouddetails.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_clusterclouddetails.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_clusterclouddetails.py validate-modules:E337
lib/ansible/modules/network/avi/avi_clusterclouddetails.py validate-modules:E338
lib/ansible/modules/network/avi/avi_controllerproperties.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_controllerproperties.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_controllerproperties.py validate-modules:E337
lib/ansible/modules/network/avi/avi_controllerproperties.py validate-modules:E338
lib/ansible/modules/network/avi/avi_customipamdnsprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_customipamdnsprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_customipamdnsprofile.py validate-modules:E337
lib/ansible/modules/network/avi/avi_customipamdnsprofile.py validate-modules:E338
lib/ansible/modules/network/avi/avi_dnspolicy.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_dnspolicy.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_dnspolicy.py validate-modules:E337
lib/ansible/modules/network/avi/avi_dnspolicy.py validate-modules:E338
lib/ansible/modules/network/avi/avi_errorpagebody.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_errorpagebody.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_errorpagebody.py validate-modules:E337
lib/ansible/modules/network/avi/avi_errorpagebody.py validate-modules:E338
lib/ansible/modules/network/avi/avi_errorpageprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_errorpageprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_errorpageprofile.py validate-modules:E337
lib/ansible/modules/network/avi/avi_errorpageprofile.py validate-modules:E338
lib/ansible/modules/network/avi/avi_gslb.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_gslb.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_gslb.py validate-modules:E337
lib/ansible/modules/network/avi/avi_gslb.py validate-modules:E338
lib/ansible/modules/network/avi/avi_gslbgeodbprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_gslbgeodbprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_gslbgeodbprofile.py validate-modules:E337
lib/ansible/modules/network/avi/avi_gslbgeodbprofile.py validate-modules:E338
lib/ansible/modules/network/avi/avi_gslbservice.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_gslbservice.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_gslbservice.py validate-modules:E337
lib/ansible/modules/network/avi/avi_gslbservice.py validate-modules:E338
lib/ansible/modules/network/avi/avi_gslbservice_patch_member.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_gslbservice_patch_member.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_gslbservice_patch_member.py validate-modules:E337
lib/ansible/modules/network/avi/avi_gslbservice_patch_member.py validate-modules:E338
lib/ansible/modules/network/avi/avi_hardwaresecuritymodulegroup.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_hardwaresecuritymodulegroup.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_hardwaresecuritymodulegroup.py validate-modules:E337
lib/ansible/modules/network/avi/avi_hardwaresecuritymodulegroup.py validate-modules:E338
lib/ansible/modules/network/avi/avi_healthmonitor.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_healthmonitor.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_healthmonitor.py validate-modules:E337
lib/ansible/modules/network/avi/avi_healthmonitor.py validate-modules:E338
lib/ansible/modules/network/avi/avi_httppolicyset.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_httppolicyset.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_httppolicyset.py validate-modules:E337
lib/ansible/modules/network/avi/avi_httppolicyset.py validate-modules:E338
lib/ansible/modules/network/avi/avi_ipaddrgroup.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_ipaddrgroup.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_ipaddrgroup.py validate-modules:E337
lib/ansible/modules/network/avi/avi_ipaddrgroup.py validate-modules:E338
lib/ansible/modules/network/avi/avi_ipamdnsproviderprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_ipamdnsproviderprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_ipamdnsproviderprofile.py validate-modules:E337
lib/ansible/modules/network/avi/avi_ipamdnsproviderprofile.py validate-modules:E338
lib/ansible/modules/network/avi/avi_l4policyset.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_l4policyset.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_l4policyset.py validate-modules:E337
lib/ansible/modules/network/avi/avi_l4policyset.py validate-modules:E338
lib/ansible/modules/network/avi/avi_microservicegroup.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_microservicegroup.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_microservicegroup.py validate-modules:E337
lib/ansible/modules/network/avi/avi_microservicegroup.py validate-modules:E338
lib/ansible/modules/network/avi/avi_network.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_network.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_network.py validate-modules:E337
lib/ansible/modules/network/avi/avi_network.py validate-modules:E338
lib/ansible/modules/network/avi/avi_networkprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_networkprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_networkprofile.py validate-modules:E337
lib/ansible/modules/network/avi/avi_networkprofile.py validate-modules:E338
lib/ansible/modules/network/avi/avi_networksecuritypolicy.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_networksecuritypolicy.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_networksecuritypolicy.py validate-modules:E337
lib/ansible/modules/network/avi/avi_networksecuritypolicy.py validate-modules:E338
lib/ansible/modules/network/avi/avi_pkiprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_pkiprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_pkiprofile.py validate-modules:E337
lib/ansible/modules/network/avi/avi_pkiprofile.py validate-modules:E338
lib/ansible/modules/network/avi/avi_pool.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_pool.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_pool.py validate-modules:E337
lib/ansible/modules/network/avi/avi_pool.py validate-modules:E338
lib/ansible/modules/network/avi/avi_poolgroup.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_poolgroup.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_poolgroup.py validate-modules:E337
lib/ansible/modules/network/avi/avi_poolgroup.py validate-modules:E338
lib/ansible/modules/network/avi/avi_poolgroupdeploymentpolicy.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_poolgroupdeploymentpolicy.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_poolgroupdeploymentpolicy.py validate-modules:E337
lib/ansible/modules/network/avi/avi_poolgroupdeploymentpolicy.py validate-modules:E338
lib/ansible/modules/network/avi/avi_prioritylabels.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_prioritylabels.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_prioritylabels.py validate-modules:E337
lib/ansible/modules/network/avi/avi_prioritylabels.py validate-modules:E338
lib/ansible/modules/network/avi/avi_role.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_role.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_role.py validate-modules:E337
lib/ansible/modules/network/avi/avi_role.py validate-modules:E338
lib/ansible/modules/network/avi/avi_scheduler.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_scheduler.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_scheduler.py validate-modules:E337
lib/ansible/modules/network/avi/avi_scheduler.py validate-modules:E338
lib/ansible/modules/network/avi/avi_seproperties.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_seproperties.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_seproperties.py validate-modules:E337
lib/ansible/modules/network/avi/avi_seproperties.py validate-modules:E338
lib/ansible/modules/network/avi/avi_serverautoscalepolicy.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_serverautoscalepolicy.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_serverautoscalepolicy.py validate-modules:E337
lib/ansible/modules/network/avi/avi_serverautoscalepolicy.py validate-modules:E338
lib/ansible/modules/network/avi/avi_serviceengine.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_serviceengine.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_serviceengine.py validate-modules:E337
lib/ansible/modules/network/avi/avi_serviceengine.py validate-modules:E338
lib/ansible/modules/network/avi/avi_serviceenginegroup.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_serviceenginegroup.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_serviceenginegroup.py validate-modules:E337
lib/ansible/modules/network/avi/avi_serviceenginegroup.py validate-modules:E338
lib/ansible/modules/network/avi/avi_snmptrapprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_snmptrapprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_snmptrapprofile.py validate-modules:E337
lib/ansible/modules/network/avi/avi_snmptrapprofile.py validate-modules:E338
lib/ansible/modules/network/avi/avi_sslkeyandcertificate.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_sslkeyandcertificate.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_sslkeyandcertificate.py validate-modules:E337
lib/ansible/modules/network/avi/avi_sslkeyandcertificate.py validate-modules:E338
lib/ansible/modules/network/avi/avi_sslprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_sslprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_sslprofile.py validate-modules:E337
lib/ansible/modules/network/avi/avi_sslprofile.py validate-modules:E338
lib/ansible/modules/network/avi/avi_stringgroup.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_stringgroup.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_stringgroup.py validate-modules:E337
lib/ansible/modules/network/avi/avi_stringgroup.py validate-modules:E338
lib/ansible/modules/network/avi/avi_systemconfiguration.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_systemconfiguration.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_systemconfiguration.py validate-modules:E337
lib/ansible/modules/network/avi/avi_systemconfiguration.py validate-modules:E338
lib/ansible/modules/network/avi/avi_tenant.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_tenant.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_tenant.py validate-modules:E337
lib/ansible/modules/network/avi/avi_tenant.py validate-modules:E338
lib/ansible/modules/network/avi/avi_trafficcloneprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_trafficcloneprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_trafficcloneprofile.py validate-modules:E337
lib/ansible/modules/network/avi/avi_trafficcloneprofile.py validate-modules:E338
lib/ansible/modules/network/avi/avi_user.py validate-modules:E337
lib/ansible/modules/network/avi/avi_user.py validate-modules:E338
lib/ansible/modules/network/avi/avi_useraccount.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_useraccount.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_useraccount.py validate-modules:E337
lib/ansible/modules/network/avi/avi_useraccount.py validate-modules:E338
lib/ansible/modules/network/avi/avi_useraccountprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_useraccountprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_useraccountprofile.py validate-modules:E337
lib/ansible/modules/network/avi/avi_useraccountprofile.py validate-modules:E338
lib/ansible/modules/network/avi/avi_virtualservice.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_virtualservice.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_virtualservice.py validate-modules:E337
lib/ansible/modules/network/avi/avi_virtualservice.py validate-modules:E338
lib/ansible/modules/network/avi/avi_vrfcontext.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_vrfcontext.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_vrfcontext.py validate-modules:E337
lib/ansible/modules/network/avi/avi_vrfcontext.py validate-modules:E338
lib/ansible/modules/network/avi/avi_vsdatascriptset.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_vsdatascriptset.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_vsdatascriptset.py validate-modules:E337
lib/ansible/modules/network/avi/avi_vsdatascriptset.py validate-modules:E338
lib/ansible/modules/network/avi/avi_vsvip.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_vsvip.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_vsvip.py validate-modules:E337
lib/ansible/modules/network/avi/avi_vsvip.py validate-modules:E338
lib/ansible/modules/network/avi/avi_webhook.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_webhook.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_webhook.py validate-modules:E337
lib/ansible/modules/network/avi/avi_webhook.py validate-modules:E338
lib/ansible/modules/network/bigswitch/bcf_switch.py validate-modules:E337
lib/ansible/modules/network/bigswitch/bcf_switch.py validate-modules:E338
lib/ansible/modules/network/bigswitch/bigmon_chain.py validate-modules:E337
lib/ansible/modules/network/bigswitch/bigmon_chain.py validate-modules:E338
lib/ansible/modules/network/bigswitch/bigmon_policy.py validate-modules:E324
lib/ansible/modules/network/bigswitch/bigmon_policy.py validate-modules:E326
lib/ansible/modules/network/bigswitch/bigmon_policy.py validate-modules:E337
lib/ansible/modules/network/bigswitch/bigmon_policy.py validate-modules:E338
lib/ansible/modules/network/checkpoint/checkpoint_object_facts.py validate-modules:E337
lib/ansible/modules/network/checkpoint/cp_network.py validate-modules:E337
lib/ansible/modules/network/cli/cli_command.py validate-modules:E337
lib/ansible/modules/network/cli/cli_config.py validate-modules:E337
lib/ansible/modules/network/cli/cli_config.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_aaa_server.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_aaa_server.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_aaa_server.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_aaa_server.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_aaa_server.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_aaa_server.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_aaa_server.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_aaa_server.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_aaa_server_host.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_aaa_server_host.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_aaa_server_host.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_aaa_server_host.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_aaa_server_host.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_aaa_server_host.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_aaa_server_host.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_aaa_server_host.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_acl.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_acl.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_acl.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_acl.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_acl.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_acl.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_acl.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_acl.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_acl_advance.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_acl_advance.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_acl_advance.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_acl_advance.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_acl_advance.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_acl_advance.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_acl_advance.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_acl_advance.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_acl_interface.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_acl_interface.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_acl_interface.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_acl_interface.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_acl_interface.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_acl_interface.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_acl_interface.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_acl_interface.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_bfd_global.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_bfd_global.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_bfd_global.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_bfd_global.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_bfd_global.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_bfd_global.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_bfd_global.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_bfd_global.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_bfd_session.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_bfd_session.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_bfd_session.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_bfd_session.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_bfd_session.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_bfd_session.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_bfd_session.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_bfd_session.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_bfd_view.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_bfd_view.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_bfd_view.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_bfd_view.py validate-modules:E327
lib/ansible/modules/network/cloudengine/ce_bfd_view.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_bfd_view.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_bgp.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_bgp.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_bgp.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_bgp.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_bgp.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_bgp.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_bgp.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_bgp.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_bgp_af.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_bgp_af.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_bgp_af.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_bgp_af.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_bgp_af.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_bgp_af.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_bgp_af.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_bgp_af.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor_af.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor_af.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor_af.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor_af.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor_af.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor_af.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor_af.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor_af.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_command.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_command.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_command.py pylint:blacklisted-name
lib/ansible/modules/network/cloudengine/ce_command.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_command.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_command.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_command.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_command.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_command.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_config.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_config.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_config.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_config.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_config.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_config.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_config.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_config.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_dldp.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_dldp.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_dldp.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_dldp.py validate-modules:E323
lib/ansible/modules/network/cloudengine/ce_dldp.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_dldp.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_dldp.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_dldp.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_dldp.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_dldp_interface.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_dldp_interface.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_dldp_interface.py pylint:blacklisted-name
lib/ansible/modules/network/cloudengine/ce_dldp_interface.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_dldp_interface.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_dldp_interface.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_dldp_interface.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_dldp_interface.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_dldp_interface.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_eth_trunk.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_eth_trunk.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_eth_trunk.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_eth_trunk.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_eth_trunk.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_eth_trunk.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_eth_trunk.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_eth_trunk.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_evpn_bd_vni.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_evpn_bd_vni.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_evpn_bd_vni.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_evpn_bd_vni.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_evpn_bd_vni.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_evpn_bd_vni.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_evpn_bd_vni.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_evpn_bd_vni.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_evpn_bgp.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_evpn_bgp.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_evpn_bgp.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_evpn_bgp.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_evpn_bgp.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_evpn_bgp.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_evpn_bgp.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_evpn_bgp.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_evpn_bgp_rr.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_evpn_bgp_rr.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_evpn_bgp_rr.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_evpn_bgp_rr.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_evpn_bgp_rr.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_evpn_bgp_rr.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_evpn_bgp_rr.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_evpn_bgp_rr.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_evpn_global.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_evpn_global.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_evpn_global.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_evpn_global.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_evpn_global.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_evpn_global.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_evpn_global.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_evpn_global.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_facts.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_facts.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_facts.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_facts.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_facts.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_facts.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_facts.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_facts.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_file_copy.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_file_copy.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_file_copy.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_file_copy.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_file_copy.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_file_copy.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_file_copy.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_file_copy.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_info_center_debug.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_info_center_debug.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_info_center_debug.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_info_center_debug.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_info_center_debug.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_info_center_debug.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_info_center_debug.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_info_center_debug.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_info_center_global.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_info_center_global.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_info_center_global.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_info_center_global.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_info_center_global.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_info_center_global.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_info_center_global.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_info_center_global.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_info_center_log.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_info_center_log.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_info_center_log.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_info_center_log.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_info_center_log.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_info_center_log.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_info_center_log.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_info_center_log.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_info_center_trap.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_info_center_trap.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_info_center_trap.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_info_center_trap.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_info_center_trap.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_info_center_trap.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_info_center_trap.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_info_center_trap.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_interface.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_interface.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_interface.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_interface.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_interface.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_interface.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_interface.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_interface.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_interface_ospf.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_interface_ospf.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_interface_ospf.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_interface_ospf.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_interface_ospf.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_interface_ospf.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_interface_ospf.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_interface_ospf.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_ip_interface.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_ip_interface.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_ip_interface.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_ip_interface.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_ip_interface.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_ip_interface.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_ip_interface.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_ip_interface.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_link_status.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_link_status.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_link_status.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_link_status.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_link_status.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_link_status.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_link_status.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_link_status.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_mlag_config.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_mlag_config.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_mlag_config.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_mlag_config.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_mlag_config.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_mlag_config.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_mlag_config.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_mlag_config.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_mlag_interface.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_mlag_interface.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_mlag_interface.py pylint:blacklisted-name
lib/ansible/modules/network/cloudengine/ce_mlag_interface.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_mlag_interface.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_mlag_interface.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_mlag_interface.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_mlag_interface.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_mlag_interface.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_mtu.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_mtu.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_mtu.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_mtu.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_mtu.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_mtu.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_mtu.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_mtu.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_netconf.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_netconf.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_netconf.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_netconf.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_netconf.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_netconf.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_netconf.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_netconf.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_netstream_aging.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_netstream_aging.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_netstream_aging.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_netstream_aging.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_netstream_aging.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_netstream_aging.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_netstream_aging.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_netstream_aging.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_netstream_export.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_netstream_export.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_netstream_export.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_netstream_export.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_netstream_export.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_netstream_export.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_netstream_export.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_netstream_export.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_netstream_global.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_netstream_global.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_netstream_global.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_netstream_global.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_netstream_global.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_netstream_global.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_netstream_global.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_netstream_global.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_netstream_template.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_netstream_template.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_netstream_template.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_netstream_template.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_netstream_template.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_netstream_template.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_netstream_template.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_netstream_template.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_ntp.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_ntp.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_ntp.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_ntp.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_ntp.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_ntp.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_ntp.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_ntp.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_ntp_auth.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_ntp_auth.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_ntp_auth.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_ntp_auth.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_ntp_auth.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_ntp_auth.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_ntp_auth.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_ntp_auth.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_ospf.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_ospf.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_ospf.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_ospf.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_ospf.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_ospf.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_ospf.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_ospf.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_ospf_vrf.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_ospf_vrf.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_ospf_vrf.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_ospf_vrf.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_ospf_vrf.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_ospf_vrf.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_ospf_vrf.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_ospf_vrf.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_reboot.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_reboot.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_reboot.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_reboot.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_reboot.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_reboot.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_reboot.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_reboot.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_rollback.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_rollback.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_rollback.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_rollback.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_rollback.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_rollback.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_rollback.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_rollback.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_sflow.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_sflow.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_sflow.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_sflow.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_sflow.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_sflow.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_sflow.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_sflow.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_snmp_community.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_snmp_community.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_snmp_community.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_snmp_community.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_snmp_community.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_snmp_community.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_snmp_community.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_snmp_community.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_snmp_contact.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_snmp_contact.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_snmp_contact.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_snmp_contact.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_snmp_contact.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_snmp_contact.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_snmp_contact.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_snmp_contact.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_snmp_location.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_snmp_location.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_snmp_location.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_snmp_location.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_snmp_location.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_snmp_location.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_snmp_location.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_snmp_location.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_snmp_target_host.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_snmp_target_host.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_snmp_target_host.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_snmp_target_host.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_snmp_target_host.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_snmp_target_host.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_snmp_target_host.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_snmp_target_host.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_snmp_traps.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_snmp_traps.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_snmp_traps.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_snmp_traps.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_snmp_traps.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_snmp_traps.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_snmp_traps.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_snmp_traps.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_snmp_user.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_snmp_user.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_snmp_user.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_snmp_user.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_snmp_user.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_snmp_user.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_snmp_user.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_snmp_user.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_startup.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_startup.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_startup.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_startup.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_startup.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_startup.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_startup.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_startup.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_static_route.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_static_route.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_static_route.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_static_route.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_static_route.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_static_route.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_static_route.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_static_route.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_stp.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_stp.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_stp.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_stp.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_stp.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_stp.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_stp.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_stp.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_switchport.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_switchport.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_switchport.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_switchport.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_switchport.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_switchport.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_switchport.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_switchport.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_vlan.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_vlan.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_vlan.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_vlan.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_vlan.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_vlan.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_vlan.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_vlan.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_vrf.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_vrf.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_vrf.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_vrf.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_vrf.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_vrf.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_vrf.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_vrf.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_vrf_af.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_vrf_af.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_vrf_af.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_vrf_af.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_vrf_af.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_vrf_af.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_vrf_af.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_vrf_af.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_vrf_interface.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_vrf_interface.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_vrf_interface.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_vrf_interface.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_vrf_interface.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_vrf_interface.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_vrf_interface.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_vrf_interface.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_vrrp.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_vrrp.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_vrrp.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_vrrp.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_vrrp.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_vrrp.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_vrrp.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_vrrp.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_vxlan_arp.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_vxlan_arp.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_vxlan_arp.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_vxlan_arp.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_vxlan_arp.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_vxlan_arp.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_vxlan_arp.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_vxlan_arp.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_vxlan_gateway.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_vxlan_gateway.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_vxlan_gateway.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_vxlan_gateway.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_vxlan_gateway.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_vxlan_gateway.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_vxlan_gateway.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_vxlan_gateway.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_vxlan_global.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_vxlan_global.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_vxlan_global.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_vxlan_global.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_vxlan_global.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_vxlan_global.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_vxlan_global.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_vxlan_global.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_vxlan_tunnel.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_vxlan_tunnel.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_vxlan_tunnel.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_vxlan_tunnel.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_vxlan_tunnel.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_vxlan_tunnel.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_vxlan_tunnel.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_vxlan_tunnel.py validate-modules:E340
lib/ansible/modules/network/cloudengine/ce_vxlan_vap.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_vxlan_vap.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_vxlan_vap.py validate-modules:E322
lib/ansible/modules/network/cloudengine/ce_vxlan_vap.py validate-modules:E324
lib/ansible/modules/network/cloudengine/ce_vxlan_vap.py validate-modules:E326
lib/ansible/modules/network/cloudengine/ce_vxlan_vap.py validate-modules:E337
lib/ansible/modules/network/cloudengine/ce_vxlan_vap.py validate-modules:E338
lib/ansible/modules/network/cloudengine/ce_vxlan_vap.py validate-modules:E340
lib/ansible/modules/network/cloudvision/cv_server_provision.py pylint:blacklisted-name
lib/ansible/modules/network/cloudvision/cv_server_provision.py validate-modules:E338
lib/ansible/modules/network/cnos/cnos_backup.py validate-modules:E322
lib/ansible/modules/network/cnos/cnos_backup.py validate-modules:E323
lib/ansible/modules/network/cnos/cnos_backup.py validate-modules:E326
lib/ansible/modules/network/cnos/cnos_backup.py validate-modules:E338
lib/ansible/modules/network/cnos/cnos_banner.py validate-modules:E322
lib/ansible/modules/network/cnos/cnos_banner.py validate-modules:E324
lib/ansible/modules/network/cnos/cnos_banner.py validate-modules:E337
lib/ansible/modules/network/cnos/cnos_banner.py validate-modules:E338
lib/ansible/modules/network/cnos/cnos_bgp.py validate-modules:E326
lib/ansible/modules/network/cnos/cnos_bgp.py validate-modules:E338
lib/ansible/modules/network/cnos/cnos_command.py validate-modules:E337
lib/ansible/modules/network/cnos/cnos_command.py validate-modules:E338
lib/ansible/modules/network/cnos/cnos_conditional_command.py validate-modules:E326
lib/ansible/modules/network/cnos/cnos_conditional_command.py validate-modules:E338
lib/ansible/modules/network/cnos/cnos_conditional_template.py validate-modules:E326
lib/ansible/modules/network/cnos/cnos_conditional_template.py validate-modules:E338
lib/ansible/modules/network/cnos/cnos_config.py validate-modules:E337
lib/ansible/modules/network/cnos/cnos_config.py validate-modules:E338
lib/ansible/modules/network/cnos/cnos_factory.py validate-modules:E326
lib/ansible/modules/network/cnos/cnos_facts.py validate-modules:E323
lib/ansible/modules/network/cnos/cnos_facts.py validate-modules:E337
lib/ansible/modules/network/cnos/cnos_image.py validate-modules:E326
lib/ansible/modules/network/cnos/cnos_image.py validate-modules:E338
lib/ansible/modules/network/cnos/cnos_interface.py validate-modules:E322
lib/ansible/modules/network/cnos/cnos_interface.py validate-modules:E324
lib/ansible/modules/network/cnos/cnos_interface.py validate-modules:E326
lib/ansible/modules/network/cnos/cnos_interface.py validate-modules:E337
lib/ansible/modules/network/cnos/cnos_interface.py validate-modules:E338
lib/ansible/modules/network/cnos/cnos_interface.py validate-modules:E340
lib/ansible/modules/network/cnos/cnos_l2_interface.py validate-modules:E322
lib/ansible/modules/network/cnos/cnos_l2_interface.py validate-modules:E324
lib/ansible/modules/network/cnos/cnos_l2_interface.py validate-modules:E326
lib/ansible/modules/network/cnos/cnos_l2_interface.py validate-modules:E337
lib/ansible/modules/network/cnos/cnos_l2_interface.py validate-modules:E338
lib/ansible/modules/network/cnos/cnos_l2_interface.py validate-modules:E340
lib/ansible/modules/network/cnos/cnos_l3_interface.py validate-modules:E322
lib/ansible/modules/network/cnos/cnos_l3_interface.py validate-modules:E324
lib/ansible/modules/network/cnos/cnos_l3_interface.py validate-modules:E326
lib/ansible/modules/network/cnos/cnos_l3_interface.py validate-modules:E337
lib/ansible/modules/network/cnos/cnos_l3_interface.py validate-modules:E338
lib/ansible/modules/network/cnos/cnos_l3_interface.py validate-modules:E340
lib/ansible/modules/network/cnos/cnos_linkagg.py validate-modules:E322
lib/ansible/modules/network/cnos/cnos_linkagg.py validate-modules:E324
lib/ansible/modules/network/cnos/cnos_linkagg.py validate-modules:E326
lib/ansible/modules/network/cnos/cnos_linkagg.py validate-modules:E337
lib/ansible/modules/network/cnos/cnos_linkagg.py validate-modules:E338
lib/ansible/modules/network/cnos/cnos_linkagg.py validate-modules:E340
lib/ansible/modules/network/cnos/cnos_lldp.py validate-modules:E338
lib/ansible/modules/network/cnos/cnos_logging.py validate-modules:E322
lib/ansible/modules/network/cnos/cnos_logging.py validate-modules:E326
lib/ansible/modules/network/cnos/cnos_logging.py validate-modules:E337
lib/ansible/modules/network/cnos/cnos_logging.py validate-modules:E338
lib/ansible/modules/network/cnos/cnos_logging.py validate-modules:E340
lib/ansible/modules/network/cnos/cnos_reload.py validate-modules:E326
lib/ansible/modules/network/cnos/cnos_rollback.py validate-modules:E322
lib/ansible/modules/network/cnos/cnos_rollback.py validate-modules:E323
lib/ansible/modules/network/cnos/cnos_rollback.py validate-modules:E326
lib/ansible/modules/network/cnos/cnos_rollback.py validate-modules:E338
lib/ansible/modules/network/cnos/cnos_save.py validate-modules:E326
lib/ansible/modules/network/cnos/cnos_showrun.py validate-modules:E323
lib/ansible/modules/network/cnos/cnos_static_route.py validate-modules:E322
lib/ansible/modules/network/cnos/cnos_static_route.py validate-modules:E326
lib/ansible/modules/network/cnos/cnos_static_route.py validate-modules:E337
lib/ansible/modules/network/cnos/cnos_static_route.py validate-modules:E338
lib/ansible/modules/network/cnos/cnos_static_route.py validate-modules:E340
lib/ansible/modules/network/cnos/cnos_system.py validate-modules:E337
lib/ansible/modules/network/cnos/cnos_system.py validate-modules:E338
lib/ansible/modules/network/cnos/cnos_template.py validate-modules:E326
lib/ansible/modules/network/cnos/cnos_template.py validate-modules:E338
lib/ansible/modules/network/cnos/cnos_user.py validate-modules:E322
lib/ansible/modules/network/cnos/cnos_user.py validate-modules:E326
lib/ansible/modules/network/cnos/cnos_user.py validate-modules:E337
lib/ansible/modules/network/cnos/cnos_user.py validate-modules:E338
lib/ansible/modules/network/cnos/cnos_user.py validate-modules:E340
lib/ansible/modules/network/cnos/cnos_vlag.py validate-modules:E326
lib/ansible/modules/network/cnos/cnos_vlag.py validate-modules:E338
lib/ansible/modules/network/cnos/cnos_vlan.py validate-modules:E322
lib/ansible/modules/network/cnos/cnos_vlan.py validate-modules:E324
lib/ansible/modules/network/cnos/cnos_vlan.py validate-modules:E326
lib/ansible/modules/network/cnos/cnos_vlan.py validate-modules:E337
lib/ansible/modules/network/cnos/cnos_vlan.py validate-modules:E338
lib/ansible/modules/network/cnos/cnos_vlan.py validate-modules:E340
lib/ansible/modules/network/cnos/cnos_vrf.py validate-modules:E322
lib/ansible/modules/network/cnos/cnos_vrf.py validate-modules:E326
lib/ansible/modules/network/cnos/cnos_vrf.py validate-modules:E337
lib/ansible/modules/network/cnos/cnos_vrf.py validate-modules:E338
lib/ansible/modules/network/cnos/cnos_vrf.py validate-modules:E340
lib/ansible/modules/network/cumulus/nclu.py validate-modules:E337
lib/ansible/modules/network/dellos10/dellos10_command.py validate-modules:E322
lib/ansible/modules/network/dellos10/dellos10_command.py validate-modules:E324
lib/ansible/modules/network/dellos10/dellos10_command.py validate-modules:E337
lib/ansible/modules/network/dellos10/dellos10_command.py validate-modules:E338
lib/ansible/modules/network/dellos10/dellos10_config.py validate-modules:E322
lib/ansible/modules/network/dellos10/dellos10_config.py validate-modules:E324
lib/ansible/modules/network/dellos10/dellos10_config.py validate-modules:E337
lib/ansible/modules/network/dellos10/dellos10_config.py validate-modules:E338
lib/ansible/modules/network/dellos10/dellos10_facts.py validate-modules:E322
lib/ansible/modules/network/dellos10/dellos10_facts.py validate-modules:E324
lib/ansible/modules/network/dellos10/dellos10_facts.py validate-modules:E337
lib/ansible/modules/network/dellos10/dellos10_facts.py validate-modules:E338
lib/ansible/modules/network/dellos6/dellos6_command.py validate-modules:E322
lib/ansible/modules/network/dellos6/dellos6_command.py validate-modules:E324
lib/ansible/modules/network/dellos6/dellos6_command.py validate-modules:E337
lib/ansible/modules/network/dellos6/dellos6_command.py validate-modules:E338
lib/ansible/modules/network/dellos6/dellos6_config.py validate-modules:E322
lib/ansible/modules/network/dellos6/dellos6_config.py validate-modules:E324
lib/ansible/modules/network/dellos6/dellos6_config.py validate-modules:E337
lib/ansible/modules/network/dellos6/dellos6_config.py validate-modules:E338
lib/ansible/modules/network/dellos6/dellos6_facts.py validate-modules:E322
lib/ansible/modules/network/dellos6/dellos6_facts.py validate-modules:E324
lib/ansible/modules/network/dellos6/dellos6_facts.py validate-modules:E337
lib/ansible/modules/network/dellos6/dellos6_facts.py validate-modules:E338
lib/ansible/modules/network/dellos9/dellos9_command.py validate-modules:E322
lib/ansible/modules/network/dellos9/dellos9_command.py validate-modules:E324
lib/ansible/modules/network/dellos9/dellos9_command.py validate-modules:E337
lib/ansible/modules/network/dellos9/dellos9_command.py validate-modules:E338
lib/ansible/modules/network/dellos9/dellos9_config.py validate-modules:E322
lib/ansible/modules/network/dellos9/dellos9_config.py validate-modules:E324
lib/ansible/modules/network/dellos9/dellos9_config.py validate-modules:E337
lib/ansible/modules/network/dellos9/dellos9_config.py validate-modules:E338
lib/ansible/modules/network/dellos9/dellos9_facts.py validate-modules:E322
lib/ansible/modules/network/dellos9/dellos9_facts.py validate-modules:E324
lib/ansible/modules/network/dellos9/dellos9_facts.py validate-modules:E337
lib/ansible/modules/network/dellos9/dellos9_facts.py validate-modules:E338
lib/ansible/modules/network/edgeos/edgeos_command.py validate-modules:E337
lib/ansible/modules/network/edgeos/edgeos_command.py validate-modules:E338
lib/ansible/modules/network/edgeos/edgeos_config.py validate-modules:E337
lib/ansible/modules/network/edgeos/edgeos_config.py validate-modules:E338
lib/ansible/modules/network/edgeos/edgeos_facts.py validate-modules:E337
lib/ansible/modules/network/edgeswitch/edgeswitch_facts.py validate-modules:E337
lib/ansible/modules/network/edgeswitch/edgeswitch_vlan.py validate-modules:E322
lib/ansible/modules/network/edgeswitch/edgeswitch_vlan.py validate-modules:E326
lib/ansible/modules/network/edgeswitch/edgeswitch_vlan.py validate-modules:E337
lib/ansible/modules/network/edgeswitch/edgeswitch_vlan.py validate-modules:E338
lib/ansible/modules/network/edgeswitch/edgeswitch_vlan.py validate-modules:E340
lib/ansible/modules/network/enos/enos_command.py validate-modules:E322
lib/ansible/modules/network/enos/enos_command.py validate-modules:E323
lib/ansible/modules/network/enos/enos_command.py validate-modules:E324
lib/ansible/modules/network/enos/enos_command.py validate-modules:E337
lib/ansible/modules/network/enos/enos_command.py validate-modules:E338
lib/ansible/modules/network/enos/enos_config.py validate-modules:E322
lib/ansible/modules/network/enos/enos_config.py validate-modules:E323
lib/ansible/modules/network/enos/enos_config.py validate-modules:E324
lib/ansible/modules/network/enos/enos_config.py validate-modules:E337
lib/ansible/modules/network/enos/enos_config.py validate-modules:E338
lib/ansible/modules/network/enos/enos_facts.py validate-modules:E322
lib/ansible/modules/network/enos/enos_facts.py validate-modules:E323
lib/ansible/modules/network/enos/enos_facts.py validate-modules:E324
lib/ansible/modules/network/enos/enos_facts.py validate-modules:E337
lib/ansible/modules/network/enos/enos_facts.py validate-modules:E338
lib/ansible/modules/network/eos/eos_banner.py future-import-boilerplate
lib/ansible/modules/network/eos/eos_banner.py metaclass-boilerplate
lib/ansible/modules/network/eos/eos_banner.py validate-modules:E324
lib/ansible/modules/network/eos/eos_banner.py validate-modules:E327
lib/ansible/modules/network/eos/eos_banner.py validate-modules:E338
lib/ansible/modules/network/eos/eos_bgp.py validate-modules:E325
lib/ansible/modules/network/eos/eos_bgp.py validate-modules:E326
lib/ansible/modules/network/eos/eos_bgp.py validate-modules:E337
lib/ansible/modules/network/eos/eos_bgp.py validate-modules:E338
lib/ansible/modules/network/eos/eos_command.py future-import-boilerplate
lib/ansible/modules/network/eos/eos_command.py metaclass-boilerplate
lib/ansible/modules/network/eos/eos_command.py validate-modules:E324
lib/ansible/modules/network/eos/eos_command.py validate-modules:E327
lib/ansible/modules/network/eos/eos_command.py validate-modules:E337
lib/ansible/modules/network/eos/eos_command.py validate-modules:E338
lib/ansible/modules/network/eos/eos_config.py future-import-boilerplate
lib/ansible/modules/network/eos/eos_config.py metaclass-boilerplate
lib/ansible/modules/network/eos/eos_config.py validate-modules:E324
lib/ansible/modules/network/eos/eos_config.py validate-modules:E327
lib/ansible/modules/network/eos/eos_config.py validate-modules:E337
lib/ansible/modules/network/eos/eos_config.py validate-modules:E338
lib/ansible/modules/network/eos/eos_eapi.py future-import-boilerplate
lib/ansible/modules/network/eos/eos_eapi.py metaclass-boilerplate
lib/ansible/modules/network/eos/eos_eapi.py validate-modules:E324
lib/ansible/modules/network/eos/eos_eapi.py validate-modules:E327
lib/ansible/modules/network/eos/eos_eapi.py validate-modules:E337
lib/ansible/modules/network/eos/eos_eapi.py validate-modules:E338
lib/ansible/modules/network/eos/eos_facts.py future-import-boilerplate
lib/ansible/modules/network/eos/eos_facts.py metaclass-boilerplate
lib/ansible/modules/network/eos/eos_facts.py validate-modules:E324
lib/ansible/modules/network/eos/eos_facts.py validate-modules:E327
lib/ansible/modules/network/eos/eos_facts.py validate-modules:E337
lib/ansible/modules/network/eos/eos_interface.py validate-modules:E322
lib/ansible/modules/network/eos/eos_interface.py validate-modules:E324
lib/ansible/modules/network/eos/eos_interface.py validate-modules:E326
lib/ansible/modules/network/eos/eos_interface.py validate-modules:E327
lib/ansible/modules/network/eos/eos_interface.py validate-modules:E337
lib/ansible/modules/network/eos/eos_interface.py validate-modules:E338
lib/ansible/modules/network/eos/eos_interface.py validate-modules:E340
lib/ansible/modules/network/eos/eos_l2_interface.py validate-modules:E322
lib/ansible/modules/network/eos/eos_l2_interface.py validate-modules:E324
lib/ansible/modules/network/eos/eos_l2_interface.py validate-modules:E326
lib/ansible/modules/network/eos/eos_l2_interface.py validate-modules:E327
lib/ansible/modules/network/eos/eos_l2_interface.py validate-modules:E337
lib/ansible/modules/network/eos/eos_l2_interface.py validate-modules:E338
lib/ansible/modules/network/eos/eos_l2_interface.py validate-modules:E340
lib/ansible/modules/network/eos/eos_l3_interface.py validate-modules:E322
lib/ansible/modules/network/eos/eos_l3_interface.py validate-modules:E324
lib/ansible/modules/network/eos/eos_l3_interface.py validate-modules:E326
lib/ansible/modules/network/eos/eos_l3_interface.py validate-modules:E327
lib/ansible/modules/network/eos/eos_l3_interface.py validate-modules:E337
lib/ansible/modules/network/eos/eos_l3_interface.py validate-modules:E338
lib/ansible/modules/network/eos/eos_l3_interface.py validate-modules:E340
lib/ansible/modules/network/eos/eos_linkagg.py validate-modules:E322
lib/ansible/modules/network/eos/eos_linkagg.py validate-modules:E324
lib/ansible/modules/network/eos/eos_linkagg.py validate-modules:E326
lib/ansible/modules/network/eos/eos_linkagg.py validate-modules:E327
lib/ansible/modules/network/eos/eos_linkagg.py validate-modules:E337
lib/ansible/modules/network/eos/eos_linkagg.py validate-modules:E338
lib/ansible/modules/network/eos/eos_linkagg.py validate-modules:E340
lib/ansible/modules/network/eos/eos_lldp.py validate-modules:E324
lib/ansible/modules/network/eos/eos_lldp.py validate-modules:E326
lib/ansible/modules/network/eos/eos_lldp.py validate-modules:E327
lib/ansible/modules/network/eos/eos_lldp.py validate-modules:E338
lib/ansible/modules/network/eos/eos_logging.py future-import-boilerplate
lib/ansible/modules/network/eos/eos_logging.py metaclass-boilerplate
lib/ansible/modules/network/eos/eos_logging.py validate-modules:E322
lib/ansible/modules/network/eos/eos_logging.py validate-modules:E324
lib/ansible/modules/network/eos/eos_logging.py validate-modules:E326
lib/ansible/modules/network/eos/eos_logging.py validate-modules:E327
lib/ansible/modules/network/eos/eos_logging.py validate-modules:E337
lib/ansible/modules/network/eos/eos_logging.py validate-modules:E338
lib/ansible/modules/network/eos/eos_logging.py validate-modules:E340
lib/ansible/modules/network/eos/eos_static_route.py validate-modules:E322
lib/ansible/modules/network/eos/eos_static_route.py validate-modules:E324
lib/ansible/modules/network/eos/eos_static_route.py validate-modules:E326
lib/ansible/modules/network/eos/eos_static_route.py validate-modules:E327
lib/ansible/modules/network/eos/eos_static_route.py validate-modules:E337
lib/ansible/modules/network/eos/eos_static_route.py validate-modules:E338
lib/ansible/modules/network/eos/eos_static_route.py validate-modules:E340
lib/ansible/modules/network/eos/eos_system.py future-import-boilerplate
lib/ansible/modules/network/eos/eos_system.py metaclass-boilerplate
lib/ansible/modules/network/eos/eos_system.py validate-modules:E324
lib/ansible/modules/network/eos/eos_system.py validate-modules:E327
lib/ansible/modules/network/eos/eos_system.py validate-modules:E337
lib/ansible/modules/network/eos/eos_system.py validate-modules:E338
lib/ansible/modules/network/eos/eos_user.py future-import-boilerplate
lib/ansible/modules/network/eos/eos_user.py metaclass-boilerplate
lib/ansible/modules/network/eos/eos_user.py validate-modules:E322
lib/ansible/modules/network/eos/eos_user.py validate-modules:E324
lib/ansible/modules/network/eos/eos_user.py validate-modules:E326
lib/ansible/modules/network/eos/eos_user.py validate-modules:E327
lib/ansible/modules/network/eos/eos_user.py validate-modules:E337
lib/ansible/modules/network/eos/eos_user.py validate-modules:E338
lib/ansible/modules/network/eos/eos_user.py validate-modules:E340
lib/ansible/modules/network/eos/eos_vlan.py future-import-boilerplate
lib/ansible/modules/network/eos/eos_vlan.py metaclass-boilerplate
lib/ansible/modules/network/eos/eos_vlan.py validate-modules:E322
lib/ansible/modules/network/eos/eos_vlan.py validate-modules:E324
lib/ansible/modules/network/eos/eos_vlan.py validate-modules:E326
lib/ansible/modules/network/eos/eos_vlan.py validate-modules:E327
lib/ansible/modules/network/eos/eos_vlan.py validate-modules:E337
lib/ansible/modules/network/eos/eos_vlan.py validate-modules:E338
lib/ansible/modules/network/eos/eos_vlan.py validate-modules:E340
lib/ansible/modules/network/eos/eos_vrf.py future-import-boilerplate
lib/ansible/modules/network/eos/eos_vrf.py metaclass-boilerplate
lib/ansible/modules/network/eos/eos_vrf.py validate-modules:E322
lib/ansible/modules/network/eos/eos_vrf.py validate-modules:E324
lib/ansible/modules/network/eos/eos_vrf.py validate-modules:E326
lib/ansible/modules/network/eos/eos_vrf.py validate-modules:E327
lib/ansible/modules/network/eos/eos_vrf.py validate-modules:E337
lib/ansible/modules/network/eos/eos_vrf.py validate-modules:E338
lib/ansible/modules/network/eos/eos_vrf.py validate-modules:E340
lib/ansible/modules/network/exos/exos_command.py validate-modules:E337
lib/ansible/modules/network/exos/exos_command.py validate-modules:E338
lib/ansible/modules/network/exos/exos_config.py validate-modules:E337
lib/ansible/modules/network/exos/exos_config.py validate-modules:E338
lib/ansible/modules/network/exos/exos_facts.py validate-modules:E337
lib/ansible/modules/network/f5/_bigip_asm_policy.py validate-modules:E322
lib/ansible/modules/network/f5/_bigip_asm_policy.py validate-modules:E324
lib/ansible/modules/network/f5/_bigip_asm_policy.py validate-modules:E337
lib/ansible/modules/network/f5/_bigip_asm_policy.py validate-modules:E338
lib/ansible/modules/network/f5/_bigip_facts.py validate-modules:E322
lib/ansible/modules/network/f5/_bigip_facts.py validate-modules:E324
lib/ansible/modules/network/f5/_bigip_facts.py validate-modules:E337
lib/ansible/modules/network/f5/_bigip_facts.py validate-modules:E338
lib/ansible/modules/network/f5/_bigip_gtm_facts.py validate-modules:E322
lib/ansible/modules/network/f5/_bigip_gtm_facts.py validate-modules:E324
lib/ansible/modules/network/f5/_bigip_gtm_facts.py validate-modules:E337
lib/ansible/modules/network/f5/_bigip_gtm_facts.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_apm_policy_fetch.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_apm_policy_fetch.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_apm_policy_fetch.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_apm_policy_import.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_apm_policy_import.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_apm_policy_import.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_appsvcs_extension.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_appsvcs_extension.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_appsvcs_extension.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_asm_policy_fetch.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_asm_policy_fetch.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_asm_policy_fetch.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_asm_policy_import.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_asm_policy_import.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_asm_policy_import.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_asm_policy_manage.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_asm_policy_manage.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_asm_policy_manage.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_asm_policy_server_technology.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_asm_policy_server_technology.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_asm_policy_server_technology.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_asm_policy_signature_set.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_asm_policy_signature_set.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_asm_policy_signature_set.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_cli_alias.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_cli_alias.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_cli_alias.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_cli_script.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_cli_script.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_cli_script.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_command.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_command.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_command.py validate-modules:E337
lib/ansible/modules/network/f5/bigip_command.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_config.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_config.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_config.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_configsync_action.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_configsync_action.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_configsync_action.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_data_group.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_data_group.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_data_group.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_device_auth.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_device_auth.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_device_auth.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_device_auth_ldap.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_device_auth_ldap.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_device_auth_ldap.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_device_connectivity.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_device_connectivity.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_device_connectivity.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_device_dns.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_device_dns.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_device_dns.py validate-modules:E337
lib/ansible/modules/network/f5/bigip_device_dns.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_device_group.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_device_group.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_device_group.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_device_group_member.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_device_group_member.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_device_group_member.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_device_ha_group.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_device_ha_group.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_device_ha_group.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_device_httpd.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_device_httpd.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_device_httpd.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_device_info.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_device_info.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_device_info.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_device_license.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_device_license.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_device_license.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_device_ntp.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_device_ntp.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_device_ntp.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_device_sshd.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_device_sshd.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_device_sshd.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_device_syslog.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_device_syslog.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_device_syslog.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_device_traffic_group.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_device_traffic_group.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_device_traffic_group.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_device_trust.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_device_trust.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_device_trust.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_dns_cache_resolver.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_dns_cache_resolver.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_dns_cache_resolver.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_dns_nameserver.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_dns_nameserver.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_dns_nameserver.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_dns_resolver.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_dns_resolver.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_dns_resolver.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_dns_zone.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_dns_zone.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_dns_zone.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_file_copy.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_file_copy.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_file_copy.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_firewall_address_list.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_firewall_address_list.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_firewall_address_list.py validate-modules:E326
lib/ansible/modules/network/f5/bigip_firewall_address_list.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_firewall_dos_profile.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_firewall_dos_profile.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_firewall_dos_profile.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_firewall_dos_vector.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_firewall_dos_vector.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_firewall_dos_vector.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_firewall_global_rules.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_firewall_global_rules.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_firewall_global_rules.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_firewall_log_profile.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_firewall_log_profile.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_firewall_log_profile.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_firewall_log_profile_network.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_firewall_log_profile_network.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_firewall_log_profile_network.py validate-modules:E335
lib/ansible/modules/network/f5/bigip_firewall_log_profile_network.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_firewall_policy.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_firewall_policy.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_firewall_policy.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_firewall_port_list.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_firewall_port_list.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_firewall_port_list.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_firewall_rule.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_firewall_rule.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_firewall_rule.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_firewall_rule_list.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_firewall_rule_list.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_firewall_rule_list.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_firewall_schedule.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_firewall_schedule.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_firewall_schedule.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_gtm_datacenter.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_gtm_datacenter.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_gtm_datacenter.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_gtm_global.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_gtm_global.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_gtm_global.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_gtm_monitor_bigip.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_gtm_monitor_bigip.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_gtm_monitor_bigip.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_gtm_monitor_external.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_gtm_monitor_external.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_gtm_monitor_external.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_gtm_monitor_firepass.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_gtm_monitor_firepass.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_gtm_monitor_firepass.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_gtm_monitor_http.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_gtm_monitor_http.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_gtm_monitor_http.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_gtm_monitor_https.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_gtm_monitor_https.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_gtm_monitor_https.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_gtm_monitor_tcp.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_gtm_monitor_tcp.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_gtm_monitor_tcp.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_gtm_monitor_tcp_half_open.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_gtm_monitor_tcp_half_open.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_gtm_monitor_tcp_half_open.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_gtm_pool.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_gtm_pool.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_gtm_pool.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_gtm_pool_member.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_gtm_pool_member.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_gtm_pool_member.py validate-modules:E326
lib/ansible/modules/network/f5/bigip_gtm_pool_member.py validate-modules:E337
lib/ansible/modules/network/f5/bigip_gtm_pool_member.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_gtm_pool_member.py validate-modules:E340
lib/ansible/modules/network/f5/bigip_gtm_server.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_gtm_server.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_gtm_server.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_gtm_topology_record.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_gtm_topology_record.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_gtm_topology_record.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_gtm_topology_region.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_gtm_topology_region.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_gtm_topology_region.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_gtm_virtual_server.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_gtm_virtual_server.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_gtm_virtual_server.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_gtm_wide_ip.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_gtm_wide_ip.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_gtm_wide_ip.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_hostname.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_hostname.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_hostname.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_iapp_service.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_iapp_service.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_iapp_service.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_iapp_template.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_iapp_template.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_iapp_template.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_ike_peer.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_ike_peer.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_ike_peer.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_imish_config.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_imish_config.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_imish_config.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_ipsec_policy.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_ipsec_policy.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_ipsec_policy.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_irule.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_irule.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_irule.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_log_destination.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_log_destination.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_log_destination.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_log_publisher.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_log_publisher.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_log_publisher.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_lx_package.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_lx_package.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_lx_package.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_management_route.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_management_route.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_management_route.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_message_routing_peer.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_message_routing_peer.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_message_routing_peer.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_message_routing_protocol.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_message_routing_protocol.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_message_routing_protocol.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_message_routing_route.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_message_routing_route.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_message_routing_route.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_message_routing_router.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_message_routing_router.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_message_routing_router.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_message_routing_transport_config.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_message_routing_transport_config.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_message_routing_transport_config.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_monitor_dns.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_monitor_dns.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_monitor_dns.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_monitor_external.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_monitor_external.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_monitor_external.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_monitor_gateway_icmp.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_monitor_gateway_icmp.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_monitor_gateway_icmp.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_monitor_http.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_monitor_http.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_monitor_http.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_monitor_https.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_monitor_https.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_monitor_https.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_monitor_ldap.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_monitor_ldap.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_monitor_ldap.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_monitor_snmp_dca.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_monitor_snmp_dca.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_monitor_snmp_dca.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_monitor_tcp.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_monitor_tcp.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_monitor_tcp.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_monitor_tcp_echo.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_monitor_tcp_echo.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_monitor_tcp_echo.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_monitor_tcp_half_open.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_monitor_tcp_half_open.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_monitor_tcp_half_open.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_monitor_udp.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_monitor_udp.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_monitor_udp.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_node.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_node.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_node.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_partition.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_partition.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_partition.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_password_policy.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_password_policy.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_password_policy.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_policy.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_policy.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_policy.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_policy_rule.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_policy_rule.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_policy_rule.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_pool.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_pool.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_pool.py validate-modules:E326
lib/ansible/modules/network/f5/bigip_pool.py validate-modules:E337
lib/ansible/modules/network/f5/bigip_pool.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_pool.py validate-modules:E340
lib/ansible/modules/network/f5/bigip_pool_member.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_pool_member.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_pool_member.py validate-modules:E326
lib/ansible/modules/network/f5/bigip_pool_member.py validate-modules:E337
lib/ansible/modules/network/f5/bigip_pool_member.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_pool_member.py validate-modules:E340
lib/ansible/modules/network/f5/bigip_profile_analytics.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_profile_analytics.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_profile_analytics.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_profile_client_ssl.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_profile_client_ssl.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_profile_client_ssl.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_profile_dns.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_profile_dns.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_profile_dns.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_profile_fastl4.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_profile_fastl4.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_profile_fastl4.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_profile_http.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_profile_http.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_profile_http.py validate-modules:E325
lib/ansible/modules/network/f5/bigip_profile_http.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_profile_http2.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_profile_http2.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_profile_http2.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_profile_http_compression.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_profile_http_compression.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_profile_http_compression.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_profile_oneconnect.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_profile_oneconnect.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_profile_oneconnect.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_profile_persistence_cookie.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_profile_persistence_cookie.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_profile_persistence_cookie.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_profile_persistence_src_addr.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_profile_persistence_src_addr.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_profile_persistence_src_addr.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_profile_server_ssl.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_profile_server_ssl.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_profile_server_ssl.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_profile_tcp.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_profile_tcp.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_profile_tcp.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_profile_udp.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_profile_udp.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_profile_udp.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_provision.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_provision.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_provision.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_qkview.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_qkview.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_qkview.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_remote_role.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_remote_role.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_remote_role.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_remote_syslog.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_remote_syslog.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_remote_syslog.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_remote_user.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_remote_user.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_remote_user.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_routedomain.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_routedomain.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_routedomain.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_selfip.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_selfip.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_selfip.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_service_policy.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_service_policy.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_service_policy.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_smtp.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_smtp.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_smtp.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_snat_pool.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_snat_pool.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_snat_pool.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_snmp.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_snmp.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_snmp.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_snmp_community.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_snmp_community.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_snmp_community.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_snmp_trap.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_snmp_trap.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_snmp_trap.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_software_image.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_software_image.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_software_image.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_software_install.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_software_install.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_software_install.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_software_update.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_software_update.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_software_update.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_ssl_certificate.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_ssl_certificate.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_ssl_certificate.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_ssl_key.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_ssl_key.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_ssl_key.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_ssl_ocsp.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_ssl_ocsp.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_ssl_ocsp.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_static_route.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_static_route.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_static_route.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_sys_daemon_log_tmm.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_sys_daemon_log_tmm.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_sys_daemon_log_tmm.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_sys_db.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_sys_db.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_sys_db.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_sys_global.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_sys_global.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_sys_global.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_timer_policy.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_timer_policy.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_timer_policy.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_traffic_selector.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_traffic_selector.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_traffic_selector.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_trunk.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_trunk.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_trunk.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_tunnel.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_tunnel.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_tunnel.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_ucs.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_ucs.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_ucs.py validate-modules:E335
lib/ansible/modules/network/f5/bigip_ucs.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_ucs_fetch.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_ucs_fetch.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_ucs_fetch.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_user.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_user.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_user.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_vcmp_guest.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_vcmp_guest.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_vcmp_guest.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_virtual_address.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_virtual_address.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_virtual_address.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_virtual_server.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_virtual_server.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_virtual_server.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_vlan.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_vlan.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_vlan.py validate-modules:E326
lib/ansible/modules/network/f5/bigip_vlan.py validate-modules:E338
lib/ansible/modules/network/f5/bigip_wait.py validate-modules:E322
lib/ansible/modules/network/f5/bigip_wait.py validate-modules:E324
lib/ansible/modules/network/f5/bigip_wait.py validate-modules:E338
lib/ansible/modules/network/f5/bigiq_application_fasthttp.py validate-modules:E322
lib/ansible/modules/network/f5/bigiq_application_fasthttp.py validate-modules:E324
lib/ansible/modules/network/f5/bigiq_application_fasthttp.py validate-modules:E338
lib/ansible/modules/network/f5/bigiq_application_fastl4_tcp.py validate-modules:E322
lib/ansible/modules/network/f5/bigiq_application_fastl4_tcp.py validate-modules:E324
lib/ansible/modules/network/f5/bigiq_application_fastl4_tcp.py validate-modules:E338
lib/ansible/modules/network/f5/bigiq_application_fastl4_udp.py validate-modules:E322
lib/ansible/modules/network/f5/bigiq_application_fastl4_udp.py validate-modules:E324
lib/ansible/modules/network/f5/bigiq_application_fastl4_udp.py validate-modules:E337
lib/ansible/modules/network/f5/bigiq_application_fastl4_udp.py validate-modules:E338
lib/ansible/modules/network/f5/bigiq_application_http.py validate-modules:E322
lib/ansible/modules/network/f5/bigiq_application_http.py validate-modules:E324
lib/ansible/modules/network/f5/bigiq_application_http.py validate-modules:E338
lib/ansible/modules/network/f5/bigiq_application_https_offload.py validate-modules:E322
lib/ansible/modules/network/f5/bigiq_application_https_offload.py validate-modules:E324
lib/ansible/modules/network/f5/bigiq_application_https_offload.py validate-modules:E338
lib/ansible/modules/network/f5/bigiq_application_https_waf.py validate-modules:E322
lib/ansible/modules/network/f5/bigiq_application_https_waf.py validate-modules:E324
lib/ansible/modules/network/f5/bigiq_application_https_waf.py validate-modules:E338
lib/ansible/modules/network/f5/bigiq_device_discovery.py validate-modules:E322
lib/ansible/modules/network/f5/bigiq_device_discovery.py validate-modules:E324
lib/ansible/modules/network/f5/bigiq_device_discovery.py validate-modules:E337
lib/ansible/modules/network/f5/bigiq_device_discovery.py validate-modules:E338
lib/ansible/modules/network/f5/bigiq_device_info.py validate-modules:E322
lib/ansible/modules/network/f5/bigiq_device_info.py validate-modules:E324
lib/ansible/modules/network/f5/bigiq_device_info.py validate-modules:E338
lib/ansible/modules/network/f5/bigiq_regkey_license.py validate-modules:E322
lib/ansible/modules/network/f5/bigiq_regkey_license.py validate-modules:E324
lib/ansible/modules/network/f5/bigiq_regkey_license.py validate-modules:E338
lib/ansible/modules/network/f5/bigiq_regkey_license_assignment.py validate-modules:E322
lib/ansible/modules/network/f5/bigiq_regkey_license_assignment.py validate-modules:E324
lib/ansible/modules/network/f5/bigiq_regkey_license_assignment.py validate-modules:E338
lib/ansible/modules/network/f5/bigiq_regkey_pool.py validate-modules:E322
lib/ansible/modules/network/f5/bigiq_regkey_pool.py validate-modules:E324
lib/ansible/modules/network/f5/bigiq_regkey_pool.py validate-modules:E338
lib/ansible/modules/network/f5/bigiq_utility_license.py validate-modules:E322
lib/ansible/modules/network/f5/bigiq_utility_license.py validate-modules:E324
lib/ansible/modules/network/f5/bigiq_utility_license.py validate-modules:E338
lib/ansible/modules/network/f5/bigiq_utility_license_assignment.py validate-modules:E322
lib/ansible/modules/network/f5/bigiq_utility_license_assignment.py validate-modules:E324
lib/ansible/modules/network/f5/bigiq_utility_license_assignment.py validate-modules:E338
lib/ansible/modules/network/fortimanager/fmgr_device.py validate-modules:E337
lib/ansible/modules/network/fortimanager/fmgr_device_config.py validate-modules:E337
lib/ansible/modules/network/fortimanager/fmgr_device_group.py validate-modules:E337
lib/ansible/modules/network/fortimanager/fmgr_device_provision_template.py validate-modules:E337
lib/ansible/modules/network/fortimanager/fmgr_fwobj_address.py validate-modules:E337
lib/ansible/modules/network/fortimanager/fmgr_fwobj_ippool.py validate-modules:E337
lib/ansible/modules/network/fortimanager/fmgr_fwobj_ippool6.py validate-modules:E337
lib/ansible/modules/network/fortimanager/fmgr_fwobj_service.py validate-modules:E337
lib/ansible/modules/network/fortimanager/fmgr_fwobj_vip.py validate-modules:E337
lib/ansible/modules/network/fortimanager/fmgr_fwpol_ipv4.py validate-modules:E337
lib/ansible/modules/network/fortimanager/fmgr_fwpol_package.py validate-modules:E337
lib/ansible/modules/network/fortimanager/fmgr_ha.py validate-modules:E337
lib/ansible/modules/network/fortimanager/fmgr_provisioning.py validate-modules:E337
lib/ansible/modules/network/fortimanager/fmgr_provisioning.py validate-modules:E338
lib/ansible/modules/network/fortimanager/fmgr_query.py validate-modules:E337
lib/ansible/modules/network/fortimanager/fmgr_script.py validate-modules:E324
lib/ansible/modules/network/fortimanager/fmgr_script.py validate-modules:E337
lib/ansible/modules/network/fortimanager/fmgr_secprof_appctrl.py validate-modules:E337
lib/ansible/modules/network/fortimanager/fmgr_secprof_av.py validate-modules:E337
lib/ansible/modules/network/fortimanager/fmgr_secprof_dns.py validate-modules:E337
lib/ansible/modules/network/fortimanager/fmgr_secprof_ips.py validate-modules:E337
lib/ansible/modules/network/fortimanager/fmgr_secprof_profile_group.py validate-modules:E337
lib/ansible/modules/network/fortimanager/fmgr_secprof_proxy.py validate-modules:E337
lib/ansible/modules/network/fortimanager/fmgr_secprof_spam.py validate-modules:E337
lib/ansible/modules/network/fortimanager/fmgr_secprof_ssl_ssh.py validate-modules:E337
lib/ansible/modules/network/fortimanager/fmgr_secprof_voip.py validate-modules:E337
lib/ansible/modules/network/fortimanager/fmgr_secprof_waf.py validate-modules:E337
lib/ansible/modules/network/fortimanager/fmgr_secprof_wanopt.py validate-modules:E337
lib/ansible/modules/network/fortimanager/fmgr_secprof_web.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_address.py validate-modules:E324
lib/ansible/modules/network/fortios/fortios_address.py validate-modules:E338
lib/ansible/modules/network/fortios/fortios_antivirus_heuristic.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_antivirus_profile.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_antivirus_profile.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_antivirus_quarantine.py validate-modules:E326
lib/ansible/modules/network/fortios/fortios_antivirus_quarantine.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_antivirus_quarantine.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_antivirus_settings.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_antivirus_settings.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_application_custom.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_application_group.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_application_list.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_application_list.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_application_name.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_application_name.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_application_rule_settings.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_authentication_rule.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_authentication_rule.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_authentication_scheme.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_authentication_scheme.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_authentication_setting.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_authentication_setting.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_config.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_dlp_filepattern.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_dlp_filepattern.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_dlp_fp_doc_source.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_dlp_fp_doc_source.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_dlp_fp_sensitivity.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_dlp_sensor.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_dlp_sensor.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_dlp_settings.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_dlp_settings.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_dnsfilter_domain_filter.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_dnsfilter_profile.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_dnsfilter_profile.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_endpoint_control_client.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_endpoint_control_client.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_endpoint_control_forticlient_ems.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_endpoint_control_forticlient_ems.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_endpoint_control_forticlient_registration_sync.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_endpoint_control_forticlient_registration_sync.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_endpoint_control_profile.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_endpoint_control_profile.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_endpoint_control_settings.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_endpoint_control_settings.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_extender_controller_extender.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_extender_controller_extender.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_DoS_policy.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_DoS_policy.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_DoS_policy6.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_DoS_policy6.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_address.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_address.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_address6.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_address6.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_address6_template.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_address6_template.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_addrgrp.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_addrgrp.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_addrgrp6.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_auth_portal.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_auth_portal.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_central_snat_map.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_central_snat_map.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_dnstranslation.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_identity_based_route.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_interface_policy.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_interface_policy.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_interface_policy6.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_interface_policy6.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_internet_service.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_internet_service.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_internet_service_custom.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_internet_service_custom.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_internet_service_group.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_ip_translation.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_ip_translation.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_ipmacbinding_setting.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_ipmacbinding_table.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_ipmacbinding_table.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_ippool.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_ippool.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_ippool6.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_ipv6_eh_filter.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_ipv6_eh_filter.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_ldb_monitor.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_ldb_monitor.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_local_in_policy.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_local_in_policy.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_local_in_policy6.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_multicast_address.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_multicast_address.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_multicast_address6.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_multicast_policy.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_multicast_policy.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_multicast_policy6.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_multicast_policy6.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_policy.py validate-modules:E326
lib/ansible/modules/network/fortios/fortios_firewall_policy.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_policy.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_policy46.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_policy46.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_policy6.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_policy6.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_policy64.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_policy64.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_profile_group.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_profile_group.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_profile_protocol_options.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_profile_protocol_options.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_proxy_address.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_proxy_address.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_proxy_addrgrp.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_proxy_policy.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_proxy_policy.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_schedule_group.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_schedule_onetime.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_schedule_onetime.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_schedule_recurring.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_service_category.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_service_custom.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_service_custom.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_service_group.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_shaper_per_ip_shaper.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_shaper_per_ip_shaper.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_shaper_traffic_shaper.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_shaper_traffic_shaper.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_shaping_policy.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_shaping_policy.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_shaping_profile.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_shaping_profile.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_sniffer.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_sniffer.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_ssh_host_key.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_ssh_host_key.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_ssh_local_ca.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_ssh_local_ca.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_ssh_local_key.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_ssh_local_key.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_ssh_setting.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_ssh_setting.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_ssl_server.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_ssl_server.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_ssl_setting.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_ssl_setting.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_ssl_ssh_profile.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_ssl_ssh_profile.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_ttl_policy.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_vip.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_vip.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_vip46.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_vip46.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_vip6.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_vip6.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_vip64.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_vip64.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_vipgrp.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_vipgrp46.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_vipgrp6.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_vipgrp64.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_wildcard_fqdn_custom.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_firewall_wildcard_fqdn_custom.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_firewall_wildcard_fqdn_group.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_ftp_proxy_explicit.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_ftp_proxy_explicit.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_icap_profile.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_icap_profile.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_icap_server.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_icap_server.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_ips_custom.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_ips_custom.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_ips_decoder.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_ips_global.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_ips_global.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_ips_rule.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_ips_rule.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_ips_rule_settings.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_ips_sensor.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_ips_sensor.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_ips_settings.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_ips_settings.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_ipv4_policy.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_ipv4_policy.py validate-modules:E338
lib/ansible/modules/network/fortios/fortios_log_custom_field.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_log_disk_filter.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_log_disk_filter.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_log_disk_setting.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_log_disk_setting.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_log_eventfilter.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_log_eventfilter.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_log_fortianalyzer2_filter.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_log_fortianalyzer2_filter.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_log_fortianalyzer2_setting.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_log_fortianalyzer2_setting.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_log_fortianalyzer3_filter.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_log_fortianalyzer3_filter.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_log_fortianalyzer3_setting.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_log_fortianalyzer3_setting.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_log_fortianalyzer_filter.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_log_fortianalyzer_filter.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_log_fortianalyzer_override_filter.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_log_fortianalyzer_override_filter.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_log_fortianalyzer_override_setting.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_log_fortianalyzer_override_setting.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_log_fortianalyzer_setting.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_log_fortianalyzer_setting.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_log_fortiguard_filter.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_log_fortiguard_filter.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_log_fortiguard_override_filter.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_log_fortiguard_override_filter.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_log_fortiguard_override_setting.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_log_fortiguard_override_setting.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_log_fortiguard_setting.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_log_fortiguard_setting.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_log_gui_display.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_log_gui_display.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_log_memory_filter.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_log_memory_filter.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_log_memory_global_setting.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_log_memory_global_setting.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_log_memory_setting.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_log_null_device_filter.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_log_null_device_filter.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_log_null_device_setting.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_log_setting.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_log_setting.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_log_syslogd2_filter.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_log_syslogd2_filter.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_log_syslogd2_setting.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_log_syslogd2_setting.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_log_syslogd3_filter.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_log_syslogd3_filter.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_log_syslogd3_setting.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_log_syslogd3_setting.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_log_syslogd4_filter.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_log_syslogd4_filter.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_log_syslogd4_setting.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_log_syslogd4_setting.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_log_syslogd_filter.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_log_syslogd_filter.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_log_syslogd_override_filter.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_log_syslogd_override_filter.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_log_syslogd_override_setting.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_log_syslogd_override_setting.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_log_syslogd_setting.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_log_syslogd_setting.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_log_threat_weight.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_log_threat_weight.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_log_webtrends_filter.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_log_webtrends_filter.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_log_webtrends_setting.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_report_chart.py validate-modules:E326
lib/ansible/modules/network/fortios/fortios_report_chart.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_report_chart.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_report_dataset.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_report_dataset.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_report_layout.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_report_layout.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_report_setting.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_report_setting.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_report_style.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_report_style.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_report_theme.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_report_theme.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_router_access_list.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_router_access_list.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_router_auth_path.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_router_bfd.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_router_bfd6.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_router_bfd6.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_router_bgp.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_router_bgp.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_router_multicast.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_router_multicast.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_router_multicast6.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_router_multicast6.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_router_multicast_flow.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_router_multicast_flow.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_router_ospf.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_router_ospf.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_router_ospf6.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_router_ospf6.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_router_policy.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_router_policy.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_router_policy6.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_router_policy6.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_router_prefix_list.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_router_rip.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_router_rip.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_router_setting.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_router_setting.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_router_static.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_router_static.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_spamfilter_profile.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_spamfilter_profile.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_ssh_filter_profile.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_ssh_filter_profile.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_switch_controller_global.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_switch_controller_global.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_switch_controller_lldp_profile.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_switch_controller_lldp_profile.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_switch_controller_lldp_settings.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_switch_controller_lldp_settings.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_switch_controller_mac_sync_settings.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_switch_controller_mac_sync_settings.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_switch_controller_managed_switch.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_switch_controller_managed_switch.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_switch_controller_network_monitor_settings.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_switch_controller_network_monitor_settings.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_system_accprofile.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_system_accprofile.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_system_admin.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_system_admin.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_system_api_user.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_system_api_user.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_system_central_management.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_system_central_management.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_system_dhcp_server.py validate-modules:E326
lib/ansible/modules/network/fortios/fortios_system_dhcp_server.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_system_dhcp_server.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_system_dns.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_system_dns.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_system_global.py validate-modules:E326
lib/ansible/modules/network/fortios/fortios_system_global.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_system_global.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_system_interface.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_system_interface.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_system_sdn_connector.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_system_sdn_connector.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_system_settings.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_system_settings.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_system_vdom.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_system_vdom.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_system_virtual_wan_link.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_system_virtual_wan_link.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_user_adgrp.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_user_adgrp.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_user_device.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_user_radius.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_user_radius.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_user_tacacsplus.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_user_tacacsplus.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_voip_profile.py validate-modules:E326
lib/ansible/modules/network/fortios/fortios_voip_profile.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_voip_profile.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_vpn_ipsec_concentrator.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_vpn_ipsec_concentrator.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_vpn_ipsec_forticlient.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_vpn_ipsec_manualkey.py validate-modules:E326
lib/ansible/modules/network/fortios/fortios_vpn_ipsec_manualkey.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_vpn_ipsec_manualkey.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_vpn_ipsec_manualkey_interface.py validate-modules:E326
lib/ansible/modules/network/fortios/fortios_vpn_ipsec_manualkey_interface.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_vpn_ipsec_manualkey_interface.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_vpn_ipsec_phase1.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_vpn_ipsec_phase1.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_vpn_ipsec_phase1_interface.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_vpn_ipsec_phase1_interface.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_vpn_ipsec_phase2.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_vpn_ipsec_phase2.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_vpn_ipsec_phase2_interface.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_vpn_ipsec_phase2_interface.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_vpn_ssl_settings.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_vpn_ssl_settings.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_vpn_ssl_web_portal.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_vpn_ssl_web_portal.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_waf_profile.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_waf_profile.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_wanopt_profile.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_wanopt_profile.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_wanopt_settings.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_wanopt_settings.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_web_proxy_explicit.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_web_proxy_explicit.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_web_proxy_global.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_web_proxy_global.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_web_proxy_profile.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_web_proxy_profile.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_webfilter.py validate-modules:E326
lib/ansible/modules/network/fortios/fortios_webfilter.py validate-modules:E328
lib/ansible/modules/network/fortios/fortios_webfilter.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_webfilter.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_webfilter_content.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_webfilter_content.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_webfilter_content_header.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_webfilter_fortiguard.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_webfilter_fortiguard.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_webfilter_ftgd_local_cat.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_webfilter_ftgd_local_rating.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_webfilter_ips_urlfilter_cache_setting.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_webfilter_ips_urlfilter_cache_setting.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_webfilter_ips_urlfilter_setting.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_webfilter_ips_urlfilter_setting.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_webfilter_ips_urlfilter_setting6.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_webfilter_ips_urlfilter_setting6.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_webfilter_override.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_webfilter_override.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_webfilter_profile.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_webfilter_profile.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_webfilter_search_engine.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_webfilter_search_engine.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_webfilter_urlfilter.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_webfilter_urlfilter.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_wireless_controller_global.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_wireless_controller_global.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_wireless_controller_setting.py validate-modules:E326
lib/ansible/modules/network/fortios/fortios_wireless_controller_setting.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_wireless_controller_setting.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_wireless_controller_utm_profile.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_wireless_controller_utm_profile.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_wireless_controller_vap.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_wireless_controller_vap.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_wireless_controller_wids_profile.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_wireless_controller_wids_profile.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_wireless_controller_wtp.py validate-modules:E326
lib/ansible/modules/network/fortios/fortios_wireless_controller_wtp.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_wireless_controller_wtp.py validate-modules:E337
lib/ansible/modules/network/fortios/fortios_wireless_controller_wtp_profile.py validate-modules:E326
lib/ansible/modules/network/fortios/fortios_wireless_controller_wtp_profile.py validate-modules:E336
lib/ansible/modules/network/fortios/fortios_wireless_controller_wtp_profile.py validate-modules:E337
lib/ansible/modules/network/frr/frr_bgp.py validate-modules:E322
lib/ansible/modules/network/frr/frr_bgp.py validate-modules:E323
lib/ansible/modules/network/frr/frr_bgp.py validate-modules:E337
lib/ansible/modules/network/frr/frr_bgp.py validate-modules:E338
lib/ansible/modules/network/frr/frr_facts.py validate-modules:E337
lib/ansible/modules/network/illumos/dladm_etherstub.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/dladm_etherstub.py validate-modules:E338
lib/ansible/modules/network/illumos/dladm_iptun.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/dladm_iptun.py validate-modules:E337
lib/ansible/modules/network/illumos/dladm_iptun.py validate-modules:E338
lib/ansible/modules/network/illumos/dladm_linkprop.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/dladm_linkprop.py validate-modules:E317
lib/ansible/modules/network/illumos/dladm_linkprop.py validate-modules:E337
lib/ansible/modules/network/illumos/dladm_linkprop.py validate-modules:E338
lib/ansible/modules/network/illumos/dladm_vlan.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/dladm_vlan.py validate-modules:E324
lib/ansible/modules/network/illumos/dladm_vlan.py validate-modules:E337
lib/ansible/modules/network/illumos/dladm_vlan.py validate-modules:E338
lib/ansible/modules/network/illumos/dladm_vnic.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/dladm_vnic.py validate-modules:E324
lib/ansible/modules/network/illumos/dladm_vnic.py validate-modules:E338
lib/ansible/modules/network/illumos/flowadm.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/flowadm.py validate-modules:E326
lib/ansible/modules/network/illumos/flowadm.py validate-modules:E338
lib/ansible/modules/network/illumos/ipadm_addr.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/ipadm_addr.py validate-modules:E337
lib/ansible/modules/network/illumos/ipadm_addr.py validate-modules:E338
lib/ansible/modules/network/illumos/ipadm_addrprop.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/ipadm_addrprop.py validate-modules:E317
lib/ansible/modules/network/illumos/ipadm_addrprop.py validate-modules:E338
lib/ansible/modules/network/illumos/ipadm_if.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/ipadm_if.py validate-modules:E338
lib/ansible/modules/network/illumos/ipadm_ifprop.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/ipadm_ifprop.py validate-modules:E317
lib/ansible/modules/network/illumos/ipadm_ifprop.py validate-modules:E326
lib/ansible/modules/network/illumos/ipadm_ifprop.py validate-modules:E338
lib/ansible/modules/network/illumos/ipadm_prop.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/ipadm_prop.py validate-modules:E326
lib/ansible/modules/network/illumos/ipadm_prop.py validate-modules:E338
lib/ansible/modules/network/ingate/ig_config.py validate-modules:E337
lib/ansible/modules/network/ingate/ig_config.py validate-modules:E338
lib/ansible/modules/network/ingate/ig_unit_information.py validate-modules:E337
lib/ansible/modules/network/ios/ios_banner.py future-import-boilerplate
lib/ansible/modules/network/ios/ios_banner.py metaclass-boilerplate
lib/ansible/modules/network/ios/ios_banner.py validate-modules:E324
lib/ansible/modules/network/ios/ios_banner.py validate-modules:E338
lib/ansible/modules/network/ios/ios_bgp.py validate-modules:E323
lib/ansible/modules/network/ios/ios_bgp.py validate-modules:E337
lib/ansible/modules/network/ios/ios_bgp.py validate-modules:E338
lib/ansible/modules/network/ios/ios_command.py future-import-boilerplate
lib/ansible/modules/network/ios/ios_command.py metaclass-boilerplate
lib/ansible/modules/network/ios/ios_command.py validate-modules:E324
lib/ansible/modules/network/ios/ios_command.py validate-modules:E337
lib/ansible/modules/network/ios/ios_command.py validate-modules:E338
lib/ansible/modules/network/ios/ios_config.py future-import-boilerplate
lib/ansible/modules/network/ios/ios_config.py metaclass-boilerplate
lib/ansible/modules/network/ios/ios_config.py validate-modules:E324
lib/ansible/modules/network/ios/ios_config.py validate-modules:E337
lib/ansible/modules/network/ios/ios_config.py validate-modules:E338
lib/ansible/modules/network/ios/ios_facts.py future-import-boilerplate
lib/ansible/modules/network/ios/ios_facts.py metaclass-boilerplate
lib/ansible/modules/network/ios/ios_facts.py validate-modules:E324
lib/ansible/modules/network/ios/ios_facts.py validate-modules:E337
lib/ansible/modules/network/ios/_ios_interface.py validate-modules:E322
lib/ansible/modules/network/ios/_ios_interface.py validate-modules:E324
lib/ansible/modules/network/ios/_ios_interface.py validate-modules:E326
lib/ansible/modules/network/ios/_ios_interface.py validate-modules:E337
lib/ansible/modules/network/ios/_ios_interface.py validate-modules:E338
lib/ansible/modules/network/ios/_ios_interface.py validate-modules:E340
lib/ansible/modules/network/ios/ios_l2_interface.py validate-modules:E322
lib/ansible/modules/network/ios/ios_l2_interface.py validate-modules:E324
lib/ansible/modules/network/ios/ios_l2_interface.py validate-modules:E326
lib/ansible/modules/network/ios/ios_l2_interface.py validate-modules:E337
lib/ansible/modules/network/ios/ios_l2_interface.py validate-modules:E338
lib/ansible/modules/network/ios/ios_l2_interface.py validate-modules:E340
lib/ansible/modules/network/ios/ios_l3_interface.py validate-modules:E322
lib/ansible/modules/network/ios/ios_l3_interface.py validate-modules:E324
lib/ansible/modules/network/ios/ios_l3_interface.py validate-modules:E326
lib/ansible/modules/network/ios/ios_l3_interface.py validate-modules:E337
lib/ansible/modules/network/ios/ios_l3_interface.py validate-modules:E338
lib/ansible/modules/network/ios/ios_l3_interface.py validate-modules:E340
lib/ansible/modules/network/ios/ios_linkagg.py validate-modules:E322
lib/ansible/modules/network/ios/ios_linkagg.py validate-modules:E324
lib/ansible/modules/network/ios/ios_linkagg.py validate-modules:E326
lib/ansible/modules/network/ios/ios_linkagg.py validate-modules:E337
lib/ansible/modules/network/ios/ios_linkagg.py validate-modules:E338
lib/ansible/modules/network/ios/ios_linkagg.py validate-modules:E340
lib/ansible/modules/network/ios/ios_lldp.py validate-modules:E324
lib/ansible/modules/network/ios/ios_lldp.py validate-modules:E326
lib/ansible/modules/network/ios/ios_lldp.py validate-modules:E338
lib/ansible/modules/network/ios/ios_logging.py future-import-boilerplate
lib/ansible/modules/network/ios/ios_logging.py metaclass-boilerplate
lib/ansible/modules/network/ios/ios_logging.py validate-modules:E322
lib/ansible/modules/network/ios/ios_logging.py validate-modules:E324
lib/ansible/modules/network/ios/ios_logging.py validate-modules:E326
lib/ansible/modules/network/ios/ios_logging.py validate-modules:E337
lib/ansible/modules/network/ios/ios_logging.py validate-modules:E338
lib/ansible/modules/network/ios/ios_logging.py validate-modules:E340
lib/ansible/modules/network/ios/ios_ntp.py future-import-boilerplate
lib/ansible/modules/network/ios/ios_ntp.py metaclass-boilerplate
lib/ansible/modules/network/ios/ios_ntp.py validate-modules:E324
lib/ansible/modules/network/ios/ios_ntp.py validate-modules:E338
lib/ansible/modules/network/ios/ios_ping.py validate-modules:E324
lib/ansible/modules/network/ios/ios_ping.py validate-modules:E337
lib/ansible/modules/network/ios/ios_static_route.py future-import-boilerplate
lib/ansible/modules/network/ios/ios_static_route.py metaclass-boilerplate
lib/ansible/modules/network/ios/ios_static_route.py validate-modules:E322
lib/ansible/modules/network/ios/ios_static_route.py validate-modules:E324
lib/ansible/modules/network/ios/ios_static_route.py validate-modules:E326
lib/ansible/modules/network/ios/ios_static_route.py validate-modules:E337
lib/ansible/modules/network/ios/ios_static_route.py validate-modules:E338
lib/ansible/modules/network/ios/ios_static_route.py validate-modules:E340
lib/ansible/modules/network/ios/ios_system.py future-import-boilerplate
lib/ansible/modules/network/ios/ios_system.py metaclass-boilerplate
lib/ansible/modules/network/ios/ios_system.py validate-modules:E324
lib/ansible/modules/network/ios/ios_system.py validate-modules:E337
lib/ansible/modules/network/ios/ios_system.py validate-modules:E338
lib/ansible/modules/network/ios/ios_user.py future-import-boilerplate
lib/ansible/modules/network/ios/ios_user.py metaclass-boilerplate
lib/ansible/modules/network/ios/ios_user.py validate-modules:E322
lib/ansible/modules/network/ios/ios_user.py validate-modules:E324
lib/ansible/modules/network/ios/ios_user.py validate-modules:E326
lib/ansible/modules/network/ios/ios_user.py validate-modules:E337
lib/ansible/modules/network/ios/ios_user.py validate-modules:E338
lib/ansible/modules/network/ios/ios_user.py validate-modules:E340
lib/ansible/modules/network/ios/ios_vlan.py validate-modules:E322
lib/ansible/modules/network/ios/ios_vlan.py validate-modules:E324
lib/ansible/modules/network/ios/ios_vlan.py validate-modules:E326
lib/ansible/modules/network/ios/ios_vlan.py validate-modules:E337
lib/ansible/modules/network/ios/ios_vlan.py validate-modules:E338
lib/ansible/modules/network/ios/ios_vlan.py validate-modules:E340
lib/ansible/modules/network/ios/ios_vrf.py future-import-boilerplate
lib/ansible/modules/network/ios/ios_vrf.py metaclass-boilerplate
lib/ansible/modules/network/ios/ios_vrf.py validate-modules:E324
lib/ansible/modules/network/ios/ios_vrf.py validate-modules:E337
lib/ansible/modules/network/ios/ios_vrf.py validate-modules:E338
lib/ansible/modules/network/iosxr/iosxr_banner.py validate-modules:E322
lib/ansible/modules/network/iosxr/iosxr_banner.py validate-modules:E324
lib/ansible/modules/network/iosxr/iosxr_banner.py validate-modules:E326
lib/ansible/modules/network/iosxr/iosxr_banner.py validate-modules:E337
lib/ansible/modules/network/iosxr/iosxr_banner.py validate-modules:E338
lib/ansible/modules/network/iosxr/iosxr_bgp.py validate-modules:E322
lib/ansible/modules/network/iosxr/iosxr_bgp.py validate-modules:E323
lib/ansible/modules/network/iosxr/iosxr_bgp.py validate-modules:E337
lib/ansible/modules/network/iosxr/iosxr_bgp.py validate-modules:E338
lib/ansible/modules/network/iosxr/iosxr_command.py validate-modules:E322
lib/ansible/modules/network/iosxr/iosxr_command.py validate-modules:E324
lib/ansible/modules/network/iosxr/iosxr_command.py validate-modules:E326
lib/ansible/modules/network/iosxr/iosxr_command.py validate-modules:E337
lib/ansible/modules/network/iosxr/iosxr_command.py validate-modules:E338
lib/ansible/modules/network/iosxr/iosxr_config.py validate-modules:E322
lib/ansible/modules/network/iosxr/iosxr_config.py validate-modules:E324
lib/ansible/modules/network/iosxr/iosxr_config.py validate-modules:E326
lib/ansible/modules/network/iosxr/iosxr_config.py validate-modules:E337
lib/ansible/modules/network/iosxr/iosxr_config.py validate-modules:E338
lib/ansible/modules/network/iosxr/iosxr_facts.py validate-modules:E322
lib/ansible/modules/network/iosxr/iosxr_facts.py validate-modules:E324
lib/ansible/modules/network/iosxr/iosxr_facts.py validate-modules:E326
lib/ansible/modules/network/iosxr/iosxr_facts.py validate-modules:E337
lib/ansible/modules/network/iosxr/iosxr_interface.py validate-modules:E322
lib/ansible/modules/network/iosxr/iosxr_interface.py validate-modules:E324
lib/ansible/modules/network/iosxr/iosxr_interface.py validate-modules:E326
lib/ansible/modules/network/iosxr/iosxr_interface.py validate-modules:E337
lib/ansible/modules/network/iosxr/iosxr_interface.py validate-modules:E338
lib/ansible/modules/network/iosxr/iosxr_interface.py validate-modules:E340
lib/ansible/modules/network/iosxr/iosxr_logging.py validate-modules:E322
lib/ansible/modules/network/iosxr/iosxr_logging.py validate-modules:E324
lib/ansible/modules/network/iosxr/iosxr_logging.py validate-modules:E326
lib/ansible/modules/network/iosxr/iosxr_logging.py validate-modules:E337
lib/ansible/modules/network/iosxr/iosxr_logging.py validate-modules:E338
lib/ansible/modules/network/iosxr/iosxr_logging.py validate-modules:E340
lib/ansible/modules/network/iosxr/iosxr_netconf.py validate-modules:E322
lib/ansible/modules/network/iosxr/iosxr_netconf.py validate-modules:E324
lib/ansible/modules/network/iosxr/iosxr_netconf.py validate-modules:E326
lib/ansible/modules/network/iosxr/iosxr_netconf.py validate-modules:E337
lib/ansible/modules/network/iosxr/iosxr_netconf.py validate-modules:E338
lib/ansible/modules/network/iosxr/iosxr_system.py validate-modules:E322
lib/ansible/modules/network/iosxr/iosxr_system.py validate-modules:E324
lib/ansible/modules/network/iosxr/iosxr_system.py validate-modules:E326
lib/ansible/modules/network/iosxr/iosxr_system.py validate-modules:E337
lib/ansible/modules/network/iosxr/iosxr_system.py validate-modules:E338
lib/ansible/modules/network/iosxr/iosxr_user.py validate-modules:E322
lib/ansible/modules/network/iosxr/iosxr_user.py validate-modules:E324
lib/ansible/modules/network/iosxr/iosxr_user.py validate-modules:E326
lib/ansible/modules/network/iosxr/iosxr_user.py validate-modules:E337
lib/ansible/modules/network/iosxr/iosxr_user.py validate-modules:E338
lib/ansible/modules/network/iosxr/iosxr_user.py validate-modules:E340
lib/ansible/modules/network/ironware/ironware_command.py validate-modules:E323
lib/ansible/modules/network/ironware/ironware_command.py validate-modules:E324
lib/ansible/modules/network/ironware/ironware_command.py validate-modules:E337
lib/ansible/modules/network/ironware/ironware_command.py validate-modules:E338
lib/ansible/modules/network/ironware/ironware_config.py validate-modules:E323
lib/ansible/modules/network/ironware/ironware_config.py validate-modules:E324
lib/ansible/modules/network/ironware/ironware_config.py validate-modules:E337
lib/ansible/modules/network/ironware/ironware_config.py validate-modules:E338
lib/ansible/modules/network/ironware/ironware_facts.py validate-modules:E323
lib/ansible/modules/network/ironware/ironware_facts.py validate-modules:E324
lib/ansible/modules/network/ironware/ironware_facts.py validate-modules:E337
lib/ansible/modules/network/itential/iap_token.py validate-modules:E337
lib/ansible/modules/network/junos/_junos_interface.py validate-modules:E322
lib/ansible/modules/network/junos/_junos_interface.py validate-modules:E324
lib/ansible/modules/network/junos/_junos_interface.py validate-modules:E326
lib/ansible/modules/network/junos/_junos_interface.py validate-modules:E337
lib/ansible/modules/network/junos/_junos_interface.py validate-modules:E338
lib/ansible/modules/network/junos/_junos_interface.py validate-modules:E340
lib/ansible/modules/network/junos/_junos_linkagg.py validate-modules:E322
lib/ansible/modules/network/junos/_junos_linkagg.py validate-modules:E324
lib/ansible/modules/network/junos/_junos_linkagg.py validate-modules:E326
lib/ansible/modules/network/junos/_junos_linkagg.py validate-modules:E337
lib/ansible/modules/network/junos/_junos_linkagg.py validate-modules:E338
lib/ansible/modules/network/junos/_junos_linkagg.py validate-modules:E340
lib/ansible/modules/network/junos/junos_banner.py validate-modules:E322
lib/ansible/modules/network/junos/junos_banner.py validate-modules:E324
lib/ansible/modules/network/junos/junos_banner.py validate-modules:E326
lib/ansible/modules/network/junos/junos_banner.py validate-modules:E338
lib/ansible/modules/network/junos/junos_command.py validate-modules:E322
lib/ansible/modules/network/junos/junos_command.py validate-modules:E324
lib/ansible/modules/network/junos/junos_command.py validate-modules:E326
lib/ansible/modules/network/junos/junos_command.py validate-modules:E337
lib/ansible/modules/network/junos/junos_command.py validate-modules:E338
lib/ansible/modules/network/junos/junos_config.py validate-modules:E322
lib/ansible/modules/network/junos/junos_config.py validate-modules:E324
lib/ansible/modules/network/junos/junos_config.py validate-modules:E326
lib/ansible/modules/network/junos/junos_config.py validate-modules:E337
lib/ansible/modules/network/junos/junos_config.py validate-modules:E338
lib/ansible/modules/network/junos/junos_facts.py validate-modules:E322
lib/ansible/modules/network/junos/junos_facts.py validate-modules:E324
lib/ansible/modules/network/junos/junos_facts.py validate-modules:E326
lib/ansible/modules/network/junos/junos_facts.py validate-modules:E337
lib/ansible/modules/network/junos/junos_facts.py validate-modules:E338
lib/ansible/modules/network/junos/junos_interfaces.py validate-modules:E325
lib/ansible/modules/network/junos/junos_l2_interface.py validate-modules:E322
lib/ansible/modules/network/junos/junos_l2_interface.py validate-modules:E324
lib/ansible/modules/network/junos/junos_l2_interface.py validate-modules:E326
lib/ansible/modules/network/junos/junos_l2_interface.py validate-modules:E337
lib/ansible/modules/network/junos/junos_l2_interface.py validate-modules:E338
lib/ansible/modules/network/junos/junos_l2_interface.py validate-modules:E340
lib/ansible/modules/network/junos/_junos_l3_interface.py validate-modules:E322
lib/ansible/modules/network/junos/_junos_l3_interface.py validate-modules:E324
lib/ansible/modules/network/junos/_junos_l3_interface.py validate-modules:E326
lib/ansible/modules/network/junos/_junos_l3_interface.py validate-modules:E337
lib/ansible/modules/network/junos/_junos_l3_interface.py validate-modules:E338
lib/ansible/modules/network/junos/_junos_l3_interface.py validate-modules:E340
lib/ansible/modules/network/junos/junos_lag_interfaces.py validate-modules:E338
lib/ansible/modules/network/junos/junos_lldp.py validate-modules:E322
lib/ansible/modules/network/junos/junos_lldp.py validate-modules:E324
lib/ansible/modules/network/junos/junos_lldp.py validate-modules:E326
lib/ansible/modules/network/junos/junos_lldp.py validate-modules:E337
lib/ansible/modules/network/junos/junos_lldp.py validate-modules:E338
lib/ansible/modules/network/junos/junos_lldp_interface.py validate-modules:E322
lib/ansible/modules/network/junos/junos_lldp_interface.py validate-modules:E324
lib/ansible/modules/network/junos/junos_lldp_interface.py validate-modules:E326
lib/ansible/modules/network/junos/junos_lldp_interface.py validate-modules:E338
lib/ansible/modules/network/junos/junos_logging.py validate-modules:E322
lib/ansible/modules/network/junos/junos_logging.py validate-modules:E324
lib/ansible/modules/network/junos/junos_logging.py validate-modules:E326
lib/ansible/modules/network/junos/junos_logging.py validate-modules:E337
lib/ansible/modules/network/junos/junos_logging.py validate-modules:E338
lib/ansible/modules/network/junos/junos_logging.py validate-modules:E340
lib/ansible/modules/network/junos/junos_netconf.py validate-modules:E322
lib/ansible/modules/network/junos/junos_netconf.py validate-modules:E324
lib/ansible/modules/network/junos/junos_netconf.py validate-modules:E326
lib/ansible/modules/network/junos/junos_netconf.py validate-modules:E337
lib/ansible/modules/network/junos/junos_netconf.py validate-modules:E338
lib/ansible/modules/network/junos/junos_package.py validate-modules:E322
lib/ansible/modules/network/junos/junos_package.py validate-modules:E324
lib/ansible/modules/network/junos/junos_package.py validate-modules:E326
lib/ansible/modules/network/junos/junos_package.py validate-modules:E337
lib/ansible/modules/network/junos/junos_package.py validate-modules:E338
lib/ansible/modules/network/junos/junos_ping.py validate-modules:E322
lib/ansible/modules/network/junos/junos_ping.py validate-modules:E324
lib/ansible/modules/network/junos/junos_ping.py validate-modules:E326
lib/ansible/modules/network/junos/junos_ping.py validate-modules:E337
lib/ansible/modules/network/junos/junos_ping.py validate-modules:E338
lib/ansible/modules/network/junos/junos_rpc.py validate-modules:E322
lib/ansible/modules/network/junos/junos_rpc.py validate-modules:E324
lib/ansible/modules/network/junos/junos_rpc.py validate-modules:E326
lib/ansible/modules/network/junos/junos_rpc.py validate-modules:E337
lib/ansible/modules/network/junos/junos_rpc.py validate-modules:E338
lib/ansible/modules/network/junos/junos_scp.py validate-modules:E322
lib/ansible/modules/network/junos/junos_scp.py validate-modules:E324
lib/ansible/modules/network/junos/junos_scp.py validate-modules:E326
lib/ansible/modules/network/junos/junos_scp.py validate-modules:E337
lib/ansible/modules/network/junos/junos_scp.py validate-modules:E338
lib/ansible/modules/network/junos/junos_static_route.py validate-modules:E322
lib/ansible/modules/network/junos/junos_static_route.py validate-modules:E324
lib/ansible/modules/network/junos/junos_static_route.py validate-modules:E326
lib/ansible/modules/network/junos/junos_static_route.py validate-modules:E337
lib/ansible/modules/network/junos/junos_static_route.py validate-modules:E338
lib/ansible/modules/network/junos/junos_static_route.py validate-modules:E340
lib/ansible/modules/network/junos/junos_system.py validate-modules:E322
lib/ansible/modules/network/junos/junos_system.py validate-modules:E324
lib/ansible/modules/network/junos/junos_system.py validate-modules:E326
lib/ansible/modules/network/junos/junos_system.py validate-modules:E337
lib/ansible/modules/network/junos/junos_system.py validate-modules:E338
lib/ansible/modules/network/junos/junos_user.py validate-modules:E322
lib/ansible/modules/network/junos/junos_user.py validate-modules:E324
lib/ansible/modules/network/junos/junos_user.py validate-modules:E326
lib/ansible/modules/network/junos/junos_user.py validate-modules:E337
lib/ansible/modules/network/junos/junos_user.py validate-modules:E338
lib/ansible/modules/network/junos/junos_user.py validate-modules:E340
lib/ansible/modules/network/junos/junos_vlan.py validate-modules:E322
lib/ansible/modules/network/junos/junos_vlan.py validate-modules:E324
lib/ansible/modules/network/junos/junos_vlan.py validate-modules:E326
lib/ansible/modules/network/junos/junos_vlan.py validate-modules:E337
lib/ansible/modules/network/junos/junos_vlan.py validate-modules:E338
lib/ansible/modules/network/junos/junos_vlan.py validate-modules:E340
lib/ansible/modules/network/junos/junos_vrf.py validate-modules:E322
lib/ansible/modules/network/junos/junos_vrf.py validate-modules:E324
lib/ansible/modules/network/junos/junos_vrf.py validate-modules:E326
lib/ansible/modules/network/junos/junos_vrf.py validate-modules:E337
lib/ansible/modules/network/junos/junos_vrf.py validate-modules:E338
lib/ansible/modules/network/junos/junos_vrf.py validate-modules:E340
lib/ansible/modules/network/meraki/meraki_admin.py validate-modules:E337
lib/ansible/modules/network/meraki/meraki_config_template.py validate-modules:E337
lib/ansible/modules/network/meraki/meraki_malware.py validate-modules:E337
lib/ansible/modules/network/meraki/meraki_mr_l3_firewall.py validate-modules:E325
lib/ansible/modules/network/meraki/meraki_mx_l3_firewall.py validate-modules:E337
lib/ansible/modules/network/meraki/meraki_mx_l7_firewall.py validate-modules:E323
lib/ansible/modules/network/meraki/meraki_mx_l7_firewall.py validate-modules:E337
lib/ansible/modules/network/meraki/meraki_nat.py validate-modules:E325
lib/ansible/modules/network/meraki/meraki_nat.py validate-modules:E337
lib/ansible/modules/network/meraki/meraki_network.py validate-modules:E337
lib/ansible/modules/network/meraki/meraki_organization.py validate-modules:E337
lib/ansible/modules/network/meraki/meraki_snmp.py validate-modules:E337
lib/ansible/modules/network/meraki/meraki_ssid.py validate-modules:E325
lib/ansible/modules/network/meraki/meraki_switchport.py validate-modules:E337
lib/ansible/modules/network/meraki/meraki_syslog.py validate-modules:E326
lib/ansible/modules/network/meraki/meraki_syslog.py validate-modules:E337
lib/ansible/modules/network/meraki/meraki_vlan.py validate-modules:E322
lib/ansible/modules/network/meraki/meraki_vlan.py validate-modules:E337
lib/ansible/modules/network/meraki/meraki_vlan.py validate-modules:E340
lib/ansible/modules/network/netact/netact_cm_command.py validate-modules:E326
lib/ansible/modules/network/netact/netact_cm_command.py validate-modules:E337
lib/ansible/modules/network/netconf/netconf_config.py validate-modules:E326
lib/ansible/modules/network/netconf/netconf_config.py validate-modules:E337
lib/ansible/modules/network/netconf/netconf_config.py validate-modules:E338
lib/ansible/modules/network/netconf/netconf_get.py validate-modules:E338
lib/ansible/modules/network/netconf/netconf_rpc.py validate-modules:E337
lib/ansible/modules/network/netconf/netconf_rpc.py validate-modules:E338
lib/ansible/modules/network/netscaler/netscaler_cs_action.py validate-modules:E323
lib/ansible/modules/network/netscaler/netscaler_cs_action.py validate-modules:E337
lib/ansible/modules/network/netscaler/netscaler_cs_policy.py validate-modules:E337
lib/ansible/modules/network/netscaler/netscaler_cs_vserver.py validate-modules:E322
lib/ansible/modules/network/netscaler/netscaler_cs_vserver.py validate-modules:E323
lib/ansible/modules/network/netscaler/netscaler_cs_vserver.py validate-modules:E326
lib/ansible/modules/network/netscaler/netscaler_cs_vserver.py validate-modules:E337
lib/ansible/modules/network/netscaler/netscaler_gslb_service.py validate-modules:E337
lib/ansible/modules/network/netscaler/netscaler_gslb_site.py validate-modules:E337
lib/ansible/modules/network/netscaler/netscaler_gslb_vserver.py validate-modules:E322
lib/ansible/modules/network/netscaler/netscaler_gslb_vserver.py validate-modules:E337
lib/ansible/modules/network/netscaler/netscaler_lb_monitor.py validate-modules:E323
lib/ansible/modules/network/netscaler/netscaler_lb_monitor.py validate-modules:E326
lib/ansible/modules/network/netscaler/netscaler_lb_monitor.py validate-modules:E337
lib/ansible/modules/network/netscaler/netscaler_lb_vserver.py validate-modules:E323
lib/ansible/modules/network/netscaler/netscaler_lb_vserver.py validate-modules:E337
lib/ansible/modules/network/netscaler/netscaler_nitro_request.py validate-modules:E337
lib/ansible/modules/network/netscaler/netscaler_nitro_request.py validate-modules:E338
lib/ansible/modules/network/netscaler/netscaler_save_config.py validate-modules:E337
lib/ansible/modules/network/netscaler/netscaler_save_config.py validate-modules:E338
lib/ansible/modules/network/netscaler/netscaler_server.py validate-modules:E324
lib/ansible/modules/network/netscaler/netscaler_server.py validate-modules:E337
lib/ansible/modules/network/netscaler/netscaler_service.py validate-modules:E323
lib/ansible/modules/network/netscaler/netscaler_service.py validate-modules:E337
lib/ansible/modules/network/netscaler/netscaler_servicegroup.py validate-modules:E337
lib/ansible/modules/network/netscaler/netscaler_ssl_certkey.py validate-modules:E337
lib/ansible/modules/network/netvisor/_pn_cluster.py future-import-boilerplate
lib/ansible/modules/network/netvisor/_pn_cluster.py metaclass-boilerplate
lib/ansible/modules/network/netvisor/_pn_cluster.py validate-modules:E337
lib/ansible/modules/network/netvisor/_pn_ospf.py future-import-boilerplate
lib/ansible/modules/network/netvisor/_pn_ospf.py metaclass-boilerplate
lib/ansible/modules/network/netvisor/_pn_ospf.py validate-modules:E337
lib/ansible/modules/network/netvisor/_pn_ospfarea.py future-import-boilerplate
lib/ansible/modules/network/netvisor/_pn_ospfarea.py metaclass-boilerplate
lib/ansible/modules/network/netvisor/_pn_ospfarea.py validate-modules:E337
lib/ansible/modules/network/netvisor/_pn_show.py future-import-boilerplate
lib/ansible/modules/network/netvisor/_pn_show.py metaclass-boilerplate
lib/ansible/modules/network/netvisor/_pn_show.py validate-modules:E337
lib/ansible/modules/network/netvisor/_pn_trunk.py future-import-boilerplate
lib/ansible/modules/network/netvisor/_pn_trunk.py metaclass-boilerplate
lib/ansible/modules/network/netvisor/_pn_trunk.py validate-modules:E337
lib/ansible/modules/network/netvisor/_pn_vlag.py future-import-boilerplate
lib/ansible/modules/network/netvisor/_pn_vlag.py metaclass-boilerplate
lib/ansible/modules/network/netvisor/_pn_vlag.py validate-modules:E337
lib/ansible/modules/network/netvisor/_pn_vlan.py future-import-boilerplate
lib/ansible/modules/network/netvisor/_pn_vlan.py metaclass-boilerplate
lib/ansible/modules/network/netvisor/_pn_vlan.py validate-modules:E337
lib/ansible/modules/network/netvisor/_pn_vrouter.py future-import-boilerplate
lib/ansible/modules/network/netvisor/_pn_vrouter.py metaclass-boilerplate
lib/ansible/modules/network/netvisor/_pn_vrouter.py validate-modules:E337
lib/ansible/modules/network/netvisor/_pn_vrouterbgp.py future-import-boilerplate
lib/ansible/modules/network/netvisor/_pn_vrouterbgp.py metaclass-boilerplate
lib/ansible/modules/network/netvisor/_pn_vrouterbgp.py validate-modules:E337
lib/ansible/modules/network/netvisor/_pn_vrouterif.py future-import-boilerplate
lib/ansible/modules/network/netvisor/_pn_vrouterif.py metaclass-boilerplate
lib/ansible/modules/network/netvisor/_pn_vrouterif.py validate-modules:E337
lib/ansible/modules/network/netvisor/_pn_vrouterlbif.py future-import-boilerplate
lib/ansible/modules/network/netvisor/_pn_vrouterlbif.py metaclass-boilerplate
lib/ansible/modules/network/netvisor/_pn_vrouterlbif.py validate-modules:E337
lib/ansible/modules/network/netvisor/pn_access_list.py validate-modules:E337
lib/ansible/modules/network/netvisor/pn_access_list_ip.py validate-modules:E337
lib/ansible/modules/network/netvisor/pn_cpu_class.py validate-modules:E337
lib/ansible/modules/network/netvisor/pn_dscp_map.py validate-modules:E337
lib/ansible/modules/network/netvisor/pn_fabric_local.py validate-modules:E337
lib/ansible/modules/network/netvisor/pn_igmp_snooping.py validate-modules:E337
lib/ansible/modules/network/netvisor/pn_port_config.py validate-modules:E337
lib/ansible/modules/network/netvisor/pn_snmp_community.py validate-modules:E337
lib/ansible/modules/network/netvisor/pn_switch_setup.py validate-modules:E337
lib/ansible/modules/network/netvisor/pn_vrouter_bgp.py validate-modules:E337
lib/ansible/modules/network/nos/nos_command.py validate-modules:E337
lib/ansible/modules/network/nos/nos_command.py validate-modules:E338
lib/ansible/modules/network/nos/nos_config.py validate-modules:E337
lib/ansible/modules/network/nos/nos_config.py validate-modules:E338
lib/ansible/modules/network/nos/nos_facts.py validate-modules:E337
lib/ansible/modules/network/nso/nso_action.py validate-modules:E337
lib/ansible/modules/network/nso/nso_action.py validate-modules:E338
lib/ansible/modules/network/nso/nso_config.py validate-modules:E337
lib/ansible/modules/network/nso/nso_query.py validate-modules:E337
lib/ansible/modules/network/nso/nso_show.py validate-modules:E337
lib/ansible/modules/network/nso/nso_verify.py validate-modules:E337
lib/ansible/modules/network/nuage/nuage_vspk.py validate-modules:E322
lib/ansible/modules/network/nuage/nuage_vspk.py validate-modules:E337
lib/ansible/modules/network/nuage/nuage_vspk.py validate-modules:E340
lib/ansible/modules/network/nxos/_nxos_ip_interface.py future-import-boilerplate
lib/ansible/modules/network/nxos/_nxos_ip_interface.py metaclass-boilerplate
lib/ansible/modules/network/nxos/_nxos_linkagg.py validate-modules:E322
lib/ansible/modules/network/nxos/_nxos_linkagg.py validate-modules:E324
lib/ansible/modules/network/nxos/_nxos_linkagg.py validate-modules:E326
lib/ansible/modules/network/nxos/_nxos_linkagg.py validate-modules:E327
lib/ansible/modules/network/nxos/_nxos_linkagg.py validate-modules:E337
lib/ansible/modules/network/nxos/_nxos_linkagg.py validate-modules:E338
lib/ansible/modules/network/nxos/_nxos_linkagg.py validate-modules:E340
lib/ansible/modules/network/nxos/_nxos_mtu.py future-import-boilerplate
lib/ansible/modules/network/nxos/_nxos_mtu.py metaclass-boilerplate
lib/ansible/modules/network/nxos/_nxos_portchannel.py future-import-boilerplate
lib/ansible/modules/network/nxos/_nxos_portchannel.py metaclass-boilerplate
lib/ansible/modules/network/nxos/_nxos_switchport.py future-import-boilerplate
lib/ansible/modules/network/nxos/_nxos_switchport.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_aaa_server.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_aaa_server.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_aaa_server.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_aaa_server.py validate-modules:E326
lib/ansible/modules/network/nxos/nxos_aaa_server.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_aaa_server.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_aaa_server.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_aaa_server_host.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_aaa_server_host.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_aaa_server_host.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_aaa_server_host.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_aaa_server_host.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_aaa_server_host.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_acl.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_acl.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_acl.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_acl.py validate-modules:E326
lib/ansible/modules/network/nxos/nxos_acl.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_acl.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_acl.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_acl_interface.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_acl_interface.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_acl_interface.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_acl_interface.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_acl_interface.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_acl_interface.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_banner.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_banner.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_banner.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_banner.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_banner.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_bfd_global.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_bfd_global.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_bfd_global.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_bfd_global.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_bgp.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_bgp.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_bgp.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_bgp.py validate-modules:E326
lib/ansible/modules/network/nxos/nxos_bgp.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_bgp.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_bgp.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_bgp_af.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_bgp_af.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_bgp_af.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_bgp_af.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_bgp_af.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_bgp_af.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_bgp_neighbor.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_bgp_neighbor.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_bgp_neighbor.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_bgp_neighbor.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_bgp_neighbor.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_bgp_neighbor.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_bgp_neighbor_af.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_bgp_neighbor_af.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_bgp_neighbor_af.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_bgp_neighbor_af.py validate-modules:E326
lib/ansible/modules/network/nxos/nxos_bgp_neighbor_af.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_bgp_neighbor_af.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_bgp_neighbor_af.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_command.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_command.py validate-modules:E326
lib/ansible/modules/network/nxos/nxos_command.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_command.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_command.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_config.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_config.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_config.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_config.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_config.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_config.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_evpn_global.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_evpn_global.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_evpn_global.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_evpn_global.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_evpn_vni.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_evpn_vni.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_evpn_vni.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_evpn_vni.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_evpn_vni.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_evpn_vni.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_facts.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_facts.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_facts.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_facts.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_facts.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_feature.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_feature.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_feature.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_feature.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_feature.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_feature.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_file_copy.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_file_copy.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_file_copy.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_file_copy.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_file_copy.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_file_copy.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_gir.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_gir.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_gir.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_gir.py validate-modules:E326
lib/ansible/modules/network/nxos/nxos_gir.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_gir.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_gir.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_gir_profile_management.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_gir_profile_management.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_gir_profile_management.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_gir_profile_management.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_gir_profile_management.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_gir_profile_management.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_hsrp.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_hsrp.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_hsrp.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_hsrp.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_hsrp.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_hsrp.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_igmp.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_igmp.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_igmp.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_igmp.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_igmp.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_igmp_interface.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_igmp_interface.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_igmp_interface.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_igmp_interface.py validate-modules:E326
lib/ansible/modules/network/nxos/nxos_igmp_interface.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_igmp_interface.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_igmp_interface.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_igmp_snooping.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_igmp_snooping.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_igmp_snooping.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_igmp_snooping.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_igmp_snooping.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_igmp_snooping.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_install_os.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_install_os.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_install_os.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_install_os.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_install_os.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_interface.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_interface.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_interface.py validate-modules:E322
lib/ansible/modules/network/nxos/nxos_interface.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_interface.py validate-modules:E326
lib/ansible/modules/network/nxos/nxos_interface.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_interface.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_interface.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_interface.py validate-modules:E340
lib/ansible/modules/network/nxos/nxos_interface_ospf.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_interface_ospf.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_interface_ospf.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_interface_ospf.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_interface_ospf.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_interface_ospf.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_l2_interface.py validate-modules:E322
lib/ansible/modules/network/nxos/nxos_l2_interface.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_l2_interface.py validate-modules:E326
lib/ansible/modules/network/nxos/nxos_l2_interface.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_l2_interface.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_l2_interface.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_l2_interface.py validate-modules:E340
lib/ansible/modules/network/nxos/nxos_l3_interface.py validate-modules:E322
lib/ansible/modules/network/nxos/nxos_l3_interface.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_l3_interface.py validate-modules:E326
lib/ansible/modules/network/nxos/nxos_l3_interface.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_l3_interface.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_l3_interface.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_l3_interface.py validate-modules:E340
lib/ansible/modules/network/nxos/nxos_lag_interfaces.py validate-modules:E326
lib/ansible/modules/network/nxos/nxos_lldp.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_lldp.py validate-modules:E326
lib/ansible/modules/network/nxos/nxos_lldp.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_lldp.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_logging.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_logging.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_logging.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_logging.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_logging.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_logging.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_ntp.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_ntp.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_ntp.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_ntp.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_ntp_auth.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_ntp_auth.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_ntp_auth.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_ntp_auth.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_ntp_auth.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_ntp_auth.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_ntp_options.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_ntp_options.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_ntp_options.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_ntp_options.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_ntp_options.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_ntp_options.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_nxapi.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_nxapi.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_nxapi.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_nxapi.py validate-modules:E326
lib/ansible/modules/network/nxos/nxos_nxapi.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_nxapi.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_nxapi.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_ospf.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_ospf.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_ospf.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_ospf.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_ospf.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_ospf.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_ospf_vrf.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_ospf_vrf.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_ospf_vrf.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_ospf_vrf.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_ospf_vrf.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_ospf_vrf.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_overlay_global.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_overlay_global.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_overlay_global.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_overlay_global.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_overlay_global.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_pim.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_pim.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_pim.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_pim.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_pim.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_pim_interface.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_pim_interface.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_pim_interface.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_pim_interface.py validate-modules:E326
lib/ansible/modules/network/nxos/nxos_pim_interface.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_pim_rp_address.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_pim_rp_address.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_pim_rp_address.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_pim_rp_address.py validate-modules:E326
lib/ansible/modules/network/nxos/nxos_pim_rp_address.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_pim_rp_address.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_pim_rp_address.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_ping.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_ping.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_ping.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_ping.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_ping.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_ping.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_reboot.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_reboot.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_reboot.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_reboot.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_rollback.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_rollback.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_rollback.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_rollback.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_rollback.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_rpm.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_rpm.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_rpm.py validate-modules:E322
lib/ansible/modules/network/nxos/nxos_rpm.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_rpm.py validate-modules:E326
lib/ansible/modules/network/nxos/nxos_rpm.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_rpm.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_rpm.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_rpm.py validate-modules:E340
lib/ansible/modules/network/nxos/nxos_smu.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_smu.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_smu.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_smu.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_smu.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_snapshot.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_snapshot.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_snapshot.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_snapshot.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_snapshot.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_snapshot.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_snmp_community.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_community.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_community.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_snmp_community.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_snmp_community.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_snmp_community.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_snmp_contact.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_contact.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_contact.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_snmp_contact.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_snmp_contact.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_snmp_contact.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_snmp_host.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_host.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_host.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_snmp_host.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_snmp_host.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_snmp_host.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_snmp_location.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_location.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_location.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_snmp_location.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_snmp_location.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_snmp_location.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_snmp_traps.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_traps.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_traps.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_snmp_traps.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_snmp_traps.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_snmp_user.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_user.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_user.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_snmp_user.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_snmp_user.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_snmp_user.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_static_route.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_static_route.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_static_route.py validate-modules:E322
lib/ansible/modules/network/nxos/nxos_static_route.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_static_route.py validate-modules:E326
lib/ansible/modules/network/nxos/nxos_static_route.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_static_route.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_static_route.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_static_route.py validate-modules:E340
lib/ansible/modules/network/nxos/nxos_system.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_system.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_system.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_system.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_system.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_system.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_udld.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_udld.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_udld.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_udld.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_udld.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_udld.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_udld_interface.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_udld_interface.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_udld_interface.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_udld_interface.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_udld_interface.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_udld_interface.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_user.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_user.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_user.py validate-modules:E322
lib/ansible/modules/network/nxos/nxos_user.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_user.py validate-modules:E326
lib/ansible/modules/network/nxos/nxos_user.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_user.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_user.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_user.py validate-modules:E340
lib/ansible/modules/network/nxos/nxos_vlan.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vlan.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vlan.py validate-modules:E322
lib/ansible/modules/network/nxos/nxos_vlan.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_vlan.py validate-modules:E326
lib/ansible/modules/network/nxos/nxos_vlan.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_vlan.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_vlan.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_vlan.py validate-modules:E340
lib/ansible/modules/network/nxos/nxos_vpc.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vpc.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vpc.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_vpc.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_vpc.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_vpc.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_vpc_interface.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vpc_interface.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vpc_interface.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_vpc_interface.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_vpc_interface.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_vpc_interface.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_vrf.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vrf.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vrf.py validate-modules:E322
lib/ansible/modules/network/nxos/nxos_vrf.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_vrf.py validate-modules:E326
lib/ansible/modules/network/nxos/nxos_vrf.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_vrf.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_vrf.py validate-modules:E340
lib/ansible/modules/network/nxos/nxos_vrf_af.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vrf_af.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vrf_af.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_vrf_af.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_vrf_af.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_vrf_interface.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vrf_interface.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vrf_interface.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_vrf_interface.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_vrf_interface.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_vrf_interface.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_vrrp.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vrrp.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vrrp.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_vrrp.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_vrrp.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_vrrp.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_vtp_domain.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vtp_domain.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vtp_domain.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_vtp_domain.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_vtp_domain.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_vtp_password.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vtp_password.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vtp_password.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_vtp_password.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_vtp_password.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_vtp_password.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_vtp_version.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vtp_version.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vtp_version.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_vtp_version.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_vtp_version.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_vxlan_vtep.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vxlan_vtep.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vxlan_vtep.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_vxlan_vtep.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_vxlan_vtep.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_vxlan_vtep.py validate-modules:E338
lib/ansible/modules/network/nxos/nxos_vxlan_vtep_vni.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vxlan_vtep_vni.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vxlan_vtep_vni.py validate-modules:E324
lib/ansible/modules/network/nxos/nxos_vxlan_vtep_vni.py validate-modules:E327
lib/ansible/modules/network/nxos/nxos_vxlan_vtep_vni.py validate-modules:E337
lib/ansible/modules/network/nxos/nxos_vxlan_vtep_vni.py validate-modules:E338
lib/ansible/modules/network/onyx/onyx_bgp.py validate-modules:E337
lib/ansible/modules/network/onyx/onyx_bgp.py validate-modules:E338
lib/ansible/modules/network/onyx/onyx_buffer_pool.py validate-modules:E337
lib/ansible/modules/network/onyx/onyx_buffer_pool.py validate-modules:E338
lib/ansible/modules/network/onyx/onyx_command.py validate-modules:E323
lib/ansible/modules/network/onyx/onyx_command.py validate-modules:E337
lib/ansible/modules/network/onyx/onyx_command.py validate-modules:E338
lib/ansible/modules/network/onyx/onyx_config.py validate-modules:E323
lib/ansible/modules/network/onyx/onyx_config.py validate-modules:E337
lib/ansible/modules/network/onyx/onyx_config.py validate-modules:E338
lib/ansible/modules/network/onyx/onyx_facts.py validate-modules:E337
lib/ansible/modules/network/onyx/onyx_igmp.py validate-modules:E337
lib/ansible/modules/network/onyx/onyx_igmp.py validate-modules:E338
lib/ansible/modules/network/onyx/onyx_igmp_interface.py validate-modules:E338
lib/ansible/modules/network/onyx/onyx_igmp_vlan.py validate-modules:E337
lib/ansible/modules/network/onyx/onyx_igmp_vlan.py validate-modules:E338
lib/ansible/modules/network/onyx/onyx_interface.py validate-modules:E322
lib/ansible/modules/network/onyx/onyx_interface.py validate-modules:E323
lib/ansible/modules/network/onyx/onyx_interface.py validate-modules:E326
lib/ansible/modules/network/onyx/onyx_interface.py validate-modules:E337
lib/ansible/modules/network/onyx/onyx_interface.py validate-modules:E338
lib/ansible/modules/network/onyx/onyx_interface.py validate-modules:E340
lib/ansible/modules/network/onyx/onyx_l2_interface.py validate-modules:E322
lib/ansible/modules/network/onyx/onyx_l2_interface.py validate-modules:E326
lib/ansible/modules/network/onyx/onyx_l2_interface.py validate-modules:E337
lib/ansible/modules/network/onyx/onyx_l2_interface.py validate-modules:E338
lib/ansible/modules/network/onyx/onyx_l2_interface.py validate-modules:E340
lib/ansible/modules/network/onyx/onyx_l3_interface.py validate-modules:E322
lib/ansible/modules/network/onyx/onyx_l3_interface.py validate-modules:E326
lib/ansible/modules/network/onyx/onyx_l3_interface.py validate-modules:E337
lib/ansible/modules/network/onyx/onyx_l3_interface.py validate-modules:E338
lib/ansible/modules/network/onyx/onyx_l3_interface.py validate-modules:E340
lib/ansible/modules/network/onyx/onyx_linkagg.py validate-modules:E322
lib/ansible/modules/network/onyx/onyx_linkagg.py validate-modules:E324
lib/ansible/modules/network/onyx/onyx_linkagg.py validate-modules:E326
lib/ansible/modules/network/onyx/onyx_linkagg.py validate-modules:E337
lib/ansible/modules/network/onyx/onyx_linkagg.py validate-modules:E338
lib/ansible/modules/network/onyx/onyx_linkagg.py validate-modules:E340
lib/ansible/modules/network/onyx/onyx_lldp.py validate-modules:E338
lib/ansible/modules/network/onyx/onyx_lldp_interface.py validate-modules:E322
lib/ansible/modules/network/onyx/onyx_lldp_interface.py validate-modules:E326
lib/ansible/modules/network/onyx/onyx_lldp_interface.py validate-modules:E337
lib/ansible/modules/network/onyx/onyx_lldp_interface.py validate-modules:E338
lib/ansible/modules/network/onyx/onyx_lldp_interface.py validate-modules:E340
lib/ansible/modules/network/onyx/onyx_magp.py validate-modules:E337
lib/ansible/modules/network/onyx/onyx_magp.py validate-modules:E338
lib/ansible/modules/network/onyx/onyx_mlag_ipl.py validate-modules:E338
lib/ansible/modules/network/onyx/onyx_mlag_vip.py validate-modules:E324
lib/ansible/modules/network/onyx/onyx_mlag_vip.py validate-modules:E337
lib/ansible/modules/network/onyx/onyx_mlag_vip.py validate-modules:E338
lib/ansible/modules/network/onyx/onyx_ospf.py validate-modules:E337
lib/ansible/modules/network/onyx/onyx_ospf.py validate-modules:E338
lib/ansible/modules/network/onyx/onyx_pfc_interface.py validate-modules:E322
lib/ansible/modules/network/onyx/onyx_pfc_interface.py validate-modules:E326
lib/ansible/modules/network/onyx/onyx_pfc_interface.py validate-modules:E337
lib/ansible/modules/network/onyx/onyx_pfc_interface.py validate-modules:E338
lib/ansible/modules/network/onyx/onyx_pfc_interface.py validate-modules:E340
lib/ansible/modules/network/onyx/onyx_protocol.py validate-modules:E338
lib/ansible/modules/network/onyx/onyx_ptp_global.py validate-modules:E337
lib/ansible/modules/network/onyx/onyx_ptp_global.py validate-modules:E338
lib/ansible/modules/network/onyx/onyx_ptp_interface.py validate-modules:E337
lib/ansible/modules/network/onyx/onyx_ptp_interface.py validate-modules:E338
lib/ansible/modules/network/onyx/onyx_qos.py validate-modules:E337
lib/ansible/modules/network/onyx/onyx_qos.py validate-modules:E338
lib/ansible/modules/network/onyx/onyx_traffic_class.py validate-modules:E337
lib/ansible/modules/network/onyx/onyx_traffic_class.py validate-modules:E338
lib/ansible/modules/network/onyx/onyx_vlan.py validate-modules:E322
lib/ansible/modules/network/onyx/onyx_vlan.py validate-modules:E326
lib/ansible/modules/network/onyx/onyx_vlan.py validate-modules:E337
lib/ansible/modules/network/onyx/onyx_vlan.py validate-modules:E338
lib/ansible/modules/network/onyx/onyx_vlan.py validate-modules:E340
lib/ansible/modules/network/onyx/onyx_vxlan.py validate-modules:E322
lib/ansible/modules/network/onyx/onyx_vxlan.py validate-modules:E337
lib/ansible/modules/network/onyx/onyx_vxlan.py validate-modules:E340
lib/ansible/modules/network/opx/opx_cps.py validate-modules:E337
lib/ansible/modules/network/ordnance/ordnance_config.py validate-modules:E322
lib/ansible/modules/network/ordnance/ordnance_config.py validate-modules:E324
lib/ansible/modules/network/ordnance/ordnance_config.py validate-modules:E337
lib/ansible/modules/network/ordnance/ordnance_config.py validate-modules:E338
lib/ansible/modules/network/ordnance/ordnance_facts.py validate-modules:E322
lib/ansible/modules/network/ordnance/ordnance_facts.py validate-modules:E324
lib/ansible/modules/network/ordnance/ordnance_facts.py validate-modules:E337
lib/ansible/modules/network/ordnance/ordnance_facts.py validate-modules:E338
lib/ansible/modules/network/ovs/openvswitch_bridge.py validate-modules:E326
lib/ansible/modules/network/ovs/openvswitch_bridge.py validate-modules:E337
lib/ansible/modules/network/ovs/openvswitch_bridge.py validate-modules:E338
lib/ansible/modules/network/ovs/openvswitch_db.py validate-modules:E337
lib/ansible/modules/network/ovs/openvswitch_db.py validate-modules:E338
lib/ansible/modules/network/ovs/openvswitch_port.py validate-modules:E337
lib/ansible/modules/network/ovs/openvswitch_port.py validate-modules:E338
lib/ansible/modules/network/panos/_panos_admin.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_admin.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_admin.py validate-modules:E338
lib/ansible/modules/network/panos/_panos_admpwd.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_admpwd.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_admpwd.py validate-modules:E338
lib/ansible/modules/network/panos/_panos_cert_gen_ssh.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_cert_gen_ssh.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_cert_gen_ssh.py validate-modules:E338
lib/ansible/modules/network/panos/_panos_check.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_check.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_check.py validate-modules:E337
lib/ansible/modules/network/panos/_panos_commit.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_commit.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_commit.py validate-modules:E337
lib/ansible/modules/network/panos/_panos_commit.py validate-modules:E338
lib/ansible/modules/network/panos/_panos_dag.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_dag.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_dag.py validate-modules:E338
lib/ansible/modules/network/panos/_panos_dag_tags.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_dag_tags.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_dag_tags.py validate-modules:E337
lib/ansible/modules/network/panos/_panos_dag_tags.py validate-modules:E338
lib/ansible/modules/network/panos/_panos_import.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_import.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_import.py validate-modules:E338
lib/ansible/modules/network/panos/_panos_interface.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_interface.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_interface.py validate-modules:E338
lib/ansible/modules/network/panos/_panos_lic.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_lic.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_lic.py validate-modules:E338
lib/ansible/modules/network/panos/_panos_loadcfg.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_loadcfg.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_loadcfg.py validate-modules:E338
lib/ansible/modules/network/panos/_panos_match_rule.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_match_rule.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_match_rule.py validate-modules:E337
lib/ansible/modules/network/panos/_panos_match_rule.py validate-modules:E338
lib/ansible/modules/network/panos/_panos_mgtconfig.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_mgtconfig.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_mgtconfig.py validate-modules:E338
lib/ansible/modules/network/panos/_panos_nat_policy.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_nat_policy.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_nat_rule.py validate-modules:E337
lib/ansible/modules/network/panos/_panos_nat_rule.py validate-modules:E338
lib/ansible/modules/network/panos/_panos_object.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_object.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_object.py validate-modules:E337
lib/ansible/modules/network/panos/_panos_object.py validate-modules:E338
lib/ansible/modules/network/panos/_panos_op.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_op.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_op.py validate-modules:E338
lib/ansible/modules/network/panos/_panos_pg.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_pg.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_pg.py validate-modules:E338
lib/ansible/modules/network/panos/_panos_query_rules.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_query_rules.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_query_rules.py validate-modules:E338
lib/ansible/modules/network/panos/_panos_restart.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_restart.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_sag.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_sag.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_sag.py validate-modules:E337
lib/ansible/modules/network/panos/_panos_sag.py validate-modules:E338
lib/ansible/modules/network/panos/_panos_security_policy.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_security_policy.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_security_rule.py validate-modules:E337
lib/ansible/modules/network/panos/_panos_security_rule.py validate-modules:E338
lib/ansible/modules/network/panos/_panos_set.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_set.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_set.py validate-modules:E338
lib/ansible/modules/network/radware/vdirect_commit.py validate-modules:E337
lib/ansible/modules/network/radware/vdirect_commit.py validate-modules:E338
lib/ansible/modules/network/radware/vdirect_file.py validate-modules:E337
lib/ansible/modules/network/radware/vdirect_file.py validate-modules:E338
lib/ansible/modules/network/radware/vdirect_runnable.py validate-modules:E337
lib/ansible/modules/network/radware/vdirect_runnable.py validate-modules:E338
lib/ansible/modules/network/restconf/restconf_config.py validate-modules:E338
lib/ansible/modules/network/restconf/restconf_get.py validate-modules:E338
lib/ansible/modules/network/routeros/routeros_command.py validate-modules:E337
lib/ansible/modules/network/routeros/routeros_command.py validate-modules:E338
lib/ansible/modules/network/routeros/routeros_facts.py validate-modules:E337
lib/ansible/modules/network/skydive/skydive_capture.py validate-modules:E322
lib/ansible/modules/network/skydive/skydive_capture.py validate-modules:E323
lib/ansible/modules/network/skydive/skydive_capture.py validate-modules:E337
lib/ansible/modules/network/skydive/skydive_capture.py validate-modules:E338
lib/ansible/modules/network/skydive/skydive_edge.py validate-modules:E322
lib/ansible/modules/network/skydive/skydive_edge.py validate-modules:E323
lib/ansible/modules/network/skydive/skydive_edge.py validate-modules:E337
lib/ansible/modules/network/skydive/skydive_edge.py validate-modules:E338
lib/ansible/modules/network/skydive/skydive_node.py validate-modules:E322
lib/ansible/modules/network/skydive/skydive_node.py validate-modules:E323
lib/ansible/modules/network/skydive/skydive_node.py validate-modules:E337
lib/ansible/modules/network/skydive/skydive_node.py validate-modules:E338
lib/ansible/modules/network/slxos/slxos_command.py validate-modules:E337
lib/ansible/modules/network/slxos/slxos_command.py validate-modules:E338
lib/ansible/modules/network/slxos/slxos_config.py validate-modules:E337
lib/ansible/modules/network/slxos/slxos_config.py validate-modules:E338
lib/ansible/modules/network/slxos/slxos_facts.py validate-modules:E337
lib/ansible/modules/network/slxos/slxos_interface.py validate-modules:E322
lib/ansible/modules/network/slxos/slxos_interface.py validate-modules:E326
lib/ansible/modules/network/slxos/slxos_interface.py validate-modules:E337
lib/ansible/modules/network/slxos/slxos_interface.py validate-modules:E338
lib/ansible/modules/network/slxos/slxos_interface.py validate-modules:E340
lib/ansible/modules/network/slxos/slxos_l2_interface.py validate-modules:E322
lib/ansible/modules/network/slxos/slxos_l2_interface.py validate-modules:E326
lib/ansible/modules/network/slxos/slxos_l2_interface.py validate-modules:E337
lib/ansible/modules/network/slxos/slxos_l2_interface.py validate-modules:E338
lib/ansible/modules/network/slxos/slxos_l2_interface.py validate-modules:E340
lib/ansible/modules/network/slxos/slxos_l3_interface.py validate-modules:E322
lib/ansible/modules/network/slxos/slxos_l3_interface.py validate-modules:E326
lib/ansible/modules/network/slxos/slxos_l3_interface.py validate-modules:E337
lib/ansible/modules/network/slxos/slxos_l3_interface.py validate-modules:E338
lib/ansible/modules/network/slxos/slxos_l3_interface.py validate-modules:E340
lib/ansible/modules/network/slxos/slxos_linkagg.py validate-modules:E322
lib/ansible/modules/network/slxos/slxos_linkagg.py validate-modules:E326
lib/ansible/modules/network/slxos/slxos_linkagg.py validate-modules:E337
lib/ansible/modules/network/slxos/slxos_linkagg.py validate-modules:E338
lib/ansible/modules/network/slxos/slxos_linkagg.py validate-modules:E340
lib/ansible/modules/network/slxos/slxos_lldp.py validate-modules:E338
lib/ansible/modules/network/slxos/slxos_vlan.py validate-modules:E322
lib/ansible/modules/network/slxos/slxos_vlan.py validate-modules:E326
lib/ansible/modules/network/slxos/slxos_vlan.py validate-modules:E337
lib/ansible/modules/network/slxos/slxos_vlan.py validate-modules:E338
lib/ansible/modules/network/slxos/slxos_vlan.py validate-modules:E340
lib/ansible/modules/network/sros/sros_command.py validate-modules:E324
lib/ansible/modules/network/sros/sros_command.py validate-modules:E337
lib/ansible/modules/network/sros/sros_command.py validate-modules:E338
lib/ansible/modules/network/sros/sros_config.py validate-modules:E323
lib/ansible/modules/network/sros/sros_config.py validate-modules:E324
lib/ansible/modules/network/sros/sros_config.py validate-modules:E337
lib/ansible/modules/network/sros/sros_config.py validate-modules:E338
lib/ansible/modules/network/sros/sros_rollback.py validate-modules:E324
lib/ansible/modules/network/sros/sros_rollback.py validate-modules:E337
lib/ansible/modules/network/sros/sros_rollback.py validate-modules:E338
lib/ansible/modules/network/voss/voss_command.py validate-modules:E337
lib/ansible/modules/network/voss/voss_command.py validate-modules:E338
lib/ansible/modules/network/voss/voss_config.py validate-modules:E337
lib/ansible/modules/network/voss/voss_config.py validate-modules:E338
lib/ansible/modules/network/voss/voss_facts.py validate-modules:E337
lib/ansible/modules/network/vyos/vyos_banner.py future-import-boilerplate
lib/ansible/modules/network/vyos/vyos_banner.py metaclass-boilerplate
lib/ansible/modules/network/vyos/vyos_banner.py validate-modules:E324
lib/ansible/modules/network/vyos/vyos_banner.py validate-modules:E338
lib/ansible/modules/network/vyos/vyos_command.py future-import-boilerplate
lib/ansible/modules/network/vyos/vyos_command.py metaclass-boilerplate
lib/ansible/modules/network/vyos/vyos_command.py pylint:blacklisted-name
lib/ansible/modules/network/vyos/vyos_command.py validate-modules:E324
lib/ansible/modules/network/vyos/vyos_command.py validate-modules:E337
lib/ansible/modules/network/vyos/vyos_command.py validate-modules:E338
lib/ansible/modules/network/vyos/vyos_config.py future-import-boilerplate
lib/ansible/modules/network/vyos/vyos_config.py metaclass-boilerplate
lib/ansible/modules/network/vyos/vyos_config.py validate-modules:E324
lib/ansible/modules/network/vyos/vyos_config.py validate-modules:E337
lib/ansible/modules/network/vyos/vyos_config.py validate-modules:E338
lib/ansible/modules/network/vyos/vyos_facts.py future-import-boilerplate
lib/ansible/modules/network/vyos/vyos_facts.py metaclass-boilerplate
lib/ansible/modules/network/vyos/vyos_facts.py validate-modules:E324
lib/ansible/modules/network/vyos/vyos_facts.py validate-modules:E337
lib/ansible/modules/network/vyos/_vyos_interface.py future-import-boilerplate
lib/ansible/modules/network/vyos/_vyos_interface.py metaclass-boilerplate
lib/ansible/modules/network/vyos/_vyos_interface.py validate-modules:E322
lib/ansible/modules/network/vyos/_vyos_interface.py validate-modules:E324
lib/ansible/modules/network/vyos/_vyos_interface.py validate-modules:E326
lib/ansible/modules/network/vyos/_vyos_interface.py validate-modules:E337
lib/ansible/modules/network/vyos/_vyos_interface.py validate-modules:E338
lib/ansible/modules/network/vyos/_vyos_interface.py validate-modules:E340
lib/ansible/modules/network/vyos/vyos_l3_interface.py future-import-boilerplate
lib/ansible/modules/network/vyos/vyos_l3_interface.py metaclass-boilerplate
lib/ansible/modules/network/vyos/vyos_l3_interface.py validate-modules:E322
lib/ansible/modules/network/vyos/vyos_l3_interface.py validate-modules:E324
lib/ansible/modules/network/vyos/vyos_l3_interface.py validate-modules:E326
lib/ansible/modules/network/vyos/vyos_l3_interface.py validate-modules:E337
lib/ansible/modules/network/vyos/vyos_l3_interface.py validate-modules:E338
lib/ansible/modules/network/vyos/vyos_l3_interface.py validate-modules:E340
lib/ansible/modules/network/vyos/vyos_linkagg.py future-import-boilerplate
lib/ansible/modules/network/vyos/vyos_linkagg.py metaclass-boilerplate
lib/ansible/modules/network/vyos/vyos_linkagg.py validate-modules:E322
lib/ansible/modules/network/vyos/vyos_linkagg.py validate-modules:E324
lib/ansible/modules/network/vyos/vyos_linkagg.py validate-modules:E326
lib/ansible/modules/network/vyos/vyos_linkagg.py validate-modules:E337
lib/ansible/modules/network/vyos/vyos_linkagg.py validate-modules:E338
lib/ansible/modules/network/vyos/vyos_linkagg.py validate-modules:E340
lib/ansible/modules/network/vyos/vyos_lldp.py future-import-boilerplate
lib/ansible/modules/network/vyos/vyos_lldp.py metaclass-boilerplate
lib/ansible/modules/network/vyos/vyos_lldp.py validate-modules:E322
lib/ansible/modules/network/vyos/vyos_lldp.py validate-modules:E324
lib/ansible/modules/network/vyos/vyos_lldp.py validate-modules:E326
lib/ansible/modules/network/vyos/vyos_lldp.py validate-modules:E337
lib/ansible/modules/network/vyos/vyos_lldp.py validate-modules:E338
lib/ansible/modules/network/vyos/vyos_lldp_interface.py future-import-boilerplate
lib/ansible/modules/network/vyos/vyos_lldp_interface.py metaclass-boilerplate
lib/ansible/modules/network/vyos/vyos_lldp_interface.py validate-modules:E322
lib/ansible/modules/network/vyos/vyos_lldp_interface.py validate-modules:E324
lib/ansible/modules/network/vyos/vyos_lldp_interface.py validate-modules:E326
lib/ansible/modules/network/vyos/vyos_lldp_interface.py validate-modules:E337
lib/ansible/modules/network/vyos/vyos_lldp_interface.py validate-modules:E338
lib/ansible/modules/network/vyos/vyos_lldp_interface.py validate-modules:E340
lib/ansible/modules/network/vyos/vyos_logging.py future-import-boilerplate
lib/ansible/modules/network/vyos/vyos_logging.py metaclass-boilerplate
lib/ansible/modules/network/vyos/vyos_logging.py validate-modules:E322
lib/ansible/modules/network/vyos/vyos_logging.py validate-modules:E324
lib/ansible/modules/network/vyos/vyos_logging.py validate-modules:E326
lib/ansible/modules/network/vyos/vyos_logging.py validate-modules:E337
lib/ansible/modules/network/vyos/vyos_logging.py validate-modules:E338
lib/ansible/modules/network/vyos/vyos_logging.py validate-modules:E340
lib/ansible/modules/network/vyos/vyos_ping.py validate-modules:E324
lib/ansible/modules/network/vyos/vyos_ping.py validate-modules:E337
lib/ansible/modules/network/vyos/vyos_static_route.py future-import-boilerplate
lib/ansible/modules/network/vyos/vyos_static_route.py metaclass-boilerplate
lib/ansible/modules/network/vyos/vyos_static_route.py validate-modules:E322
lib/ansible/modules/network/vyos/vyos_static_route.py validate-modules:E324
lib/ansible/modules/network/vyos/vyos_static_route.py validate-modules:E326
lib/ansible/modules/network/vyos/vyos_static_route.py validate-modules:E337
lib/ansible/modules/network/vyos/vyos_static_route.py validate-modules:E338
lib/ansible/modules/network/vyos/vyos_static_route.py validate-modules:E340
lib/ansible/modules/network/vyos/vyos_system.py future-import-boilerplate
lib/ansible/modules/network/vyos/vyos_system.py metaclass-boilerplate
lib/ansible/modules/network/vyos/vyos_system.py validate-modules:E324
lib/ansible/modules/network/vyos/vyos_system.py validate-modules:E337
lib/ansible/modules/network/vyos/vyos_user.py future-import-boilerplate
lib/ansible/modules/network/vyos/vyos_user.py metaclass-boilerplate
lib/ansible/modules/network/vyos/vyos_user.py validate-modules:E322
lib/ansible/modules/network/vyos/vyos_user.py validate-modules:E324
lib/ansible/modules/network/vyos/vyos_user.py validate-modules:E326
lib/ansible/modules/network/vyos/vyos_user.py validate-modules:E337
lib/ansible/modules/network/vyos/vyos_user.py validate-modules:E338
lib/ansible/modules/network/vyos/vyos_user.py validate-modules:E340
lib/ansible/modules/network/vyos/vyos_vlan.py validate-modules:E322
lib/ansible/modules/network/vyos/vyos_vlan.py validate-modules:E324
lib/ansible/modules/network/vyos/vyos_vlan.py validate-modules:E326
lib/ansible/modules/network/vyos/vyos_vlan.py validate-modules:E337
lib/ansible/modules/network/vyos/vyos_vlan.py validate-modules:E338
lib/ansible/modules/network/vyos/vyos_vlan.py validate-modules:E340
lib/ansible/modules/notification/bearychat.py validate-modules:E337
lib/ansible/modules/notification/campfire.py validate-modules:E338
lib/ansible/modules/notification/catapult.py validate-modules:E337
lib/ansible/modules/notification/catapult.py validate-modules:E338
lib/ansible/modules/notification/cisco_spark.py validate-modules:E322
lib/ansible/modules/notification/cisco_spark.py validate-modules:E324
lib/ansible/modules/notification/cisco_spark.py validate-modules:E338
lib/ansible/modules/notification/flowdock.py validate-modules:E338
lib/ansible/modules/notification/grove.py validate-modules:E337
lib/ansible/modules/notification/hall.py validate-modules:E324
lib/ansible/modules/notification/hall.py validate-modules:E337
lib/ansible/modules/notification/hipchat.py validate-modules:E322
lib/ansible/modules/notification/hipchat.py validate-modules:E324
lib/ansible/modules/notification/hipchat.py validate-modules:E338
lib/ansible/modules/notification/irc.py validate-modules:E322
lib/ansible/modules/notification/irc.py validate-modules:E324
lib/ansible/modules/notification/irc.py validate-modules:E326
lib/ansible/modules/notification/irc.py validate-modules:E337
lib/ansible/modules/notification/irc.py validate-modules:E338
lib/ansible/modules/notification/jabber.py validate-modules:E337
lib/ansible/modules/notification/jabber.py validate-modules:E338
lib/ansible/modules/notification/logentries_msg.py validate-modules:E337
lib/ansible/modules/notification/mail.py validate-modules:E322
lib/ansible/modules/notification/mail.py validate-modules:E324
lib/ansible/modules/notification/mail.py validate-modules:E337
lib/ansible/modules/notification/matrix.py validate-modules:E337
lib/ansible/modules/notification/mattermost.py validate-modules:E337
lib/ansible/modules/notification/mqtt.py validate-modules:E324
lib/ansible/modules/notification/mqtt.py validate-modules:E337
lib/ansible/modules/notification/mqtt.py validate-modules:E338
lib/ansible/modules/notification/nexmo.py validate-modules:E337
lib/ansible/modules/notification/nexmo.py validate-modules:E338
lib/ansible/modules/notification/office_365_connector_card.py validate-modules:E337
lib/ansible/modules/notification/office_365_connector_card.py validate-modules:E338
lib/ansible/modules/notification/pushbullet.py validate-modules:E322
lib/ansible/modules/notification/pushbullet.py validate-modules:E337
lib/ansible/modules/notification/pushover.py validate-modules:E324
lib/ansible/modules/notification/pushover.py validate-modules:E326
lib/ansible/modules/notification/pushover.py validate-modules:E337
lib/ansible/modules/notification/pushover.py validate-modules:E338
lib/ansible/modules/notification/rabbitmq_publish.py validate-modules:E337
lib/ansible/modules/notification/rocketchat.py validate-modules:E317
lib/ansible/modules/notification/rocketchat.py validate-modules:E337
lib/ansible/modules/notification/say.py validate-modules:E338
lib/ansible/modules/notification/sendgrid.py validate-modules:E322
lib/ansible/modules/notification/sendgrid.py validate-modules:E337
lib/ansible/modules/notification/sendgrid.py validate-modules:E338
lib/ansible/modules/notification/slack.py validate-modules:E324
lib/ansible/modules/notification/slack.py validate-modules:E337
lib/ansible/modules/notification/syslogger.py validate-modules:E337
lib/ansible/modules/notification/telegram.py validate-modules:E337
lib/ansible/modules/notification/twilio.py validate-modules:E337
lib/ansible/modules/notification/twilio.py validate-modules:E338
lib/ansible/modules/notification/typetalk.py validate-modules:E337
lib/ansible/modules/notification/typetalk.py validate-modules:E338
lib/ansible/modules/packaging/language/bower.py validate-modules:E337
lib/ansible/modules/packaging/language/bower.py validate-modules:E338
lib/ansible/modules/packaging/language/bundler.py validate-modules:E324
lib/ansible/modules/packaging/language/bundler.py validate-modules:E337
lib/ansible/modules/packaging/language/bundler.py validate-modules:E338
lib/ansible/modules/packaging/language/composer.py validate-modules:E336
lib/ansible/modules/packaging/language/composer.py validate-modules:E337
lib/ansible/modules/packaging/language/cpanm.py validate-modules:E337
lib/ansible/modules/packaging/language/cpanm.py validate-modules:E338
lib/ansible/modules/packaging/language/easy_install.py validate-modules:E324
lib/ansible/modules/packaging/language/easy_install.py validate-modules:E337
lib/ansible/modules/packaging/language/easy_install.py validate-modules:E338
lib/ansible/modules/packaging/language/gem.py validate-modules:E337
lib/ansible/modules/packaging/language/maven_artifact.py validate-modules:E324
lib/ansible/modules/packaging/language/maven_artifact.py validate-modules:E337
lib/ansible/modules/packaging/language/maven_artifact.py validate-modules:E338
lib/ansible/modules/packaging/language/pear.py validate-modules:E322
lib/ansible/modules/packaging/language/pear.py validate-modules:E326
lib/ansible/modules/packaging/language/pear.py validate-modules:E337
lib/ansible/modules/packaging/language/pear.py validate-modules:E338
lib/ansible/modules/packaging/language/pip.py pylint:blacklisted-name
lib/ansible/modules/packaging/language/yarn.py validate-modules:E337
lib/ansible/modules/packaging/language/yarn.py validate-modules:E338
lib/ansible/modules/packaging/os/apk.py validate-modules:E326
lib/ansible/modules/packaging/os/apk.py validate-modules:E337
lib/ansible/modules/packaging/os/apk.py validate-modules:E338
lib/ansible/modules/packaging/os/apt.py validate-modules:E322
lib/ansible/modules/packaging/os/apt.py validate-modules:E324
lib/ansible/modules/packaging/os/apt.py validate-modules:E336
lib/ansible/modules/packaging/os/apt.py validate-modules:E337
lib/ansible/modules/packaging/os/apt_key.py validate-modules:E322
lib/ansible/modules/packaging/os/apt_key.py validate-modules:E337
lib/ansible/modules/packaging/os/apt_repo.py validate-modules:E337
lib/ansible/modules/packaging/os/apt_repository.py validate-modules:E322
lib/ansible/modules/packaging/os/apt_repository.py validate-modules:E324
lib/ansible/modules/packaging/os/apt_repository.py validate-modules:E336
lib/ansible/modules/packaging/os/apt_repository.py validate-modules:E337
lib/ansible/modules/packaging/os/apt_rpm.py validate-modules:E322
lib/ansible/modules/packaging/os/apt_rpm.py validate-modules:E324
lib/ansible/modules/packaging/os/apt_rpm.py validate-modules:E326
lib/ansible/modules/packaging/os/apt_rpm.py validate-modules:E336
lib/ansible/modules/packaging/os/apt_rpm.py validate-modules:E337
lib/ansible/modules/packaging/os/dnf.py validate-modules:E336
lib/ansible/modules/packaging/os/dnf.py validate-modules:E337
lib/ansible/modules/packaging/os/dnf.py validate-modules:E338
lib/ansible/modules/packaging/os/dpkg_selections.py validate-modules:E338
lib/ansible/modules/packaging/os/flatpak.py validate-modules:E210
lib/ansible/modules/packaging/os/flatpak.py validate-modules:E337
lib/ansible/modules/packaging/os/flatpak_remote.py validate-modules:E210
lib/ansible/modules/packaging/os/flatpak_remote.py validate-modules:E337
lib/ansible/modules/packaging/os/homebrew.py validate-modules:E326
lib/ansible/modules/packaging/os/homebrew.py validate-modules:E336
lib/ansible/modules/packaging/os/homebrew.py validate-modules:E337
lib/ansible/modules/packaging/os/homebrew.py validate-modules:E338
lib/ansible/modules/packaging/os/homebrew_cask.py validate-modules:E326
lib/ansible/modules/packaging/os/homebrew_cask.py validate-modules:E336
lib/ansible/modules/packaging/os/homebrew_cask.py validate-modules:E337
lib/ansible/modules/packaging/os/homebrew_cask.py validate-modules:E338
lib/ansible/modules/packaging/os/homebrew_tap.py validate-modules:E337
lib/ansible/modules/packaging/os/homebrew_tap.py validate-modules:E338
lib/ansible/modules/packaging/os/layman.py validate-modules:E322
lib/ansible/modules/packaging/os/layman.py validate-modules:E338
lib/ansible/modules/packaging/os/macports.py validate-modules:E326
lib/ansible/modules/packaging/os/macports.py validate-modules:E337
lib/ansible/modules/packaging/os/macports.py validate-modules:E338
lib/ansible/modules/packaging/os/openbsd_pkg.py validate-modules:E326
lib/ansible/modules/packaging/os/openbsd_pkg.py validate-modules:E337
lib/ansible/modules/packaging/os/opkg.py validate-modules:E322
lib/ansible/modules/packaging/os/opkg.py validate-modules:E324
lib/ansible/modules/packaging/os/opkg.py validate-modules:E326
lib/ansible/modules/packaging/os/opkg.py validate-modules:E336
lib/ansible/modules/packaging/os/opkg.py validate-modules:E338
lib/ansible/modules/packaging/os/package_facts.py validate-modules:E326
lib/ansible/modules/packaging/os/package_facts.py validate-modules:E338
lib/ansible/modules/packaging/os/pacman.py validate-modules:E326
lib/ansible/modules/packaging/os/pacman.py validate-modules:E336
lib/ansible/modules/packaging/os/pacman.py validate-modules:E337
lib/ansible/modules/packaging/os/pkg5.py validate-modules:E326
lib/ansible/modules/packaging/os/pkg5.py validate-modules:E337
lib/ansible/modules/packaging/os/pkg5_publisher.py validate-modules:E337
lib/ansible/modules/packaging/os/pkg5_publisher.py validate-modules:E338
lib/ansible/modules/packaging/os/pkgin.py validate-modules:E322
lib/ansible/modules/packaging/os/pkgin.py validate-modules:E337
lib/ansible/modules/packaging/os/pkgin.py validate-modules:E338
lib/ansible/modules/packaging/os/pkgng.py validate-modules:E322
lib/ansible/modules/packaging/os/pkgng.py validate-modules:E337
lib/ansible/modules/packaging/os/pkgng.py validate-modules:E338
lib/ansible/modules/packaging/os/pkgutil.py validate-modules:E338
lib/ansible/modules/packaging/os/portage.py validate-modules:E322
lib/ansible/modules/packaging/os/portage.py validate-modules:E337
lib/ansible/modules/packaging/os/portage.py validate-modules:E338
lib/ansible/modules/packaging/os/portinstall.py validate-modules:E322
lib/ansible/modules/packaging/os/portinstall.py validate-modules:E338
lib/ansible/modules/packaging/os/pulp_repo.py validate-modules:E322
lib/ansible/modules/packaging/os/pulp_repo.py validate-modules:E324
lib/ansible/modules/packaging/os/pulp_repo.py validate-modules:E338
lib/ansible/modules/packaging/os/redhat_subscription.py validate-modules:E337
lib/ansible/modules/packaging/os/redhat_subscription.py validate-modules:E338
lib/ansible/modules/packaging/os/rhn_channel.py validate-modules:E322
lib/ansible/modules/packaging/os/rhn_channel.py validate-modules:E326
lib/ansible/modules/packaging/os/rhn_channel.py validate-modules:E337
lib/ansible/modules/packaging/os/rhsm_release.py validate-modules:E337
lib/ansible/modules/packaging/os/rhsm_repository.py validate-modules:E324
lib/ansible/modules/packaging/os/rhsm_repository.py validate-modules:E337
lib/ansible/modules/packaging/os/rhsm_repository.py validate-modules:E338
lib/ansible/modules/packaging/os/rpm_key.py validate-modules:E337
lib/ansible/modules/packaging/os/slackpkg.py validate-modules:E322
lib/ansible/modules/packaging/os/slackpkg.py validate-modules:E324
lib/ansible/modules/packaging/os/slackpkg.py validate-modules:E326
lib/ansible/modules/packaging/os/slackpkg.py validate-modules:E336
lib/ansible/modules/packaging/os/slackpkg.py validate-modules:E337
lib/ansible/modules/packaging/os/slackpkg.py validate-modules:E338
lib/ansible/modules/packaging/os/snap.py validate-modules:E337
lib/ansible/modules/packaging/os/sorcery.py validate-modules:E337
lib/ansible/modules/packaging/os/sorcery.py validate-modules:E338
lib/ansible/modules/packaging/os/svr4pkg.py validate-modules:E338
lib/ansible/modules/packaging/os/swdepot.py validate-modules:E322
lib/ansible/modules/packaging/os/swdepot.py validate-modules:E338
lib/ansible/modules/packaging/os/swupd.py validate-modules:E337
lib/ansible/modules/packaging/os/urpmi.py validate-modules:E322
lib/ansible/modules/packaging/os/urpmi.py validate-modules:E324
lib/ansible/modules/packaging/os/urpmi.py validate-modules:E326
lib/ansible/modules/packaging/os/urpmi.py validate-modules:E336
lib/ansible/modules/packaging/os/urpmi.py validate-modules:E337
lib/ansible/modules/packaging/os/xbps.py validate-modules:E322
lib/ansible/modules/packaging/os/xbps.py validate-modules:E326
lib/ansible/modules/packaging/os/xbps.py validate-modules:E336
lib/ansible/modules/packaging/os/xbps.py validate-modules:E337
lib/ansible/modules/packaging/os/xbps.py validate-modules:E338
lib/ansible/modules/packaging/os/yum.py pylint:blacklisted-name
lib/ansible/modules/packaging/os/yum.py validate-modules:E322
lib/ansible/modules/packaging/os/yum.py validate-modules:E324
lib/ansible/modules/packaging/os/yum.py validate-modules:E336
lib/ansible/modules/packaging/os/yum.py validate-modules:E337
lib/ansible/modules/packaging/os/yum.py validate-modules:E338
lib/ansible/modules/packaging/os/yum_repository.py validate-modules:E322
lib/ansible/modules/packaging/os/yum_repository.py validate-modules:E324
lib/ansible/modules/packaging/os/yum_repository.py validate-modules:E337
lib/ansible/modules/packaging/os/yum_repository.py validate-modules:E338
lib/ansible/modules/packaging/os/zypper.py validate-modules:E326
lib/ansible/modules/packaging/os/zypper.py validate-modules:E337
lib/ansible/modules/packaging/os/zypper.py validate-modules:E338
lib/ansible/modules/packaging/os/zypper_repository.py validate-modules:E337
lib/ansible/modules/packaging/os/zypper_repository.py validate-modules:E338
lib/ansible/modules/remote_management/cobbler/cobbler_sync.py validate-modules:E337
lib/ansible/modules/remote_management/cobbler/cobbler_system.py validate-modules:E337
lib/ansible/modules/remote_management/cpm/cpm_plugconfig.py validate-modules:E337
lib/ansible/modules/remote_management/cpm/cpm_plugconfig.py validate-modules:E338
lib/ansible/modules/remote_management/cpm/cpm_plugcontrol.py validate-modules:E337
lib/ansible/modules/remote_management/cpm/cpm_plugcontrol.py validate-modules:E338
lib/ansible/modules/remote_management/cpm/cpm_serial_port_config.py validate-modules:E337
lib/ansible/modules/remote_management/cpm/cpm_serial_port_info.py validate-modules:E337
lib/ansible/modules/remote_management/cpm/cpm_user.py validate-modules:E337
lib/ansible/modules/remote_management/cpm/cpm_user.py validate-modules:E338
lib/ansible/modules/remote_management/dellemc/idrac_server_config_profile.py validate-modules:E337
lib/ansible/modules/remote_management/dellemc/idrac_server_config_profile.py validate-modules:E338
lib/ansible/modules/remote_management/foreman/_foreman.py validate-modules:E337
lib/ansible/modules/remote_management/foreman/_katello.py validate-modules:E337
lib/ansible/modules/remote_management/hpilo/hpilo_boot.py validate-modules:E326
lib/ansible/modules/remote_management/hpilo/hpilo_boot.py validate-modules:E337
lib/ansible/modules/remote_management/hpilo/hpilo_facts.py validate-modules:E337
lib/ansible/modules/remote_management/hpilo/hponcfg.py validate-modules:E337
lib/ansible/modules/remote_management/imc/imc_rest.py validate-modules:E337
lib/ansible/modules/remote_management/intersight/intersight_rest_api.py validate-modules:E337
lib/ansible/modules/remote_management/ipmi/ipmi_boot.py validate-modules:E326
lib/ansible/modules/remote_management/ipmi/ipmi_boot.py validate-modules:E337
lib/ansible/modules/remote_management/ipmi/ipmi_boot.py validate-modules:E338
lib/ansible/modules/remote_management/ipmi/ipmi_power.py validate-modules:E326
lib/ansible/modules/remote_management/ipmi/ipmi_power.py validate-modules:E337
lib/ansible/modules/remote_management/ipmi/ipmi_power.py validate-modules:E338
lib/ansible/modules/remote_management/lxca/lxca_cmms.py validate-modules:E338
lib/ansible/modules/remote_management/lxca/lxca_nodes.py validate-modules:E338
lib/ansible/modules/remote_management/manageiq/manageiq_alert_profiles.py validate-modules:E335
lib/ansible/modules/remote_management/manageiq/manageiq_alert_profiles.py validate-modules:E337
lib/ansible/modules/remote_management/manageiq/manageiq_alert_profiles.py validate-modules:E338
lib/ansible/modules/remote_management/manageiq/manageiq_alerts.py validate-modules:E335
lib/ansible/modules/remote_management/manageiq/manageiq_alerts.py validate-modules:E337
lib/ansible/modules/remote_management/manageiq/manageiq_alerts.py validate-modules:E338
lib/ansible/modules/remote_management/manageiq/manageiq_group.py validate-modules:E335
lib/ansible/modules/remote_management/manageiq/manageiq_group.py validate-modules:E337
lib/ansible/modules/remote_management/manageiq/manageiq_group.py validate-modules:E338
lib/ansible/modules/remote_management/manageiq/manageiq_policies.py validate-modules:E335
lib/ansible/modules/remote_management/manageiq/manageiq_policies.py validate-modules:E337
lib/ansible/modules/remote_management/manageiq/manageiq_provider.py validate-modules:E322
lib/ansible/modules/remote_management/manageiq/manageiq_provider.py validate-modules:E324
lib/ansible/modules/remote_management/manageiq/manageiq_provider.py validate-modules:E326
lib/ansible/modules/remote_management/manageiq/manageiq_provider.py validate-modules:E335
lib/ansible/modules/remote_management/manageiq/manageiq_provider.py validate-modules:E337
lib/ansible/modules/remote_management/manageiq/manageiq_provider.py validate-modules:E338
lib/ansible/modules/remote_management/manageiq/manageiq_tags.py validate-modules:E335
lib/ansible/modules/remote_management/manageiq/manageiq_tags.py validate-modules:E337
lib/ansible/modules/remote_management/manageiq/manageiq_tenant.py validate-modules:E335
lib/ansible/modules/remote_management/manageiq/manageiq_tenant.py validate-modules:E337
lib/ansible/modules/remote_management/manageiq/manageiq_tenant.py validate-modules:E338
lib/ansible/modules/remote_management/manageiq/manageiq_user.py validate-modules:E335
lib/ansible/modules/remote_management/manageiq/manageiq_user.py validate-modules:E337
lib/ansible/modules/remote_management/manageiq/manageiq_user.py validate-modules:E338
lib/ansible/modules/remote_management/oneview/oneview_datacenter_facts.py validate-modules:E322
lib/ansible/modules/remote_management/oneview/oneview_datacenter_facts.py validate-modules:E337
lib/ansible/modules/remote_management/oneview/oneview_enclosure_facts.py validate-modules:E322
lib/ansible/modules/remote_management/oneview/oneview_enclosure_facts.py validate-modules:E337
lib/ansible/modules/remote_management/oneview/oneview_ethernet_network.py validate-modules:E322
lib/ansible/modules/remote_management/oneview/oneview_ethernet_network.py validate-modules:E337
lib/ansible/modules/remote_management/oneview/oneview_ethernet_network_facts.py validate-modules:E322
lib/ansible/modules/remote_management/oneview/oneview_ethernet_network_facts.py validate-modules:E337
lib/ansible/modules/remote_management/oneview/oneview_fc_network.py validate-modules:E322
lib/ansible/modules/remote_management/oneview/oneview_fc_network.py validate-modules:E337
lib/ansible/modules/remote_management/oneview/oneview_fc_network.py validate-modules:E338
lib/ansible/modules/remote_management/oneview/oneview_fc_network_facts.py validate-modules:E322
lib/ansible/modules/remote_management/oneview/oneview_fc_network_facts.py validate-modules:E337
lib/ansible/modules/remote_management/oneview/oneview_fcoe_network.py validate-modules:E322
lib/ansible/modules/remote_management/oneview/oneview_fcoe_network.py validate-modules:E337
lib/ansible/modules/remote_management/oneview/oneview_fcoe_network.py validate-modules:E338
lib/ansible/modules/remote_management/oneview/oneview_fcoe_network_facts.py validate-modules:E322
lib/ansible/modules/remote_management/oneview/oneview_fcoe_network_facts.py validate-modules:E337
lib/ansible/modules/remote_management/oneview/oneview_logical_interconnect_group.py validate-modules:E322
lib/ansible/modules/remote_management/oneview/oneview_logical_interconnect_group.py validate-modules:E337
lib/ansible/modules/remote_management/oneview/oneview_logical_interconnect_group.py validate-modules:E338
lib/ansible/modules/remote_management/oneview/oneview_logical_interconnect_group_facts.py validate-modules:E322
lib/ansible/modules/remote_management/oneview/oneview_logical_interconnect_group_facts.py validate-modules:E337
lib/ansible/modules/remote_management/oneview/oneview_network_set.py validate-modules:E322
lib/ansible/modules/remote_management/oneview/oneview_network_set.py validate-modules:E337
lib/ansible/modules/remote_management/oneview/oneview_network_set.py validate-modules:E338
lib/ansible/modules/remote_management/oneview/oneview_network_set_facts.py validate-modules:E322
lib/ansible/modules/remote_management/oneview/oneview_network_set_facts.py validate-modules:E337
lib/ansible/modules/remote_management/oneview/oneview_san_manager.py validate-modules:E322
lib/ansible/modules/remote_management/oneview/oneview_san_manager.py validate-modules:E337
lib/ansible/modules/remote_management/oneview/oneview_san_manager_facts.py validate-modules:E322
lib/ansible/modules/remote_management/oneview/oneview_san_manager_facts.py validate-modules:E337
lib/ansible/modules/remote_management/stacki/stacki_host.py validate-modules:E317
lib/ansible/modules/remote_management/stacki/stacki_host.py validate-modules:E322
lib/ansible/modules/remote_management/stacki/stacki_host.py validate-modules:E324
lib/ansible/modules/remote_management/stacki/stacki_host.py validate-modules:E326
lib/ansible/modules/remote_management/stacki/stacki_host.py validate-modules:E337
lib/ansible/modules/remote_management/ucs/ucs_disk_group_policy.py validate-modules:E323
lib/ansible/modules/remote_management/ucs/ucs_disk_group_policy.py validate-modules:E324
lib/ansible/modules/remote_management/ucs/ucs_disk_group_policy.py validate-modules:E326
lib/ansible/modules/remote_management/ucs/ucs_disk_group_policy.py validate-modules:E337
lib/ansible/modules/remote_management/ucs/ucs_ip_pool.py validate-modules:E337
lib/ansible/modules/remote_management/ucs/ucs_lan_connectivity.py validate-modules:E337
lib/ansible/modules/remote_management/ucs/ucs_mac_pool.py validate-modules:E337
lib/ansible/modules/remote_management/ucs/ucs_managed_objects.py validate-modules:E322
lib/ansible/modules/remote_management/ucs/ucs_managed_objects.py validate-modules:E337
lib/ansible/modules/remote_management/ucs/ucs_ntp_server.py validate-modules:E337
lib/ansible/modules/remote_management/ucs/ucs_san_connectivity.py validate-modules:E322
lib/ansible/modules/remote_management/ucs/ucs_san_connectivity.py validate-modules:E323
lib/ansible/modules/remote_management/ucs/ucs_san_connectivity.py validate-modules:E337
lib/ansible/modules/remote_management/ucs/ucs_service_profile_template.py validate-modules:E337
lib/ansible/modules/remote_management/ucs/ucs_storage_profile.py validate-modules:E325
lib/ansible/modules/remote_management/ucs/ucs_storage_profile.py validate-modules:E326
lib/ansible/modules/remote_management/ucs/ucs_storage_profile.py validate-modules:E337
lib/ansible/modules/remote_management/ucs/ucs_timezone.py validate-modules:E337
lib/ansible/modules/remote_management/ucs/ucs_uuid_pool.py validate-modules:E337
lib/ansible/modules/remote_management/ucs/ucs_vhba_template.py validate-modules:E322
lib/ansible/modules/remote_management/ucs/ucs_vhba_template.py validate-modules:E323
lib/ansible/modules/remote_management/ucs/ucs_vhba_template.py validate-modules:E337
lib/ansible/modules/remote_management/ucs/ucs_vlans.py validate-modules:E337
lib/ansible/modules/remote_management/ucs/ucs_vnic_template.py validate-modules:E326
lib/ansible/modules/remote_management/ucs/ucs_vnic_template.py validate-modules:E337
lib/ansible/modules/remote_management/ucs/ucs_vsans.py validate-modules:E322
lib/ansible/modules/remote_management/ucs/ucs_vsans.py validate-modules:E337
lib/ansible/modules/remote_management/ucs/ucs_wwn_pool.py validate-modules:E322
lib/ansible/modules/remote_management/ucs/ucs_wwn_pool.py validate-modules:E323
lib/ansible/modules/remote_management/ucs/ucs_wwn_pool.py validate-modules:E337
lib/ansible/modules/remote_management/wakeonlan.py validate-modules:E337
lib/ansible/modules/source_control/_github_hooks.py validate-modules:E338
lib/ansible/modules/source_control/bzr.py validate-modules:E337
lib/ansible/modules/source_control/git.py pylint:blacklisted-name
lib/ansible/modules/source_control/git.py use-argspec-type-path
lib/ansible/modules/source_control/git.py validate-modules:E337
lib/ansible/modules/source_control/git.py validate-modules:E338
lib/ansible/modules/source_control/git_config.py validate-modules:E337
lib/ansible/modules/source_control/git_config.py validate-modules:E338
lib/ansible/modules/source_control/github_deploy_key.py validate-modules:E336
lib/ansible/modules/source_control/github_deploy_key.py validate-modules:E337
lib/ansible/modules/source_control/github_deploy_key.py validate-modules:E338
lib/ansible/modules/source_control/github_issue.py validate-modules:E337
lib/ansible/modules/source_control/github_issue.py validate-modules:E338
lib/ansible/modules/source_control/github_key.py validate-modules:E338
lib/ansible/modules/source_control/github_release.py validate-modules:E337
lib/ansible/modules/source_control/github_release.py validate-modules:E338
lib/ansible/modules/source_control/github_webhook.py validate-modules:E337
lib/ansible/modules/source_control/github_webhook_info.py validate-modules:E337
lib/ansible/modules/source_control/hg.py validate-modules:E337
lib/ansible/modules/source_control/subversion.py validate-modules:E322
lib/ansible/modules/source_control/subversion.py validate-modules:E337
lib/ansible/modules/storage/emc/emc_vnx_sg_member.py validate-modules:E337
lib/ansible/modules/storage/emc/emc_vnx_sg_member.py validate-modules:E338
lib/ansible/modules/storage/glusterfs/gluster_heal_facts.py validate-modules:E337
lib/ansible/modules/storage/glusterfs/gluster_peer.py validate-modules:E337
lib/ansible/modules/storage/glusterfs/gluster_volume.py validate-modules:E337
lib/ansible/modules/storage/ibm/ibm_sa_domain.py validate-modules:E338
lib/ansible/modules/storage/ibm/ibm_sa_host.py validate-modules:E338
lib/ansible/modules/storage/ibm/ibm_sa_host_ports.py validate-modules:E338
lib/ansible/modules/storage/ibm/ibm_sa_pool.py validate-modules:E338
lib/ansible/modules/storage/ibm/ibm_sa_vol.py validate-modules:E338
lib/ansible/modules/storage/ibm/ibm_sa_vol_map.py validate-modules:E338
lib/ansible/modules/storage/infinidat/infini_export.py validate-modules:E323
lib/ansible/modules/storage/infinidat/infini_export.py validate-modules:E324
lib/ansible/modules/storage/infinidat/infini_export.py validate-modules:E337
lib/ansible/modules/storage/infinidat/infini_export.py validate-modules:E338
lib/ansible/modules/storage/infinidat/infini_export_client.py validate-modules:E323
lib/ansible/modules/storage/infinidat/infini_export_client.py validate-modules:E338
lib/ansible/modules/storage/infinidat/infini_fs.py validate-modules:E338
lib/ansible/modules/storage/infinidat/infini_host.py validate-modules:E337
lib/ansible/modules/storage/infinidat/infini_host.py validate-modules:E338
lib/ansible/modules/storage/infinidat/infini_pool.py validate-modules:E338
lib/ansible/modules/storage/infinidat/infini_vol.py validate-modules:E338
lib/ansible/modules/storage/netapp/_na_cdot_aggregate.py validate-modules:E337
lib/ansible/modules/storage/netapp/_na_cdot_aggregate.py validate-modules:E338
lib/ansible/modules/storage/netapp/_na_cdot_license.py validate-modules:E329
lib/ansible/modules/storage/netapp/_na_cdot_license.py validate-modules:E337
lib/ansible/modules/storage/netapp/_na_cdot_lun.py validate-modules:E337
lib/ansible/modules/storage/netapp/_na_cdot_lun.py validate-modules:E338
lib/ansible/modules/storage/netapp/_na_cdot_qtree.py validate-modules:E337
lib/ansible/modules/storage/netapp/_na_cdot_qtree.py validate-modules:E338
lib/ansible/modules/storage/netapp/_na_cdot_svm.py validate-modules:E337
lib/ansible/modules/storage/netapp/_na_cdot_svm.py validate-modules:E338
lib/ansible/modules/storage/netapp/_na_cdot_user.py validate-modules:E337
lib/ansible/modules/storage/netapp/_na_cdot_user.py validate-modules:E338
lib/ansible/modules/storage/netapp/_na_cdot_user_role.py validate-modules:E337
lib/ansible/modules/storage/netapp/_na_cdot_user_role.py validate-modules:E338
lib/ansible/modules/storage/netapp/_na_cdot_volume.py validate-modules:E317
lib/ansible/modules/storage/netapp/_na_cdot_volume.py validate-modules:E322
lib/ansible/modules/storage/netapp/_na_cdot_volume.py validate-modules:E324
lib/ansible/modules/storage/netapp/_na_cdot_volume.py validate-modules:E337
lib/ansible/modules/storage/netapp/_na_cdot_volume.py validate-modules:E338
lib/ansible/modules/storage/netapp/_sf_account_manager.py validate-modules:E337
lib/ansible/modules/storage/netapp/_sf_account_manager.py validate-modules:E338
lib/ansible/modules/storage/netapp/_sf_check_connections.py validate-modules:E337
lib/ansible/modules/storage/netapp/_sf_snapshot_schedule_manager.py validate-modules:E337
lib/ansible/modules/storage/netapp/_sf_snapshot_schedule_manager.py validate-modules:E338
lib/ansible/modules/storage/netapp/_sf_volume_access_group_manager.py validate-modules:E337
lib/ansible/modules/storage/netapp/_sf_volume_access_group_manager.py validate-modules:E338
lib/ansible/modules/storage/netapp/_sf_volume_manager.py validate-modules:E322
lib/ansible/modules/storage/netapp/_sf_volume_manager.py validate-modules:E336
lib/ansible/modules/storage/netapp/_sf_volume_manager.py validate-modules:E337
lib/ansible/modules/storage/netapp/_sf_volume_manager.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_elementsw_access_group.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_elementsw_access_group.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_elementsw_account.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_elementsw_account.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_elementsw_admin_users.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_elementsw_admin_users.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_elementsw_backup.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_elementsw_backup.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_elementsw_check_connections.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_elementsw_cluster.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_elementsw_cluster_config.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_elementsw_cluster_pair.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_elementsw_cluster_pair.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_elementsw_cluster_snmp.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_elementsw_drive.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_elementsw_drive.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_elementsw_initiators.py validate-modules:E322
lib/ansible/modules/storage/netapp/na_elementsw_initiators.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_elementsw_initiators.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_elementsw_ldap.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_elementsw_network_interfaces.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_elementsw_node.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_elementsw_node.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_elementsw_snapshot.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_elementsw_snapshot.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_elementsw_snapshot_restore.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_elementsw_snapshot_schedule.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_elementsw_snapshot_schedule.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_elementsw_vlan.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_elementsw_vlan.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_elementsw_volume.py validate-modules:E336
lib/ansible/modules/storage/netapp/na_elementsw_volume.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_elementsw_volume.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_elementsw_volume_clone.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_elementsw_volume_clone.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_elementsw_volume_pair.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_elementsw_volume_pair.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_aggregate.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_aggregate.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_autosupport.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_autosupport.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_broadcast_domain.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_broadcast_domain.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_broadcast_domain_ports.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_broadcast_domain_ports.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_cg_snapshot.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_cg_snapshot.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_cifs.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_cifs_acl.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_cifs_server.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_cifs_server.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_cluster.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_cluster.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_cluster_ha.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_cluster_peer.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_command.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_disks.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_dns.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_dns.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_export_policy.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_export_policy_rule.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_fcp.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_fcp.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_firewall_policy.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_firewall_policy.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_flexcache.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_gather_facts.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_gather_facts.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_igroup.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_igroup.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_igroup_initiator.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_igroup_initiator.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_interface.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_interface.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_ipspace.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_ipspace.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_iscsi.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_iscsi.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_job_schedule.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_job_schedule.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_license.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_license.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_lun.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_lun.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_lun_copy.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_lun_copy.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_lun_map.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_lun_map.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_motd.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_net_ifgrp.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_net_ifgrp.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_net_port.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_net_port.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_net_routes.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_net_routes.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_net_subnet.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_net_subnet.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_net_vlan.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_net_vlan.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_nfs.py validate-modules:E336
lib/ansible/modules/storage/netapp/na_ontap_nfs.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_nfs.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_node.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_ntp.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_ntp.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_nvme.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_nvme_namespace.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_nvme_subsystem.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_portset.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_portset.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_qos_policy_group.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_qtree.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_security_key_manager.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_security_key_manager.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_service_processor_network.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_service_processor_network.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_snapmirror.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_snapshot.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_snapshot.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_snapshot_policy.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_snapshot_policy.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_snmp.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_software_update.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_svm.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_svm.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_svm_options.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_ucadapter.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_ucadapter.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_unix_group.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_unix_group.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_unix_user.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_unix_user.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_user.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_user.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_user_role.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_user_role.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_volume.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_volume.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_volume_clone.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_vscan_on_access_policy.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_vscan_on_access_policy.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_vscan_on_demand_task.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_vscan_on_demand_task.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_vscan_scanner_pool.py validate-modules:E337
lib/ansible/modules/storage/netapp/na_ontap_vscan_scanner_pool.py validate-modules:E338
lib/ansible/modules/storage/netapp/na_ontap_vserver_peer.py validate-modules:E337
lib/ansible/modules/storage/netapp/netapp_e_alerts.py validate-modules:E337
lib/ansible/modules/storage/netapp/netapp_e_amg.py validate-modules:E322
lib/ansible/modules/storage/netapp/netapp_e_amg.py validate-modules:E337
lib/ansible/modules/storage/netapp/netapp_e_amg.py validate-modules:E338
lib/ansible/modules/storage/netapp/netapp_e_amg_role.py validate-modules:E322
lib/ansible/modules/storage/netapp/netapp_e_amg_role.py validate-modules:E337
lib/ansible/modules/storage/netapp/netapp_e_amg_role.py validate-modules:E338
lib/ansible/modules/storage/netapp/netapp_e_amg_sync.py validate-modules:E337
lib/ansible/modules/storage/netapp/netapp_e_asup.py validate-modules:E337
lib/ansible/modules/storage/netapp/netapp_e_auditlog.py validate-modules:E337
lib/ansible/modules/storage/netapp/netapp_e_auth.py validate-modules:E337
lib/ansible/modules/storage/netapp/netapp_e_auth.py validate-modules:E338
lib/ansible/modules/storage/netapp/netapp_e_facts.py validate-modules:E337
lib/ansible/modules/storage/netapp/netapp_e_facts.py validate-modules:E338
lib/ansible/modules/storage/netapp/netapp_e_flashcache.py validate-modules:E322
lib/ansible/modules/storage/netapp/netapp_e_flashcache.py validate-modules:E326
lib/ansible/modules/storage/netapp/netapp_e_flashcache.py validate-modules:E337
lib/ansible/modules/storage/netapp/netapp_e_flashcache.py validate-modules:E338
lib/ansible/modules/storage/netapp/netapp_e_global.py validate-modules:E337
lib/ansible/modules/storage/netapp/netapp_e_host.py validate-modules:E337
lib/ansible/modules/storage/netapp/netapp_e_hostgroup.py validate-modules:E337
lib/ansible/modules/storage/netapp/netapp_e_hostgroup.py validate-modules:E338
lib/ansible/modules/storage/netapp/netapp_e_iscsi_interface.py validate-modules:E337
lib/ansible/modules/storage/netapp/netapp_e_iscsi_target.py validate-modules:E337
lib/ansible/modules/storage/netapp/netapp_e_ldap.py validate-modules:E337
lib/ansible/modules/storage/netapp/netapp_e_lun_mapping.py validate-modules:E337
lib/ansible/modules/storage/netapp/netapp_e_lun_mapping.py validate-modules:E338
lib/ansible/modules/storage/netapp/netapp_e_mgmt_interface.py validate-modules:E337
lib/ansible/modules/storage/netapp/netapp_e_snapshot_group.py validate-modules:E322
lib/ansible/modules/storage/netapp/netapp_e_snapshot_group.py validate-modules:E326
lib/ansible/modules/storage/netapp/netapp_e_snapshot_group.py validate-modules:E337
lib/ansible/modules/storage/netapp/netapp_e_snapshot_group.py validate-modules:E338
lib/ansible/modules/storage/netapp/netapp_e_snapshot_images.py validate-modules:E322
lib/ansible/modules/storage/netapp/netapp_e_snapshot_images.py validate-modules:E337
lib/ansible/modules/storage/netapp/netapp_e_snapshot_images.py validate-modules:E338
lib/ansible/modules/storage/netapp/netapp_e_snapshot_volume.py validate-modules:E324
lib/ansible/modules/storage/netapp/netapp_e_snapshot_volume.py validate-modules:E326
lib/ansible/modules/storage/netapp/netapp_e_snapshot_volume.py validate-modules:E337
lib/ansible/modules/storage/netapp/netapp_e_storage_system.py validate-modules:E322
lib/ansible/modules/storage/netapp/netapp_e_storage_system.py validate-modules:E324
lib/ansible/modules/storage/netapp/netapp_e_storage_system.py validate-modules:E337
lib/ansible/modules/storage/netapp/netapp_e_storage_system.py validate-modules:E338
lib/ansible/modules/storage/netapp/netapp_e_storagepool.py validate-modules:E322
lib/ansible/modules/storage/netapp/netapp_e_storagepool.py validate-modules:E326
lib/ansible/modules/storage/netapp/netapp_e_storagepool.py validate-modules:E337
lib/ansible/modules/storage/netapp/netapp_e_storagepool.py validate-modules:E338
lib/ansible/modules/storage/netapp/netapp_e_syslog.py validate-modules:E337
lib/ansible/modules/storage/netapp/netapp_e_syslog.py validate-modules:E338
lib/ansible/modules/storage/netapp/netapp_e_volume.py validate-modules:E324
lib/ansible/modules/storage/netapp/netapp_e_volume.py validate-modules:E327
lib/ansible/modules/storage/netapp/netapp_e_volume.py validate-modules:E337
lib/ansible/modules/storage/netapp/netapp_e_volume.py validate-modules:E338
lib/ansible/modules/storage/netapp/netapp_e_volume_copy.py validate-modules:E322
lib/ansible/modules/storage/netapp/netapp_e_volume_copy.py validate-modules:E323
lib/ansible/modules/storage/netapp/netapp_e_volume_copy.py validate-modules:E324
lib/ansible/modules/storage/netapp/netapp_e_volume_copy.py validate-modules:E326
lib/ansible/modules/storage/netapp/netapp_e_volume_copy.py validate-modules:E335
lib/ansible/modules/storage/netapp/netapp_e_volume_copy.py validate-modules:E337
lib/ansible/modules/storage/purestorage/purefa_dsrole.py validate-modules:E337
lib/ansible/modules/storage/purestorage/purefa_pgsnap.py validate-modules:E337
lib/ansible/modules/storage/purestorage/purefb_fs.py validate-modules:E324
lib/ansible/modules/storage/zfs/zfs.py validate-modules:E337
lib/ansible/modules/storage/zfs/zfs_delegate_admin.py validate-modules:E337
lib/ansible/modules/storage/zfs/zfs_facts.py validate-modules:E323
lib/ansible/modules/storage/zfs/zfs_facts.py validate-modules:E337
lib/ansible/modules/storage/zfs/zpool_facts.py validate-modules:E323
lib/ansible/modules/storage/zfs/zpool_facts.py validate-modules:E337
lib/ansible/modules/system/alternatives.py pylint:blacklisted-name
lib/ansible/modules/system/authorized_key.py validate-modules:E337
lib/ansible/modules/system/beadm.py pylint:blacklisted-name
lib/ansible/modules/system/cronvar.py pylint:blacklisted-name
lib/ansible/modules/system/dconf.py pylint:blacklisted-name
lib/ansible/modules/system/dconf.py validate-modules:E337
lib/ansible/modules/system/dconf.py validate-modules:E338
lib/ansible/modules/system/filesystem.py pylint:blacklisted-name
lib/ansible/modules/system/filesystem.py validate-modules:E338
lib/ansible/modules/system/gconftool2.py pylint:blacklisted-name
lib/ansible/modules/system/gconftool2.py validate-modules:E337
lib/ansible/modules/system/getent.py validate-modules:E337
lib/ansible/modules/system/hostname.py validate-modules:E337
lib/ansible/modules/system/interfaces_file.py pylint:blacklisted-name
lib/ansible/modules/system/interfaces_file.py validate-modules:E337
lib/ansible/modules/system/iptables.py pylint:blacklisted-name
lib/ansible/modules/system/java_cert.py pylint:blacklisted-name
lib/ansible/modules/system/java_keystore.py validate-modules:E338
lib/ansible/modules/system/kernel_blacklist.py validate-modules:E337
lib/ansible/modules/system/known_hosts.py validate-modules:E324
lib/ansible/modules/system/known_hosts.py validate-modules:E337
lib/ansible/modules/system/known_hosts.py validate-modules:E338
lib/ansible/modules/system/locale_gen.py validate-modules:E337
lib/ansible/modules/system/lvg.py pylint:blacklisted-name
lib/ansible/modules/system/lvol.py pylint:blacklisted-name
lib/ansible/modules/system/lvol.py validate-modules:E337
lib/ansible/modules/system/mksysb.py validate-modules:E338
lib/ansible/modules/system/modprobe.py validate-modules:E337
lib/ansible/modules/system/nosh.py validate-modules:E337
lib/ansible/modules/system/nosh.py validate-modules:E338
lib/ansible/modules/system/openwrt_init.py validate-modules:E337
lib/ansible/modules/system/openwrt_init.py validate-modules:E338
lib/ansible/modules/system/pam_limits.py validate-modules:E337
lib/ansible/modules/system/parted.py pylint:blacklisted-name
lib/ansible/modules/system/puppet.py use-argspec-type-path
lib/ansible/modules/system/puppet.py validate-modules:E322
lib/ansible/modules/system/puppet.py validate-modules:E336
lib/ansible/modules/system/puppet.py validate-modules:E337
lib/ansible/modules/system/python_requirements_info.py validate-modules:E337
lib/ansible/modules/system/runit.py validate-modules:E322
lib/ansible/modules/system/runit.py validate-modules:E324
lib/ansible/modules/system/runit.py validate-modules:E337
lib/ansible/modules/system/seboolean.py validate-modules:E337
lib/ansible/modules/system/selinux.py validate-modules:E337
lib/ansible/modules/system/selogin.py validate-modules:E337
lib/ansible/modules/system/service.py validate-modules:E210
lib/ansible/modules/system/service.py validate-modules:E323
lib/ansible/modules/system/setup.py validate-modules:E337
lib/ansible/modules/system/setup.py validate-modules:E338
lib/ansible/modules/system/sysctl.py validate-modules:E337
lib/ansible/modules/system/sysctl.py validate-modules:E338
lib/ansible/modules/system/syspatch.py validate-modules:E337
lib/ansible/modules/system/systemd.py validate-modules:E336
lib/ansible/modules/system/systemd.py validate-modules:E337
lib/ansible/modules/system/sysvinit.py validate-modules:E337
lib/ansible/modules/system/timezone.py pylint:blacklisted-name
lib/ansible/modules/system/user.py validate-modules:E210
lib/ansible/modules/system/user.py validate-modules:E324
lib/ansible/modules/system/user.py validate-modules:E327
lib/ansible/modules/system/xfconf.py validate-modules:E337
lib/ansible/modules/utilities/helper/_accelerate.py ansible-doc
lib/ansible/modules/utilities/logic/async_status.py use-argspec-type-path
lib/ansible/modules/utilities/logic/async_status.py validate-modules!skip
lib/ansible/modules/utilities/logic/async_wrapper.py use-argspec-type-path
lib/ansible/modules/utilities/logic/wait_for.py pylint:blacklisted-name
lib/ansible/modules/web_infrastructure/ansible_tower/tower_credential.py validate-modules:E326
lib/ansible/modules/web_infrastructure/ansible_tower/tower_credential_type.py validate-modules:E337
lib/ansible/modules/web_infrastructure/ansible_tower/tower_credential_type.py validate-modules:E338
lib/ansible/modules/web_infrastructure/ansible_tower/tower_group.py use-argspec-type-path
lib/ansible/modules/web_infrastructure/ansible_tower/tower_group.py validate-modules:E324
lib/ansible/modules/web_infrastructure/ansible_tower/tower_group.py validate-modules:E338
lib/ansible/modules/web_infrastructure/ansible_tower/tower_host.py use-argspec-type-path
lib/ansible/modules/web_infrastructure/ansible_tower/tower_host.py validate-modules:E338
lib/ansible/modules/web_infrastructure/ansible_tower/tower_inventory.py validate-modules:E338
lib/ansible/modules/web_infrastructure/ansible_tower/tower_inventory_source.py validate-modules:E337
lib/ansible/modules/web_infrastructure/ansible_tower/tower_inventory_source.py validate-modules:E338
lib/ansible/modules/web_infrastructure/ansible_tower/tower_job_cancel.py validate-modules:E337
lib/ansible/modules/web_infrastructure/ansible_tower/tower_job_launch.py validate-modules:E323
lib/ansible/modules/web_infrastructure/ansible_tower/tower_job_launch.py validate-modules:E337
lib/ansible/modules/web_infrastructure/ansible_tower/tower_job_launch.py validate-modules:E338
lib/ansible/modules/web_infrastructure/ansible_tower/tower_job_list.py validate-modules:E337
lib/ansible/modules/web_infrastructure/ansible_tower/tower_job_list.py validate-modules:E338
lib/ansible/modules/web_infrastructure/ansible_tower/tower_job_template.py validate-modules:E322
lib/ansible/modules/web_infrastructure/ansible_tower/tower_job_template.py validate-modules:E337
lib/ansible/modules/web_infrastructure/ansible_tower/tower_job_template.py validate-modules:E338
lib/ansible/modules/web_infrastructure/ansible_tower/tower_job_wait.py validate-modules:E337
lib/ansible/modules/web_infrastructure/ansible_tower/tower_label.py validate-modules:E338
lib/ansible/modules/web_infrastructure/ansible_tower/tower_notification.py validate-modules:E337
lib/ansible/modules/web_infrastructure/ansible_tower/tower_notification.py validate-modules:E338
lib/ansible/modules/web_infrastructure/ansible_tower/tower_organization.py validate-modules:E338
lib/ansible/modules/web_infrastructure/ansible_tower/tower_project.py validate-modules:E337
lib/ansible/modules/web_infrastructure/ansible_tower/tower_project.py validate-modules:E338
lib/ansible/modules/web_infrastructure/ansible_tower/tower_receive.py validate-modules:E337
lib/ansible/modules/web_infrastructure/ansible_tower/tower_role.py validate-modules:E338
lib/ansible/modules/web_infrastructure/ansible_tower/tower_send.py validate-modules:E337
lib/ansible/modules/web_infrastructure/ansible_tower/tower_send.py validate-modules:E338
lib/ansible/modules/web_infrastructure/ansible_tower/tower_settings.py validate-modules:E338
lib/ansible/modules/web_infrastructure/ansible_tower/tower_team.py validate-modules:E322
lib/ansible/modules/web_infrastructure/ansible_tower/tower_team.py validate-modules:E338
lib/ansible/modules/web_infrastructure/ansible_tower/tower_user.py validate-modules:E338
lib/ansible/modules/web_infrastructure/ansible_tower/tower_workflow_launch.py validate-modules:E337
lib/ansible/modules/web_infrastructure/ansible_tower/tower_workflow_launch.py validate-modules:E338
lib/ansible/modules/web_infrastructure/ansible_tower/tower_workflow_template.py validate-modules:E338
lib/ansible/modules/web_infrastructure/apache2_mod_proxy.py validate-modules:E317
lib/ansible/modules/web_infrastructure/apache2_mod_proxy.py validate-modules:E326
lib/ansible/modules/web_infrastructure/apache2_mod_proxy.py validate-modules:E337
lib/ansible/modules/web_infrastructure/apache2_module.py validate-modules:E337
lib/ansible/modules/web_infrastructure/apache2_module.py validate-modules:E338
lib/ansible/modules/web_infrastructure/deploy_helper.py validate-modules:E337
lib/ansible/modules/web_infrastructure/deploy_helper.py validate-modules:E338
lib/ansible/modules/web_infrastructure/django_manage.py validate-modules:E317
lib/ansible/modules/web_infrastructure/django_manage.py validate-modules:E322
lib/ansible/modules/web_infrastructure/django_manage.py validate-modules:E326
lib/ansible/modules/web_infrastructure/django_manage.py validate-modules:E337
lib/ansible/modules/web_infrastructure/django_manage.py validate-modules:E338
lib/ansible/modules/web_infrastructure/ejabberd_user.py validate-modules:E337
lib/ansible/modules/web_infrastructure/ejabberd_user.py validate-modules:E338
lib/ansible/modules/web_infrastructure/gunicorn.py validate-modules:E322
lib/ansible/modules/web_infrastructure/gunicorn.py validate-modules:E337
lib/ansible/modules/web_infrastructure/htpasswd.py validate-modules:E326
lib/ansible/modules/web_infrastructure/htpasswd.py validate-modules:E338
lib/ansible/modules/web_infrastructure/jenkins_job.py validate-modules:E338
lib/ansible/modules/web_infrastructure/jenkins_job_info.py validate-modules:E338
lib/ansible/modules/web_infrastructure/jenkins_plugin.py use-argspec-type-path
lib/ansible/modules/web_infrastructure/jenkins_plugin.py validate-modules:E322
lib/ansible/modules/web_infrastructure/jenkins_plugin.py validate-modules:E337
lib/ansible/modules/web_infrastructure/jenkins_plugin.py validate-modules:E338
lib/ansible/modules/web_infrastructure/jenkins_script.py validate-modules:E337
lib/ansible/modules/web_infrastructure/jira.py validate-modules:E322
lib/ansible/modules/web_infrastructure/jira.py validate-modules:E337
lib/ansible/modules/web_infrastructure/jira.py validate-modules:E338
lib/ansible/modules/web_infrastructure/nginx_status_facts.py validate-modules:E337
lib/ansible/modules/web_infrastructure/nginx_status_facts.py validate-modules:E338
lib/ansible/modules/web_infrastructure/rundeck_acl_policy.py pylint:blacklisted-name
lib/ansible/modules/web_infrastructure/rundeck_acl_policy.py validate-modules:E337
lib/ansible/modules/web_infrastructure/rundeck_project.py validate-modules:E337
lib/ansible/modules/web_infrastructure/sophos_utm/utm_aaa_group_info.py validate-modules:E337
lib/ansible/modules/web_infrastructure/sophos_utm/utm_ca_host_key_cert.py validate-modules:E337
lib/ansible/modules/web_infrastructure/sophos_utm/utm_ca_host_key_cert_info.py validate-modules:E337
lib/ansible/modules/web_infrastructure/sophos_utm/utm_dns_host.py validate-modules:E337
lib/ansible/modules/web_infrastructure/sophos_utm/utm_network_interface_address.py validate-modules:E337
lib/ansible/modules/web_infrastructure/sophos_utm/utm_network_interface_address_info.py validate-modules:E337
lib/ansible/modules/web_infrastructure/sophos_utm/utm_proxy_auth_profile.py validate-modules:E337
lib/ansible/modules/web_infrastructure/sophos_utm/utm_proxy_frontend.py validate-modules:E337
lib/ansible/modules/web_infrastructure/sophos_utm/utm_proxy_frontend_info.py validate-modules:E337
lib/ansible/modules/web_infrastructure/sophos_utm/utm_proxy_location.py validate-modules:E337
lib/ansible/modules/web_infrastructure/sophos_utm/utm_proxy_location_info.py validate-modules:E337
lib/ansible/modules/web_infrastructure/supervisorctl.py validate-modules:E337
lib/ansible/modules/web_infrastructure/supervisorctl.py validate-modules:E338
lib/ansible/modules/web_infrastructure/taiga_issue.py validate-modules:E337
lib/ansible/modules/web_infrastructure/taiga_issue.py validate-modules:E338
lib/ansible/modules/windows/_win_msi.py future-import-boilerplate
lib/ansible/modules/windows/_win_msi.py metaclass-boilerplate
lib/ansible/modules/windows/async_status.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/setup.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_acl.py future-import-boilerplate
lib/ansible/modules/windows/win_acl.py metaclass-boilerplate
lib/ansible/modules/windows/win_acl_inheritance.ps1 pslint:PSAvoidTrailingWhitespace
lib/ansible/modules/windows/win_acl_inheritance.py future-import-boilerplate
lib/ansible/modules/windows/win_acl_inheritance.py metaclass-boilerplate
lib/ansible/modules/windows/win_audit_policy_system.py future-import-boilerplate
lib/ansible/modules/windows/win_audit_policy_system.py metaclass-boilerplate
lib/ansible/modules/windows/win_audit_rule.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_audit_rule.py future-import-boilerplate
lib/ansible/modules/windows/win_audit_rule.py metaclass-boilerplate
lib/ansible/modules/windows/win_certificate_store.ps1 validate-modules:E337
lib/ansible/modules/windows/win_certificate_store.py future-import-boilerplate
lib/ansible/modules/windows/win_certificate_store.py metaclass-boilerplate
lib/ansible/modules/windows/win_chocolatey.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_chocolatey.py future-import-boilerplate
lib/ansible/modules/windows/win_chocolatey.py metaclass-boilerplate
lib/ansible/modules/windows/win_chocolatey_config.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_chocolatey_config.py future-import-boilerplate
lib/ansible/modules/windows/win_chocolatey_config.py metaclass-boilerplate
lib/ansible/modules/windows/win_chocolatey_facts.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_chocolatey_facts.py future-import-boilerplate
lib/ansible/modules/windows/win_chocolatey_facts.py metaclass-boilerplate
lib/ansible/modules/windows/win_chocolatey_feature.py future-import-boilerplate
lib/ansible/modules/windows/win_chocolatey_feature.py metaclass-boilerplate
lib/ansible/modules/windows/win_chocolatey_source.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_chocolatey_source.py future-import-boilerplate
lib/ansible/modules/windows/win_chocolatey_source.py metaclass-boilerplate
lib/ansible/modules/windows/win_command.py future-import-boilerplate
lib/ansible/modules/windows/win_command.py metaclass-boilerplate
lib/ansible/modules/windows/win_copy.ps1 pslint:PSUseApprovedVerbs
lib/ansible/modules/windows/win_copy.py future-import-boilerplate
lib/ansible/modules/windows/win_copy.py metaclass-boilerplate
lib/ansible/modules/windows/win_credential.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_credential.ps1 validate-modules:E337
lib/ansible/modules/windows/win_credential.py future-import-boilerplate
lib/ansible/modules/windows/win_credential.py metaclass-boilerplate
lib/ansible/modules/windows/win_defrag.py future-import-boilerplate
lib/ansible/modules/windows/win_defrag.py metaclass-boilerplate
lib/ansible/modules/windows/win_disk_facts.py future-import-boilerplate
lib/ansible/modules/windows/win_disk_facts.py metaclass-boilerplate
lib/ansible/modules/windows/win_disk_image.py future-import-boilerplate
lib/ansible/modules/windows/win_disk_image.py metaclass-boilerplate
lib/ansible/modules/windows/win_dns_client.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_dns_client.ps1 pslint:PSUseApprovedVerbs
lib/ansible/modules/windows/win_dns_client.py future-import-boilerplate
lib/ansible/modules/windows/win_dns_client.py metaclass-boilerplate
lib/ansible/modules/windows/win_dns_record.py future-import-boilerplate
lib/ansible/modules/windows/win_dns_record.py metaclass-boilerplate
lib/ansible/modules/windows/win_domain.ps1 pslint:PSAvoidUsingEmptyCatchBlock # Keep
lib/ansible/modules/windows/win_domain.ps1 pslint:PSUseApprovedVerbs
lib/ansible/modules/windows/win_domain.py future-import-boilerplate
lib/ansible/modules/windows/win_domain.py metaclass-boilerplate
lib/ansible/modules/windows/win_domain_computer.py future-import-boilerplate
lib/ansible/modules/windows/win_domain_computer.py metaclass-boilerplate
lib/ansible/modules/windows/win_domain_controller.ps1 pslint:PSAvoidGlobalVars # New PR
lib/ansible/modules/windows/win_domain_controller.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_domain_controller.ps1 pslint:PSUseApprovedVerbs
lib/ansible/modules/windows/win_domain_controller.py future-import-boilerplate
lib/ansible/modules/windows/win_domain_controller.py metaclass-boilerplate
lib/ansible/modules/windows/win_domain_group.py future-import-boilerplate
lib/ansible/modules/windows/win_domain_group.py metaclass-boilerplate
lib/ansible/modules/windows/win_domain_group_membership.py future-import-boilerplate
lib/ansible/modules/windows/win_domain_group_membership.py metaclass-boilerplate
lib/ansible/modules/windows/win_domain_membership.ps1 pslint:PSAvoidGlobalVars # New PR
lib/ansible/modules/windows/win_domain_membership.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_domain_membership.ps1 pslint:PSUseApprovedVerbs
lib/ansible/modules/windows/win_domain_membership.py future-import-boilerplate
lib/ansible/modules/windows/win_domain_membership.py metaclass-boilerplate
lib/ansible/modules/windows/win_domain_user.py future-import-boilerplate
lib/ansible/modules/windows/win_domain_user.py metaclass-boilerplate
lib/ansible/modules/windows/win_dotnet_ngen.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_dotnet_ngen.py future-import-boilerplate
lib/ansible/modules/windows/win_dotnet_ngen.py metaclass-boilerplate
lib/ansible/modules/windows/win_dsc.ps1 pslint:PSAvoidUsingEmptyCatchBlock # Keep
lib/ansible/modules/windows/win_dsc.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_dsc.py future-import-boilerplate
lib/ansible/modules/windows/win_dsc.py metaclass-boilerplate
lib/ansible/modules/windows/win_environment.py future-import-boilerplate
lib/ansible/modules/windows/win_environment.py metaclass-boilerplate
lib/ansible/modules/windows/win_eventlog.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_eventlog.py future-import-boilerplate
lib/ansible/modules/windows/win_eventlog.py metaclass-boilerplate
lib/ansible/modules/windows/win_eventlog_entry.py future-import-boilerplate
lib/ansible/modules/windows/win_eventlog_entry.py metaclass-boilerplate
lib/ansible/modules/windows/win_feature.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_feature.py future-import-boilerplate
lib/ansible/modules/windows/win_feature.py metaclass-boilerplate
lib/ansible/modules/windows/win_file.py future-import-boilerplate
lib/ansible/modules/windows/win_file.py metaclass-boilerplate
lib/ansible/modules/windows/win_file_version.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_file_version.py future-import-boilerplate
lib/ansible/modules/windows/win_file_version.py metaclass-boilerplate
lib/ansible/modules/windows/win_find.ps1 pslint:PSAvoidUsingEmptyCatchBlock # Keep
lib/ansible/modules/windows/win_find.py future-import-boilerplate
lib/ansible/modules/windows/win_find.py metaclass-boilerplate
lib/ansible/modules/windows/win_firewall.py future-import-boilerplate
lib/ansible/modules/windows/win_firewall.py metaclass-boilerplate
lib/ansible/modules/windows/win_firewall_rule.ps1 pslint:PSUseApprovedVerbs
lib/ansible/modules/windows/win_firewall_rule.py future-import-boilerplate
lib/ansible/modules/windows/win_firewall_rule.py metaclass-boilerplate
lib/ansible/modules/windows/win_format.py future-import-boilerplate
lib/ansible/modules/windows/win_format.py metaclass-boilerplate
lib/ansible/modules/windows/win_get_url.py future-import-boilerplate
lib/ansible/modules/windows/win_get_url.py metaclass-boilerplate
lib/ansible/modules/windows/win_group.py future-import-boilerplate
lib/ansible/modules/windows/win_group.py metaclass-boilerplate
lib/ansible/modules/windows/win_group_membership.py future-import-boilerplate
lib/ansible/modules/windows/win_group_membership.py metaclass-boilerplate
lib/ansible/modules/windows/win_hostname.py future-import-boilerplate
lib/ansible/modules/windows/win_hostname.py metaclass-boilerplate
lib/ansible/modules/windows/win_hosts.py future-import-boilerplate
lib/ansible/modules/windows/win_hosts.py metaclass-boilerplate
lib/ansible/modules/windows/win_hotfix.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_hotfix.ps1 pslint:PSUseApprovedVerbs
lib/ansible/modules/windows/win_hotfix.py future-import-boilerplate
lib/ansible/modules/windows/win_hotfix.py metaclass-boilerplate
lib/ansible/modules/windows/win_http_proxy.ps1 validate-modules:E337
lib/ansible/modules/windows/win_http_proxy.py future-import-boilerplate
lib/ansible/modules/windows/win_http_proxy.py metaclass-boilerplate
lib/ansible/modules/windows/win_iis_virtualdirectory.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_iis_virtualdirectory.py future-import-boilerplate
lib/ansible/modules/windows/win_iis_virtualdirectory.py metaclass-boilerplate
lib/ansible/modules/windows/win_iis_webapplication.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_iis_webapplication.py future-import-boilerplate
lib/ansible/modules/windows/win_iis_webapplication.py metaclass-boilerplate
lib/ansible/modules/windows/win_iis_webapppool.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_iis_webapppool.py future-import-boilerplate
lib/ansible/modules/windows/win_iis_webapppool.py metaclass-boilerplate
lib/ansible/modules/windows/win_iis_webbinding.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_iis_webbinding.ps1 pslint:PSUseApprovedVerbs
lib/ansible/modules/windows/win_iis_webbinding.py future-import-boilerplate
lib/ansible/modules/windows/win_iis_webbinding.py metaclass-boilerplate
lib/ansible/modules/windows/win_iis_website.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_iis_website.py future-import-boilerplate
lib/ansible/modules/windows/win_iis_website.py metaclass-boilerplate
lib/ansible/modules/windows/win_inet_proxy.ps1 validate-modules:E337
lib/ansible/modules/windows/win_inet_proxy.py future-import-boilerplate
lib/ansible/modules/windows/win_inet_proxy.py metaclass-boilerplate
lib/ansible/modules/windows/win_lineinfile.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_lineinfile.py future-import-boilerplate
lib/ansible/modules/windows/win_lineinfile.py metaclass-boilerplate
lib/ansible/modules/windows/win_mapped_drive.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_mapped_drive.py future-import-boilerplate
lib/ansible/modules/windows/win_mapped_drive.py metaclass-boilerplate
lib/ansible/modules/windows/win_msg.py future-import-boilerplate
lib/ansible/modules/windows/win_msg.py metaclass-boilerplate
lib/ansible/modules/windows/win_nssm.py future-import-boilerplate
lib/ansible/modules/windows/win_nssm.py metaclass-boilerplate
lib/ansible/modules/windows/win_optional_feature.py future-import-boilerplate
lib/ansible/modules/windows/win_optional_feature.py metaclass-boilerplate
lib/ansible/modules/windows/win_owner.py future-import-boilerplate
lib/ansible/modules/windows/win_owner.py metaclass-boilerplate
lib/ansible/modules/windows/win_package.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_package.ps1 pslint:PSUseApprovedVerbs
lib/ansible/modules/windows/win_package.py future-import-boilerplate
lib/ansible/modules/windows/win_package.py metaclass-boilerplate
lib/ansible/modules/windows/win_pagefile.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_pagefile.ps1 pslint:PSUseDeclaredVarsMoreThanAssignments # New PR - bug test_path should be testPath
lib/ansible/modules/windows/win_pagefile.ps1 pslint:PSUseSupportsShouldProcess
lib/ansible/modules/windows/win_pagefile.py future-import-boilerplate
lib/ansible/modules/windows/win_pagefile.py metaclass-boilerplate
lib/ansible/modules/windows/win_partition.py future-import-boilerplate
lib/ansible/modules/windows/win_partition.py metaclass-boilerplate
lib/ansible/modules/windows/win_path.py future-import-boilerplate
lib/ansible/modules/windows/win_path.py metaclass-boilerplate
lib/ansible/modules/windows/win_pester.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_pester.py future-import-boilerplate
lib/ansible/modules/windows/win_pester.py metaclass-boilerplate
lib/ansible/modules/windows/win_ping.py future-import-boilerplate
lib/ansible/modules/windows/win_ping.py metaclass-boilerplate
lib/ansible/modules/windows/win_power_plan.py future-import-boilerplate
lib/ansible/modules/windows/win_power_plan.py metaclass-boilerplate
lib/ansible/modules/windows/win_product_facts.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_product_facts.py future-import-boilerplate
lib/ansible/modules/windows/win_product_facts.py metaclass-boilerplate
lib/ansible/modules/windows/win_psexec.ps1 validate-modules:E337
lib/ansible/modules/windows/win_psexec.py future-import-boilerplate
lib/ansible/modules/windows/win_psexec.py metaclass-boilerplate
lib/ansible/modules/windows/win_psmodule.py future-import-boilerplate
lib/ansible/modules/windows/win_psmodule.py metaclass-boilerplate
lib/ansible/modules/windows/win_psrepository.py future-import-boilerplate
lib/ansible/modules/windows/win_psrepository.py metaclass-boilerplate
lib/ansible/modules/windows/win_rabbitmq_plugin.ps1 pslint:PSAvoidUsingInvokeExpression
lib/ansible/modules/windows/win_rabbitmq_plugin.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_rabbitmq_plugin.py future-import-boilerplate
lib/ansible/modules/windows/win_rabbitmq_plugin.py metaclass-boilerplate
lib/ansible/modules/windows/win_rds_cap.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_rds_cap.py future-import-boilerplate
lib/ansible/modules/windows/win_rds_cap.py metaclass-boilerplate
lib/ansible/modules/windows/win_rds_rap.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_rds_rap.py future-import-boilerplate
lib/ansible/modules/windows/win_rds_rap.py metaclass-boilerplate
lib/ansible/modules/windows/win_rds_settings.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_rds_settings.py future-import-boilerplate
lib/ansible/modules/windows/win_rds_settings.py metaclass-boilerplate
lib/ansible/modules/windows/win_reboot.py future-import-boilerplate
lib/ansible/modules/windows/win_reboot.py metaclass-boilerplate
lib/ansible/modules/windows/win_reg_stat.py future-import-boilerplate
lib/ansible/modules/windows/win_reg_stat.py metaclass-boilerplate
lib/ansible/modules/windows/win_regedit.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_regedit.py future-import-boilerplate
lib/ansible/modules/windows/win_regedit.py metaclass-boilerplate
lib/ansible/modules/windows/win_region.ps1 pslint:PSAvoidUsingEmptyCatchBlock # Keep
lib/ansible/modules/windows/win_region.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_region.py future-import-boilerplate
lib/ansible/modules/windows/win_region.py metaclass-boilerplate
lib/ansible/modules/windows/win_regmerge.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_regmerge.py future-import-boilerplate
lib/ansible/modules/windows/win_regmerge.py metaclass-boilerplate
lib/ansible/modules/windows/win_robocopy.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_robocopy.py future-import-boilerplate
lib/ansible/modules/windows/win_robocopy.py metaclass-boilerplate
lib/ansible/modules/windows/win_route.py future-import-boilerplate
lib/ansible/modules/windows/win_route.py metaclass-boilerplate
lib/ansible/modules/windows/win_say.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_say.py future-import-boilerplate
lib/ansible/modules/windows/win_say.py metaclass-boilerplate
lib/ansible/modules/windows/win_scheduled_task.py future-import-boilerplate
lib/ansible/modules/windows/win_scheduled_task.py metaclass-boilerplate
lib/ansible/modules/windows/win_scheduled_task_stat.py future-import-boilerplate
lib/ansible/modules/windows/win_scheduled_task_stat.py metaclass-boilerplate
lib/ansible/modules/windows/win_security_policy.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_security_policy.ps1 pslint:PSUseApprovedVerbs
lib/ansible/modules/windows/win_security_policy.py future-import-boilerplate
lib/ansible/modules/windows/win_security_policy.py metaclass-boilerplate
lib/ansible/modules/windows/win_service.py future-import-boilerplate
lib/ansible/modules/windows/win_service.py metaclass-boilerplate
lib/ansible/modules/windows/win_share.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_share.py future-import-boilerplate
lib/ansible/modules/windows/win_share.py metaclass-boilerplate
lib/ansible/modules/windows/win_shell.ps1 pslint:PSUseApprovedVerbs
lib/ansible/modules/windows/win_shell.py future-import-boilerplate
lib/ansible/modules/windows/win_shell.py metaclass-boilerplate
lib/ansible/modules/windows/win_shortcut.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_shortcut.py future-import-boilerplate
lib/ansible/modules/windows/win_shortcut.py metaclass-boilerplate
lib/ansible/modules/windows/win_snmp.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_snmp.py future-import-boilerplate
lib/ansible/modules/windows/win_snmp.py metaclass-boilerplate
lib/ansible/modules/windows/win_stat.py future-import-boilerplate
lib/ansible/modules/windows/win_stat.py metaclass-boilerplate
lib/ansible/modules/windows/win_tempfile.py future-import-boilerplate
lib/ansible/modules/windows/win_tempfile.py metaclass-boilerplate
lib/ansible/modules/windows/win_template.py future-import-boilerplate
lib/ansible/modules/windows/win_template.py metaclass-boilerplate
lib/ansible/modules/windows/win_timezone.py future-import-boilerplate
lib/ansible/modules/windows/win_timezone.py metaclass-boilerplate
lib/ansible/modules/windows/win_toast.py future-import-boilerplate
lib/ansible/modules/windows/win_toast.py metaclass-boilerplate
lib/ansible/modules/windows/win_unzip.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_unzip.ps1 pslint:PSUseApprovedVerbs
lib/ansible/modules/windows/win_unzip.py future-import-boilerplate
lib/ansible/modules/windows/win_unzip.py metaclass-boilerplate
lib/ansible/modules/windows/win_updates.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_updates.py future-import-boilerplate
lib/ansible/modules/windows/win_updates.py metaclass-boilerplate
lib/ansible/modules/windows/win_uri.ps1 pslint:PSAvoidUsingEmptyCatchBlock # Keep
lib/ansible/modules/windows/win_uri.py future-import-boilerplate
lib/ansible/modules/windows/win_uri.py metaclass-boilerplate
lib/ansible/modules/windows/win_user.py future-import-boilerplate
lib/ansible/modules/windows/win_user.py metaclass-boilerplate
lib/ansible/modules/windows/win_user_profile.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_user_profile.ps1 validate-modules:E337
lib/ansible/modules/windows/win_user_profile.py future-import-boilerplate
lib/ansible/modules/windows/win_user_profile.py metaclass-boilerplate
lib/ansible/modules/windows/win_user_right.py future-import-boilerplate
lib/ansible/modules/windows/win_user_right.py metaclass-boilerplate
lib/ansible/modules/windows/win_wait_for.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_wait_for.py future-import-boilerplate
lib/ansible/modules/windows/win_wait_for.py metaclass-boilerplate
lib/ansible/modules/windows/win_wait_for_process.py future-import-boilerplate
lib/ansible/modules/windows/win_wait_for_process.py metaclass-boilerplate
lib/ansible/modules/windows/win_wakeonlan.py future-import-boilerplate
lib/ansible/modules/windows/win_wakeonlan.py metaclass-boilerplate
lib/ansible/modules/windows/win_webpicmd.ps1 pslint:PSAvoidUsingInvokeExpression
lib/ansible/modules/windows/win_webpicmd.py future-import-boilerplate
lib/ansible/modules/windows/win_webpicmd.py metaclass-boilerplate
lib/ansible/modules/windows/win_whoami.py future-import-boilerplate
lib/ansible/modules/windows/win_whoami.py metaclass-boilerplate
lib/ansible/modules/windows/win_xml.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_xml.py future-import-boilerplate
lib/ansible/modules/windows/win_xml.py metaclass-boilerplate
lib/ansible/parsing/vault/__init__.py pylint:blacklisted-name
lib/ansible/playbook/base.py pylint:blacklisted-name
lib/ansible/playbook/helpers.py pylint:blacklisted-name
lib/ansible/playbook/role/__init__.py pylint:blacklisted-name
lib/ansible/plugins/action/__init__.py action-plugin-docs # action plugin base class, not an actual action plugin
lib/ansible/plugins/action/aireos.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/aruba.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/asa.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/bigip.py action-plugin-docs # undocumented action plugin to fix, existed before sanity test was added
lib/ansible/plugins/action/bigiq.py action-plugin-docs # undocumented action plugin to fix, existed before sanity test was added
lib/ansible/plugins/action/ce.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/ce_template.py action-plugin-docs # undocumented action plugin to fix, existed before sanity test was added
lib/ansible/plugins/action/cnos.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/dellos10.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/dellos6.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/dellos9.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/enos.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/eos.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/ios.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/iosxr.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/ironware.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/junos.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/net_base.py action-plugin-docs # base class for other net_* action plugins which have a matching module
lib/ansible/plugins/action/netconf.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/network.py action-plugin-docs # base class for network action plugins
lib/ansible/plugins/action/normal.py action-plugin-docs # default action plugin for modules without a dedicated action plugin
lib/ansible/plugins/action/nxos.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/sros.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/vyos.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/cache/base.py ansible-doc # not a plugin, but a stub for backwards compatibility
lib/ansible/plugins/callback/hipchat.py pylint:blacklisted-name
lib/ansible/plugins/connection/lxc.py pylint:blacklisted-name
lib/ansible/plugins/doc_fragments/a10.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/a10.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/aci.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/aci.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/acme.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/acme.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/aireos.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/aireos.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/alicloud.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/alicloud.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/aruba.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/aruba.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/asa.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/asa.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/auth_basic.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/auth_basic.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/avi.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/avi.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/aws.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/aws.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/aws_credentials.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/aws_credentials.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/aws_region.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/aws_region.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/azure.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/azure.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/azure_tags.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/azure_tags.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/backup.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/backup.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/ce.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/ce.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/checkpoint_commands.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/checkpoint_commands.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/checkpoint_objects.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/checkpoint_objects.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/cloudscale.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/cloudscale.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/cloudstack.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/cloudstack.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/cnos.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/cnos.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/constructed.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/constructed.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/decrypt.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/decrypt.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/default_callback.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/default_callback.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/dellos10.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/dellos10.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/dellos6.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/dellos6.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/dellos9.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/dellos9.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/digital_ocean.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/digital_ocean.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/dimensiondata.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/dimensiondata.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/dimensiondata_wait.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/dimensiondata_wait.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/docker.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/docker.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/ec2.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/ec2.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/emc.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/emc.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/enos.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/enos.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/eos.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/eos.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/exoscale.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/exoscale.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/f5.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/f5.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/files.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/files.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/fortios.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/fortios.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/gcp.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/gcp.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/hcloud.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/hcloud.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/hetzner.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/hetzner.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/hpe3par.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/hpe3par.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/hwc.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/hwc.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/infinibox.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/infinibox.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/influxdb.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/influxdb.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/ingate.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/ingate.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/intersight.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/intersight.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/inventory_cache.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/inventory_cache.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/ios.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/ios.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/iosxr.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/iosxr.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/ipa.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/ipa.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/ironware.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/ironware.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/junos.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/junos.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/k8s_auth_options.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/k8s_auth_options.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/k8s_name_options.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/k8s_name_options.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/k8s_resource_options.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/k8s_resource_options.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/k8s_scale_options.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/k8s_scale_options.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/k8s_state_options.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/k8s_state_options.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/keycloak.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/keycloak.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/kubevirt_common_options.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/kubevirt_common_options.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/kubevirt_vm_options.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/kubevirt_vm_options.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/ldap.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/ldap.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/lxca_common.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/lxca_common.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/manageiq.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/manageiq.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/meraki.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/meraki.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/mso.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/mso.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/mysql.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/mysql.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/netapp.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/netapp.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/netconf.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/netconf.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/netscaler.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/netscaler.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/network_agnostic.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/network_agnostic.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/nios.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/nios.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/nso.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/nso.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/nxos.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/nxos.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/oneview.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/oneview.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/online.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/online.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/onyx.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/onyx.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/opennebula.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/opennebula.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/openstack.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/openstack.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/openswitch.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/openswitch.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/oracle.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/oracle.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/oracle_creatable_resource.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/oracle_creatable_resource.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/oracle_display_name_option.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/oracle_display_name_option.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/oracle_name_option.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/oracle_name_option.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/oracle_tags.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/oracle_tags.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/oracle_wait_options.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/oracle_wait_options.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/ovirt.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/ovirt.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/ovirt_facts.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/ovirt_facts.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/panos.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/panos.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/postgres.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/postgres.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/proxysql.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/proxysql.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/purestorage.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/purestorage.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/rabbitmq.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/rabbitmq.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/rackspace.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/rackspace.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/return_common.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/return_common.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/scaleway.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/scaleway.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/shell_common.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/shell_common.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/shell_windows.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/shell_windows.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/skydive.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/skydive.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/sros.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/sros.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/tower.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/tower.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/ucs.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/ucs.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/url.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/url.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/utm.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/utm.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/validate.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/validate.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/vca.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/vca.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/vexata.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/vexata.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/vmware.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/vmware.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/vmware_rest_client.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/vmware_rest_client.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/vultr.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/vultr.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/vyos.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/vyos.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/xenserver.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/xenserver.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/zabbix.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/zabbix.py metaclass-boilerplate
lib/ansible/plugins/lookup/sequence.py pylint:blacklisted-name
lib/ansible/plugins/strategy/__init__.py pylint:blacklisted-name
lib/ansible/plugins/strategy/linear.py pylint:blacklisted-name
lib/ansible/vars/hostvars.py pylint:blacklisted-name
setup.py future-import-boilerplate
setup.py metaclass-boilerplate
test/integration/targets/ansible-runner/files/adhoc_example1.py future-import-boilerplate
test/integration/targets/ansible-runner/files/adhoc_example1.py metaclass-boilerplate
test/integration/targets/ansible-runner/files/playbook_example1.py future-import-boilerplate
test/integration/targets/ansible-runner/files/playbook_example1.py metaclass-boilerplate
test/integration/targets/async/library/async_test.py future-import-boilerplate
test/integration/targets/async/library/async_test.py metaclass-boilerplate
test/integration/targets/async_fail/library/async_test.py future-import-boilerplate
test/integration/targets/async_fail/library/async_test.py metaclass-boilerplate
test/integration/targets/aws_lambda/files/mini_lambda.py future-import-boilerplate
test/integration/targets/aws_lambda/files/mini_lambda.py metaclass-boilerplate
test/integration/targets/collections/collection_root_sys/ansible_collections/testns/coll_in_sys/plugins/modules/systestmodule.py future-import-boilerplate
test/integration/targets/collections/collection_root_sys/ansible_collections/testns/coll_in_sys/plugins/modules/systestmodule.py metaclass-boilerplate
test/integration/targets/collections/collection_root_sys/ansible_collections/testns/testcoll/plugins/modules/maskedmodule.py future-import-boilerplate
test/integration/targets/collections/collection_root_sys/ansible_collections/testns/testcoll/plugins/modules/maskedmodule.py metaclass-boilerplate
test/integration/targets/collections/collection_root_sys/ansible_collections/testns/testcoll/plugins/modules/testmodule.py future-import-boilerplate
test/integration/targets/collections/collection_root_sys/ansible_collections/testns/testcoll/plugins/modules/testmodule.py metaclass-boilerplate
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/plugins/action/plugin_lookup.py future-import-boilerplate
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/plugins/action/plugin_lookup.py metaclass-boilerplate
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/plugins/callback/usercallback.py future-import-boilerplate
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/plugins/callback/usercallback.py metaclass-boilerplate
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/plugins/connection/localconn.py future-import-boilerplate
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/plugins/connection/localconn.py metaclass-boilerplate
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/plugins/doc_fragments/frag.py future-import-boilerplate
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/plugins/doc_fragments/frag.py metaclass-boilerplate
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/plugins/filter/myfilters.py future-import-boilerplate
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/plugins/filter/myfilters.py metaclass-boilerplate
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/plugins/lookup/mylookup.py future-import-boilerplate
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/plugins/lookup/mylookup.py metaclass-boilerplate
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/plugins/module_utils/MyPSMU.psm1 pslint:PSUseApprovedVerbs
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/plugins/module_utils/base.py future-import-boilerplate
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/plugins/module_utils/base.py metaclass-boilerplate
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/plugins/module_utils/leaf.py future-import-boilerplate
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/plugins/module_utils/leaf.py metaclass-boilerplate
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/plugins/module_utils/secondary.py future-import-boilerplate
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/plugins/module_utils/secondary.py metaclass-boilerplate
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/plugins/module_utils/subpkg/submod.py future-import-boilerplate
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/plugins/module_utils/subpkg/submod.py metaclass-boilerplate
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/plugins/module_utils/subpkg_with_init/__init__.py future-import-boilerplate
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/plugins/module_utils/subpkg_with_init/__init__.py metaclass-boilerplate
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/plugins/modules/ping.py future-import-boilerplate
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/plugins/modules/ping.py metaclass-boilerplate
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/plugins/modules/testmodule.py future-import-boilerplate
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/plugins/modules/testmodule.py metaclass-boilerplate
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/plugins/modules/uses_base_mu_granular_nested_import.py future-import-boilerplate
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/plugins/modules/uses_base_mu_granular_nested_import.py metaclass-boilerplate
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/plugins/modules/uses_leaf_mu_flat_import.py future-import-boilerplate
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/plugins/modules/uses_leaf_mu_flat_import.py metaclass-boilerplate
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/plugins/modules/uses_leaf_mu_granular_import.py future-import-boilerplate
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/plugins/modules/uses_leaf_mu_granular_import.py metaclass-boilerplate
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/plugins/modules/uses_leaf_mu_module_import_from.py future-import-boilerplate
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/plugins/modules/uses_leaf_mu_module_import_from.py metaclass-boilerplate
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/plugins/modules/win_selfcontained.py future-import-boilerplate
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/plugins/modules/win_selfcontained.py metaclass-boilerplate
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/plugins/test/mytests.py future-import-boilerplate
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/plugins/test/mytests.py metaclass-boilerplate
test/integration/targets/collections/collections/ansible_collections/testns/content_adj/plugins/modules/contentadjmodule.py future-import-boilerplate
test/integration/targets/collections/collections/ansible_collections/testns/content_adj/plugins/modules/contentadjmodule.py metaclass-boilerplate
test/integration/targets/collections/library/ping.py future-import-boilerplate
test/integration/targets/collections/library/ping.py metaclass-boilerplate
test/integration/targets/expect/files/test_command.py future-import-boilerplate
test/integration/targets/expect/files/test_command.py metaclass-boilerplate
test/integration/targets/get_certificate/files/process_certs.py future-import-boilerplate
test/integration/targets/get_certificate/files/process_certs.py metaclass-boilerplate
test/integration/targets/get_url/files/testserver.py future-import-boilerplate
test/integration/targets/get_url/files/testserver.py metaclass-boilerplate
test/integration/targets/group/files/gidget.py future-import-boilerplate
test/integration/targets/group/files/gidget.py metaclass-boilerplate
test/integration/targets/ignore_unreachable/fake_connectors/bad_exec.py future-import-boilerplate
test/integration/targets/ignore_unreachable/fake_connectors/bad_exec.py metaclass-boilerplate
test/integration/targets/ignore_unreachable/fake_connectors/bad_put_file.py future-import-boilerplate
test/integration/targets/ignore_unreachable/fake_connectors/bad_put_file.py metaclass-boilerplate
test/integration/targets/inventory_aws_conformance/inventory_diff.py future-import-boilerplate
test/integration/targets/inventory_aws_conformance/inventory_diff.py metaclass-boilerplate
test/integration/targets/inventory_aws_conformance/lib/boto/__init__.py future-import-boilerplate
test/integration/targets/inventory_aws_conformance/lib/boto/__init__.py metaclass-boilerplate
test/integration/targets/inventory_aws_conformance/lib/boto/ec2/__init__.py future-import-boilerplate
test/integration/targets/inventory_aws_conformance/lib/boto/ec2/__init__.py metaclass-boilerplate
test/integration/targets/inventory_aws_conformance/lib/boto/elasticache/__init__.py future-import-boilerplate
test/integration/targets/inventory_aws_conformance/lib/boto/elasticache/__init__.py metaclass-boilerplate
test/integration/targets/inventory_aws_conformance/lib/boto/exception.py future-import-boilerplate
test/integration/targets/inventory_aws_conformance/lib/boto/exception.py metaclass-boilerplate
test/integration/targets/inventory_aws_conformance/lib/boto/exceptions.py future-import-boilerplate
test/integration/targets/inventory_aws_conformance/lib/boto/exceptions.py metaclass-boilerplate
test/integration/targets/inventory_aws_conformance/lib/boto/mocks/instances.py future-import-boilerplate
test/integration/targets/inventory_aws_conformance/lib/boto/mocks/instances.py metaclass-boilerplate
test/integration/targets/inventory_aws_conformance/lib/boto/session.py future-import-boilerplate
test/integration/targets/inventory_aws_conformance/lib/boto/session.py metaclass-boilerplate
test/integration/targets/inventory_cloudscale/filter_plugins/group_name.py future-import-boilerplate
test/integration/targets/inventory_cloudscale/filter_plugins/group_name.py metaclass-boilerplate
test/integration/targets/inventory_kubevirt_conformance/inventory_diff.py future-import-boilerplate
test/integration/targets/inventory_kubevirt_conformance/inventory_diff.py metaclass-boilerplate
test/integration/targets/inventory_kubevirt_conformance/server.py future-import-boilerplate
test/integration/targets/inventory_kubevirt_conformance/server.py metaclass-boilerplate
test/integration/targets/jinja2_native_types/filter_plugins/native_plugins.py future-import-boilerplate
test/integration/targets/jinja2_native_types/filter_plugins/native_plugins.py metaclass-boilerplate
test/integration/targets/lambda_policy/files/mini_http_lambda.py future-import-boilerplate
test/integration/targets/lambda_policy/files/mini_http_lambda.py metaclass-boilerplate
test/integration/targets/lookup_properties/lookup-8859-15.ini no-smart-quotes
test/integration/targets/module_precedence/lib_with_extension/ping.py future-import-boilerplate
test/integration/targets/module_precedence/lib_with_extension/ping.py metaclass-boilerplate
test/integration/targets/module_precedence/multiple_roles/bar/library/ping.py future-import-boilerplate
test/integration/targets/module_precedence/multiple_roles/bar/library/ping.py metaclass-boilerplate
test/integration/targets/module_precedence/multiple_roles/foo/library/ping.py future-import-boilerplate
test/integration/targets/module_precedence/multiple_roles/foo/library/ping.py metaclass-boilerplate
test/integration/targets/module_precedence/roles_with_extension/foo/library/ping.py future-import-boilerplate
test/integration/targets/module_precedence/roles_with_extension/foo/library/ping.py metaclass-boilerplate
test/integration/targets/module_utils/library/test.py future-import-boilerplate
test/integration/targets/module_utils/library/test.py metaclass-boilerplate
test/integration/targets/module_utils/library/test_env_override.py future-import-boilerplate
test/integration/targets/module_utils/library/test_env_override.py metaclass-boilerplate
test/integration/targets/module_utils/library/test_failure.py future-import-boilerplate
test/integration/targets/module_utils/library/test_failure.py metaclass-boilerplate
test/integration/targets/module_utils/library/test_override.py future-import-boilerplate
test/integration/targets/module_utils/library/test_override.py metaclass-boilerplate
test/integration/targets/module_utils/module_utils/a/b/c/d/e/f/g/h/__init__.py future-import-boilerplate
test/integration/targets/module_utils/module_utils/a/b/c/d/e/f/g/h/__init__.py metaclass-boilerplate
test/integration/targets/module_utils/module_utils/bar0/foo.py future-import-boilerplate
test/integration/targets/module_utils/module_utils/bar0/foo.py metaclass-boilerplate
test/integration/targets/module_utils/module_utils/bar0/foo.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/bar1/__init__.py future-import-boilerplate
test/integration/targets/module_utils/module_utils/bar1/__init__.py metaclass-boilerplate
test/integration/targets/module_utils/module_utils/bar2/__init__.py future-import-boilerplate
test/integration/targets/module_utils/module_utils/bar2/__init__.py metaclass-boilerplate
test/integration/targets/module_utils/module_utils/baz1/one.py future-import-boilerplate
test/integration/targets/module_utils/module_utils/baz1/one.py metaclass-boilerplate
test/integration/targets/module_utils/module_utils/baz2/one.py future-import-boilerplate
test/integration/targets/module_utils/module_utils/baz2/one.py metaclass-boilerplate
test/integration/targets/module_utils/module_utils/facts.py future-import-boilerplate
test/integration/targets/module_utils/module_utils/facts.py metaclass-boilerplate
test/integration/targets/module_utils/module_utils/foo.py future-import-boilerplate
test/integration/targets/module_utils/module_utils/foo.py metaclass-boilerplate
test/integration/targets/module_utils/module_utils/foo.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/foo0.py future-import-boilerplate
test/integration/targets/module_utils/module_utils/foo0.py metaclass-boilerplate
test/integration/targets/module_utils/module_utils/foo1.py future-import-boilerplate
test/integration/targets/module_utils/module_utils/foo1.py metaclass-boilerplate
test/integration/targets/module_utils/module_utils/foo2.py future-import-boilerplate
test/integration/targets/module_utils/module_utils/foo2.py metaclass-boilerplate
test/integration/targets/module_utils/module_utils/qux1/quux.py future-import-boilerplate
test/integration/targets/module_utils/module_utils/qux1/quux.py metaclass-boilerplate
test/integration/targets/module_utils/module_utils/qux2/quux.py future-import-boilerplate
test/integration/targets/module_utils/module_utils/qux2/quux.py metaclass-boilerplate
test/integration/targets/module_utils/module_utils/qux2/quuz.py future-import-boilerplate
test/integration/targets/module_utils/module_utils/qux2/quuz.py metaclass-boilerplate
test/integration/targets/module_utils/module_utils/service.py future-import-boilerplate
test/integration/targets/module_utils/module_utils/service.py metaclass-boilerplate
test/integration/targets/module_utils/module_utils/spam1/ham/eggs/__init__.py future-import-boilerplate
test/integration/targets/module_utils/module_utils/spam1/ham/eggs/__init__.py metaclass-boilerplate
test/integration/targets/module_utils/module_utils/spam2/ham/eggs/__init__.py future-import-boilerplate
test/integration/targets/module_utils/module_utils/spam2/ham/eggs/__init__.py metaclass-boilerplate
test/integration/targets/module_utils/module_utils/spam3/ham/bacon.py future-import-boilerplate
test/integration/targets/module_utils/module_utils/spam3/ham/bacon.py metaclass-boilerplate
test/integration/targets/module_utils/module_utils/spam4/ham/bacon.py future-import-boilerplate
test/integration/targets/module_utils/module_utils/spam4/ham/bacon.py metaclass-boilerplate
test/integration/targets/module_utils/module_utils/spam5/ham/bacon.py future-import-boilerplate
test/integration/targets/module_utils/module_utils/spam5/ham/bacon.py metaclass-boilerplate
test/integration/targets/module_utils/module_utils/spam5/ham/eggs.py future-import-boilerplate
test/integration/targets/module_utils/module_utils/spam5/ham/eggs.py metaclass-boilerplate
test/integration/targets/module_utils/module_utils/spam6/ham/__init__.py future-import-boilerplate
test/integration/targets/module_utils/module_utils/spam6/ham/__init__.py metaclass-boilerplate
test/integration/targets/module_utils/module_utils/spam7/ham/__init__.py future-import-boilerplate
test/integration/targets/module_utils/module_utils/spam7/ham/__init__.py metaclass-boilerplate
test/integration/targets/module_utils/module_utils/spam7/ham/bacon.py future-import-boilerplate
test/integration/targets/module_utils/module_utils/spam7/ham/bacon.py metaclass-boilerplate
test/integration/targets/module_utils/module_utils/spam8/ham/__init__.py future-import-boilerplate
test/integration/targets/module_utils/module_utils/spam8/ham/__init__.py metaclass-boilerplate
test/integration/targets/module_utils/module_utils/spam8/ham/bacon.py future-import-boilerplate
test/integration/targets/module_utils/module_utils/spam8/ham/bacon.py metaclass-boilerplate
test/integration/targets/module_utils/module_utils/sub/bam.py future-import-boilerplate
test/integration/targets/module_utils/module_utils/sub/bam.py metaclass-boilerplate
test/integration/targets/module_utils/module_utils/sub/bam/bam.py future-import-boilerplate
test/integration/targets/module_utils/module_utils/sub/bam/bam.py metaclass-boilerplate
test/integration/targets/module_utils/module_utils/sub/bar/__init__.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/sub/bar/bam.py future-import-boilerplate
test/integration/targets/module_utils/module_utils/sub/bar/bam.py metaclass-boilerplate
test/integration/targets/module_utils/module_utils/sub/bar/bar.py future-import-boilerplate
test/integration/targets/module_utils/module_utils/sub/bar/bar.py metaclass-boilerplate
test/integration/targets/module_utils/module_utils/sub/bar/bar.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/yak/zebra/foo.py future-import-boilerplate
test/integration/targets/module_utils/module_utils/yak/zebra/foo.py metaclass-boilerplate
test/integration/targets/module_utils/module_utils/yak/zebra/foo.py pylint:blacklisted-name
test/integration/targets/module_utils/other_mu_dir/a/b/c/d/e/f/g/h/__init__.py future-import-boilerplate
test/integration/targets/module_utils/other_mu_dir/a/b/c/d/e/f/g/h/__init__.py metaclass-boilerplate
test/integration/targets/module_utils/other_mu_dir/facts.py future-import-boilerplate
test/integration/targets/module_utils/other_mu_dir/facts.py metaclass-boilerplate
test/integration/targets/module_utils/other_mu_dir/json_utils.py future-import-boilerplate
test/integration/targets/module_utils/other_mu_dir/json_utils.py metaclass-boilerplate
test/integration/targets/module_utils/other_mu_dir/mork.py future-import-boilerplate
test/integration/targets/module_utils/other_mu_dir/mork.py metaclass-boilerplate
test/integration/targets/old_style_modules_posix/library/helloworld.sh shebang
test/integration/targets/pause/test-pause.py future-import-boilerplate
test/integration/targets/pause/test-pause.py metaclass-boilerplate
test/integration/targets/pip/files/ansible_test_pip_chdir/__init__.py future-import-boilerplate
test/integration/targets/pip/files/ansible_test_pip_chdir/__init__.py metaclass-boilerplate
test/integration/targets/pip/files/setup.py future-import-boilerplate
test/integration/targets/pip/files/setup.py metaclass-boilerplate
test/integration/targets/run_modules/library/test.py future-import-boilerplate
test/integration/targets/run_modules/library/test.py metaclass-boilerplate
test/integration/targets/s3_bucket_notification/files/mini_lambda.py future-import-boilerplate
test/integration/targets/s3_bucket_notification/files/mini_lambda.py metaclass-boilerplate
test/integration/targets/script/files/no_shebang.py future-import-boilerplate
test/integration/targets/script/files/no_shebang.py metaclass-boilerplate
test/integration/targets/service/files/ansible_test_service.py future-import-boilerplate
test/integration/targets/service/files/ansible_test_service.py metaclass-boilerplate
test/integration/targets/setup_rpm_repo/files/create-repo.py future-import-boilerplate
test/integration/targets/setup_rpm_repo/files/create-repo.py metaclass-boilerplate
test/integration/targets/sns_topic/files/sns_topic_lambda/sns_topic_lambda.py future-import-boilerplate
test/integration/targets/sns_topic/files/sns_topic_lambda/sns_topic_lambda.py metaclass-boilerplate
test/integration/targets/supervisorctl/files/sendProcessStdin.py future-import-boilerplate
test/integration/targets/supervisorctl/files/sendProcessStdin.py metaclass-boilerplate
test/integration/targets/template/files/encoding_1252_utf-8.expected no-smart-quotes
test/integration/targets/template/files/encoding_1252_windows-1252.expected no-smart-quotes
test/integration/targets/template/files/foo.dos.txt line-endings
test/integration/targets/template/role_filter/filter_plugins/myplugin.py future-import-boilerplate
test/integration/targets/template/role_filter/filter_plugins/myplugin.py metaclass-boilerplate
test/integration/targets/template/templates/encoding_1252.j2 no-smart-quotes
test/integration/targets/test_infra/library/test.py future-import-boilerplate
test/integration/targets/test_infra/library/test.py metaclass-boilerplate
test/integration/targets/unicode/unicode.yml no-smart-quotes
test/integration/targets/uri/files/testserver.py future-import-boilerplate
test/integration/targets/uri/files/testserver.py metaclass-boilerplate
test/integration/targets/var_precedence/ansible-var-precedence-check.py future-import-boilerplate
test/integration/targets/var_precedence/ansible-var-precedence-check.py metaclass-boilerplate
test/integration/targets/vars_prompt/test-vars_prompt.py future-import-boilerplate
test/integration/targets/vars_prompt/test-vars_prompt.py metaclass-boilerplate
test/integration/targets/vault/test-vault-client.py future-import-boilerplate
test/integration/targets/vault/test-vault-client.py metaclass-boilerplate
test/integration/targets/wait_for/files/testserver.py future-import-boilerplate
test/integration/targets/wait_for/files/testserver.py metaclass-boilerplate
test/integration/targets/want_json_modules_posix/library/helloworld.py future-import-boilerplate
test/integration/targets/want_json_modules_posix/library/helloworld.py metaclass-boilerplate
test/integration/targets/win_audit_rule/library/test_get_audit_rule.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_chocolatey/files/tools/chocolateyUninstall.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_chocolatey_source/library/choco_source.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_csharp_utils/library/ansible_basic_tests.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_csharp_utils/library/ansible_basic_tests.ps1 pslint:PSUseDeclaredVarsMoreThanAssignments # test setup requires vars to be set globally and not referenced in the same scope
test/integration/targets/win_csharp_utils/library/ansible_become_tests.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_dsc/files/xTestDsc/1.0.0/DSCResources/ANSIBLE_xSetReboot/ANSIBLE_xSetReboot.psm1 pslint!skip
test/integration/targets/win_dsc/files/xTestDsc/1.0.0/DSCResources/ANSIBLE_xTestResource/ANSIBLE_xTestResource.psm1 pslint!skip
test/integration/targets/win_dsc/files/xTestDsc/1.0.0/xTestDsc.psd1 pslint!skip
test/integration/targets/win_dsc/files/xTestDsc/1.0.1/DSCResources/ANSIBLE_xTestResource/ANSIBLE_xTestResource.psm1 pslint!skip
test/integration/targets/win_dsc/files/xTestDsc/1.0.1/xTestDsc.psd1 pslint!skip
test/integration/targets/win_exec_wrapper/library/test_fail.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_iis_webbinding/library/test_get_webbindings.ps1 pslint:PSUseApprovedVerbs
test/integration/targets/win_module_utils/library/argv_parser_test.ps1 pslint:PSUseApprovedVerbs
test/integration/targets/win_module_utils/library/backup_file_test.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_module_utils/library/command_util_test.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_module_utils/library/legacy_only_new_way_win_line_ending.ps1 line-endings
test/integration/targets/win_module_utils/library/legacy_only_old_way_win_line_ending.ps1 line-endings
test/integration/targets/win_ping/library/win_ping_syntax_error.ps1 pslint!skip
test/integration/targets/win_psmodule/files/module/template.psd1 pslint!skip
test/integration/targets/win_psmodule/files/module/template.psm1 pslint!skip
test/integration/targets/win_psmodule/files/setup_modules.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_reboot/templates/post_reboot.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_regmerge/templates/win_line_ending.j2 line-endings
test/integration/targets/win_script/files/test_script.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/win_script/files/test_script_creates_file.ps1 pslint:PSAvoidUsingCmdletAliases
test/integration/targets/win_script/files/test_script_removes_file.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_script/files/test_script_with_args.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/win_script/files/test_script_with_splatting.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/win_stat/library/test_symlink_file.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_template/files/foo.dos.txt line-endings
test/integration/targets/win_user_right/library/test_get_right.ps1 pslint:PSCustomUseLiteralPath
test/legacy/cleanup_ec2.py future-import-boilerplate
test/legacy/cleanup_ec2.py metaclass-boilerplate
test/legacy/cleanup_gce.py future-import-boilerplate
test/legacy/cleanup_gce.py metaclass-boilerplate
test/legacy/cleanup_gce.py pylint:blacklisted-name
test/legacy/cleanup_rax.py future-import-boilerplate
test/legacy/cleanup_rax.py metaclass-boilerplate
test/legacy/consul_running.py future-import-boilerplate
test/legacy/consul_running.py metaclass-boilerplate
test/legacy/gce_credentials.py future-import-boilerplate
test/legacy/gce_credentials.py metaclass-boilerplate
test/legacy/gce_credentials.py pylint:blacklisted-name
test/legacy/setup_gce.py future-import-boilerplate
test/legacy/setup_gce.py metaclass-boilerplate
test/runner/requirements/constraints.txt test-constraints
test/runner/requirements/integration.cloud.azure.txt test-constraints
test/runner/setup/windows-httptester.ps1 pslint:PSCustomUseLiteralPath
test/sanity/pylint/plugins/string_format.py use-compat-six
test/units/cli/arguments/test_optparse_helpers.py future-import-boilerplate
test/units/config/manager/test_find_ini_config_file.py future-import-boilerplate
test/units/contrib/inventory/test_vmware_inventory.py future-import-boilerplate
test/units/contrib/inventory/test_vmware_inventory.py metaclass-boilerplate
test/units/contrib/inventory/test_vmware_inventory.py pylint:blacklisted-name
test/units/executor/test_play_iterator.py pylint:blacklisted-name
test/units/mock/path.py future-import-boilerplate
test/units/mock/path.py metaclass-boilerplate
test/units/mock/yaml_helper.py future-import-boilerplate
test/units/mock/yaml_helper.py metaclass-boilerplate
test/units/module_utils/acme/test_acme.py future-import-boilerplate
test/units/module_utils/acme/test_acme.py metaclass-boilerplate
test/units/module_utils/aws/test_aws_module.py metaclass-boilerplate
test/units/module_utils/basic/test__symbolic_mode_to_octal.py future-import-boilerplate
test/units/module_utils/basic/test_deprecate_warn.py future-import-boilerplate
test/units/module_utils/basic/test_deprecate_warn.py metaclass-boilerplate
test/units/module_utils/basic/test_exit_json.py future-import-boilerplate
test/units/module_utils/basic/test_get_file_attributes.py future-import-boilerplate
test/units/module_utils/basic/test_heuristic_log_sanitize.py future-import-boilerplate
test/units/module_utils/basic/test_run_command.py future-import-boilerplate
test/units/module_utils/basic/test_run_command.py pylint:blacklisted-name
test/units/module_utils/basic/test_safe_eval.py future-import-boilerplate
test/units/module_utils/basic/test_tmpdir.py future-import-boilerplate
test/units/module_utils/cloud/test_backoff.py future-import-boilerplate
test/units/module_utils/cloud/test_backoff.py metaclass-boilerplate
test/units/module_utils/common/test_dict_transformations.py future-import-boilerplate
test/units/module_utils/common/test_dict_transformations.py metaclass-boilerplate
test/units/module_utils/conftest.py future-import-boilerplate
test/units/module_utils/conftest.py metaclass-boilerplate
test/units/module_utils/docker/test_common.py future-import-boilerplate
test/units/module_utils/docker/test_common.py metaclass-boilerplate
test/units/module_utils/ec2/test_aws.py future-import-boilerplate
test/units/module_utils/ec2/test_aws.py metaclass-boilerplate
test/units/module_utils/facts/base.py future-import-boilerplate
test/units/module_utils/facts/hardware/test_sunos_get_uptime_facts.py future-import-boilerplate
test/units/module_utils/facts/hardware/test_sunos_get_uptime_facts.py metaclass-boilerplate
test/units/module_utils/facts/network/test_generic_bsd.py future-import-boilerplate
test/units/module_utils/facts/other/test_facter.py future-import-boilerplate
test/units/module_utils/facts/other/test_ohai.py future-import-boilerplate
test/units/module_utils/facts/system/test_lsb.py future-import-boilerplate
test/units/module_utils/facts/test_ansible_collector.py future-import-boilerplate
test/units/module_utils/facts/test_collector.py future-import-boilerplate
test/units/module_utils/facts/test_collectors.py future-import-boilerplate
test/units/module_utils/facts/test_facts.py future-import-boilerplate
test/units/module_utils/facts/test_timeout.py future-import-boilerplate
test/units/module_utils/facts/test_utils.py future-import-boilerplate
test/units/module_utils/gcp/test_auth.py future-import-boilerplate
test/units/module_utils/gcp/test_auth.py metaclass-boilerplate
test/units/module_utils/gcp/test_gcp_utils.py future-import-boilerplate
test/units/module_utils/gcp/test_gcp_utils.py metaclass-boilerplate
test/units/module_utils/gcp/test_utils.py future-import-boilerplate
test/units/module_utils/gcp/test_utils.py metaclass-boilerplate
test/units/module_utils/hwc/test_dict_comparison.py future-import-boilerplate
test/units/module_utils/hwc/test_dict_comparison.py metaclass-boilerplate
test/units/module_utils/hwc/test_hwc_utils.py future-import-boilerplate
test/units/module_utils/hwc/test_hwc_utils.py metaclass-boilerplate
test/units/module_utils/json_utils/test_filter_non_json_lines.py future-import-boilerplate
test/units/module_utils/net_tools/test_netbox.py future-import-boilerplate
test/units/module_utils/net_tools/test_netbox.py metaclass-boilerplate
test/units/module_utils/network/aci/test_aci.py future-import-boilerplate
test/units/module_utils/network/aci/test_aci.py metaclass-boilerplate
test/units/module_utils/network/avi/test_avi_api_utils.py future-import-boilerplate
test/units/module_utils/network/avi/test_avi_api_utils.py metaclass-boilerplate
test/units/module_utils/network/ftd/test_common.py future-import-boilerplate
test/units/module_utils/network/ftd/test_common.py metaclass-boilerplate
test/units/module_utils/network/ftd/test_configuration.py future-import-boilerplate
test/units/module_utils/network/ftd/test_configuration.py metaclass-boilerplate
test/units/module_utils/network/ftd/test_device.py future-import-boilerplate
test/units/module_utils/network/ftd/test_device.py metaclass-boilerplate
test/units/module_utils/network/ftd/test_fdm_swagger_parser.py future-import-boilerplate
test/units/module_utils/network/ftd/test_fdm_swagger_parser.py metaclass-boilerplate
test/units/module_utils/network/ftd/test_fdm_swagger_validator.py future-import-boilerplate
test/units/module_utils/network/ftd/test_fdm_swagger_validator.py metaclass-boilerplate
test/units/module_utils/network/ftd/test_fdm_swagger_with_real_data.py future-import-boilerplate
test/units/module_utils/network/ftd/test_fdm_swagger_with_real_data.py metaclass-boilerplate
test/units/module_utils/network/ftd/test_upsert_functionality.py future-import-boilerplate
test/units/module_utils/network/ftd/test_upsert_functionality.py metaclass-boilerplate
test/units/module_utils/network/nso/test_nso.py metaclass-boilerplate
test/units/module_utils/parsing/test_convert_bool.py future-import-boilerplate
test/units/module_utils/postgresql/test_postgres.py future-import-boilerplate
test/units/module_utils/postgresql/test_postgres.py metaclass-boilerplate
test/units/module_utils/remote_management/dellemc/test_ome.py future-import-boilerplate
test/units/module_utils/remote_management/dellemc/test_ome.py metaclass-boilerplate
test/units/module_utils/test_database.py future-import-boilerplate
test/units/module_utils/test_database.py metaclass-boilerplate
test/units/module_utils/test_distro.py future-import-boilerplate
test/units/module_utils/test_distro.py metaclass-boilerplate
test/units/module_utils/test_hetzner.py future-import-boilerplate
test/units/module_utils/test_hetzner.py metaclass-boilerplate
test/units/module_utils/test_kubevirt.py future-import-boilerplate
test/units/module_utils/test_kubevirt.py metaclass-boilerplate
test/units/module_utils/test_netapp.py future-import-boilerplate
test/units/module_utils/test_text.py future-import-boilerplate
test/units/module_utils/test_utm_utils.py future-import-boilerplate
test/units/module_utils/test_utm_utils.py metaclass-boilerplate
test/units/module_utils/urls/test_Request.py replace-urlopen
test/units/module_utils/urls/test_fetch_url.py replace-urlopen
test/units/module_utils/xenserver/FakeAnsibleModule.py future-import-boilerplate
test/units/module_utils/xenserver/FakeAnsibleModule.py metaclass-boilerplate
test/units/module_utils/xenserver/FakeXenAPI.py future-import-boilerplate
test/units/module_utils/xenserver/FakeXenAPI.py metaclass-boilerplate
test/units/modules/cloud/amazon/test_aws_api_gateway.py metaclass-boilerplate
test/units/modules/cloud/amazon/test_aws_direct_connect_connection.py future-import-boilerplate
test/units/modules/cloud/amazon/test_aws_direct_connect_connection.py metaclass-boilerplate
test/units/modules/cloud/amazon/test_aws_direct_connect_link_aggregation_group.py future-import-boilerplate
test/units/modules/cloud/amazon/test_aws_direct_connect_link_aggregation_group.py metaclass-boilerplate
test/units/modules/cloud/amazon/test_aws_s3.py future-import-boilerplate
test/units/modules/cloud/amazon/test_aws_s3.py metaclass-boilerplate
test/units/modules/cloud/amazon/test_cloudformation.py future-import-boilerplate
test/units/modules/cloud/amazon/test_cloudformation.py metaclass-boilerplate
test/units/modules/cloud/amazon/test_data_pipeline.py future-import-boilerplate
test/units/modules/cloud/amazon/test_data_pipeline.py metaclass-boilerplate
test/units/modules/cloud/amazon/test_ec2_group.py future-import-boilerplate
test/units/modules/cloud/amazon/test_ec2_group.py metaclass-boilerplate
test/units/modules/cloud/amazon/test_ec2_utils.py future-import-boilerplate
test/units/modules/cloud/amazon/test_ec2_utils.py metaclass-boilerplate
test/units/modules/cloud/amazon/test_ec2_vpc_nat_gateway.py future-import-boilerplate
test/units/modules/cloud/amazon/test_ec2_vpc_nat_gateway.py metaclass-boilerplate
test/units/modules/cloud/amazon/test_ec2_vpc_nat_gateway.py pylint:blacklisted-name
test/units/modules/cloud/amazon/test_ec2_vpc_vpn.py future-import-boilerplate
test/units/modules/cloud/amazon/test_ec2_vpc_vpn.py metaclass-boilerplate
test/units/modules/cloud/amazon/test_ec2_vpc_vpn.py pylint:blacklisted-name
test/units/modules/cloud/amazon/test_iam_password_policy.py future-import-boilerplate
test/units/modules/cloud/amazon/test_iam_password_policy.py metaclass-boilerplate
test/units/modules/cloud/amazon/test_kinesis_stream.py future-import-boilerplate
test/units/modules/cloud/amazon/test_kinesis_stream.py metaclass-boilerplate
test/units/modules/cloud/amazon/test_lambda.py metaclass-boilerplate
test/units/modules/cloud/amazon/test_lambda_policy.py metaclass-boilerplate
test/units/modules/cloud/amazon/test_redshift_cross_region_snapshots.py future-import-boilerplate
test/units/modules/cloud/amazon/test_redshift_cross_region_snapshots.py metaclass-boilerplate
test/units/modules/cloud/amazon/test_route53_zone.py future-import-boilerplate
test/units/modules/cloud/amazon/test_route53_zone.py metaclass-boilerplate
test/units/modules/cloud/amazon/test_s3_bucket.py future-import-boilerplate
test/units/modules/cloud/amazon/test_s3_bucket.py metaclass-boilerplate
test/units/modules/cloud/amazon/test_s3_bucket_notification.py future-import-boilerplate
test/units/modules/cloud/amazon/test_s3_bucket_notification.py metaclass-boilerplate
test/units/modules/cloud/cloudstack/test_cs_traffic_type.py future-import-boilerplate
test/units/modules/cloud/cloudstack/test_cs_traffic_type.py metaclass-boilerplate
test/units/modules/cloud/docker/test_docker_container.py future-import-boilerplate
test/units/modules/cloud/docker/test_docker_container.py metaclass-boilerplate
test/units/modules/cloud/docker/test_docker_network.py future-import-boilerplate
test/units/modules/cloud/docker/test_docker_network.py metaclass-boilerplate
test/units/modules/cloud/docker/test_docker_swarm_service.py future-import-boilerplate
test/units/modules/cloud/docker/test_docker_swarm_service.py metaclass-boilerplate
test/units/modules/cloud/docker/test_docker_volume.py future-import-boilerplate
test/units/modules/cloud/docker/test_docker_volume.py metaclass-boilerplate
test/units/modules/cloud/google/test_gce_tag.py future-import-boilerplate
test/units/modules/cloud/google/test_gce_tag.py metaclass-boilerplate
test/units/modules/cloud/google/test_gcp_forwarding_rule.py future-import-boilerplate
test/units/modules/cloud/google/test_gcp_forwarding_rule.py metaclass-boilerplate
test/units/modules/cloud/google/test_gcp_url_map.py future-import-boilerplate
test/units/modules/cloud/google/test_gcp_url_map.py metaclass-boilerplate
test/units/modules/cloud/kubevirt/test_kubevirt_rs.py future-import-boilerplate
test/units/modules/cloud/kubevirt/test_kubevirt_rs.py metaclass-boilerplate
test/units/modules/cloud/kubevirt/test_kubevirt_vm.py future-import-boilerplate
test/units/modules/cloud/kubevirt/test_kubevirt_vm.py metaclass-boilerplate
test/units/modules/cloud/linode/conftest.py future-import-boilerplate
test/units/modules/cloud/linode/conftest.py metaclass-boilerplate
test/units/modules/cloud/linode/test_linode.py metaclass-boilerplate
test/units/modules/cloud/linode_v4/conftest.py future-import-boilerplate
test/units/modules/cloud/linode_v4/conftest.py metaclass-boilerplate
test/units/modules/cloud/linode_v4/test_linode_v4.py metaclass-boilerplate
test/units/modules/cloud/misc/test_terraform.py future-import-boilerplate
test/units/modules/cloud/misc/test_terraform.py metaclass-boilerplate
test/units/modules/cloud/misc/virt_net/conftest.py future-import-boilerplate
test/units/modules/cloud/misc/virt_net/conftest.py metaclass-boilerplate
test/units/modules/cloud/misc/virt_net/test_virt_net.py future-import-boilerplate
test/units/modules/cloud/misc/virt_net/test_virt_net.py metaclass-boilerplate
test/units/modules/cloud/openstack/test_os_server.py future-import-boilerplate
test/units/modules/cloud/openstack/test_os_server.py metaclass-boilerplate
test/units/modules/cloud/xenserver/FakeAnsibleModule.py future-import-boilerplate
test/units/modules/cloud/xenserver/FakeAnsibleModule.py metaclass-boilerplate
test/units/modules/cloud/xenserver/FakeXenAPI.py future-import-boilerplate
test/units/modules/cloud/xenserver/FakeXenAPI.py metaclass-boilerplate
test/units/modules/conftest.py future-import-boilerplate
test/units/modules/conftest.py metaclass-boilerplate
test/units/modules/crypto/test_luks_device.py future-import-boilerplate
test/units/modules/crypto/test_luks_device.py metaclass-boilerplate
test/units/modules/files/test_copy.py future-import-boilerplate
test/units/modules/messaging/rabbitmq/test_rabbimq_user.py future-import-boilerplate
test/units/modules/messaging/rabbitmq/test_rabbimq_user.py metaclass-boilerplate
test/units/modules/monitoring/test_circonus_annotation.py future-import-boilerplate
test/units/modules/monitoring/test_circonus_annotation.py metaclass-boilerplate
test/units/modules/monitoring/test_icinga2_feature.py future-import-boilerplate
test/units/modules/monitoring/test_icinga2_feature.py metaclass-boilerplate
test/units/modules/monitoring/test_pagerduty.py future-import-boilerplate
test/units/modules/monitoring/test_pagerduty.py metaclass-boilerplate
test/units/modules/monitoring/test_pagerduty_alert.py future-import-boilerplate
test/units/modules/monitoring/test_pagerduty_alert.py metaclass-boilerplate
test/units/modules/net_tools/test_nmcli.py future-import-boilerplate
test/units/modules/net_tools/test_nmcli.py metaclass-boilerplate
test/units/modules/network/avi/test_avi_user.py future-import-boilerplate
test/units/modules/network/avi/test_avi_user.py metaclass-boilerplate
test/units/modules/network/checkpoint/test_checkpoint_access_rule.py future-import-boilerplate
test/units/modules/network/checkpoint/test_checkpoint_access_rule.py metaclass-boilerplate
test/units/modules/network/checkpoint/test_checkpoint_host.py future-import-boilerplate
test/units/modules/network/checkpoint/test_checkpoint_host.py metaclass-boilerplate
test/units/modules/network/checkpoint/test_checkpoint_session.py future-import-boilerplate
test/units/modules/network/checkpoint/test_checkpoint_session.py metaclass-boilerplate
test/units/modules/network/checkpoint/test_checkpoint_task_facts.py future-import-boilerplate
test/units/modules/network/checkpoint/test_checkpoint_task_facts.py metaclass-boilerplate
test/units/modules/network/checkpoint/test_cp_network.py future-import-boilerplate
test/units/modules/network/checkpoint/test_cp_network.py metaclass-boilerplate
test/units/modules/network/cloudvision/test_cv_server_provision.py future-import-boilerplate
test/units/modules/network/cloudvision/test_cv_server_provision.py metaclass-boilerplate
test/units/modules/network/cumulus/test_nclu.py future-import-boilerplate
test/units/modules/network/cumulus/test_nclu.py metaclass-boilerplate
test/units/modules/network/ftd/test_ftd_configuration.py future-import-boilerplate
test/units/modules/network/ftd/test_ftd_configuration.py metaclass-boilerplate
test/units/modules/network/ftd/test_ftd_file_download.py future-import-boilerplate
test/units/modules/network/ftd/test_ftd_file_download.py metaclass-boilerplate
test/units/modules/network/ftd/test_ftd_file_upload.py future-import-boilerplate
test/units/modules/network/ftd/test_ftd_file_upload.py metaclass-boilerplate
test/units/modules/network/ftd/test_ftd_install.py future-import-boilerplate
test/units/modules/network/ftd/test_ftd_install.py metaclass-boilerplate
test/units/modules/network/netscaler/netscaler_module.py future-import-boilerplate
test/units/modules/network/netscaler/netscaler_module.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_cs_action.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_cs_action.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_cs_policy.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_cs_policy.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_cs_vserver.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_cs_vserver.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_gslb_service.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_gslb_service.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_gslb_site.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_gslb_site.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_gslb_vserver.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_gslb_vserver.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_lb_monitor.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_lb_monitor.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_lb_vserver.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_lb_vserver.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_module_utils.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_module_utils.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_nitro_request.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_nitro_request.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_save_config.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_save_config.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_server.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_server.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_service.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_service.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_servicegroup.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_servicegroup.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_ssl_certkey.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_ssl_certkey.py metaclass-boilerplate
test/units/modules/network/nso/nso_module.py metaclass-boilerplate
test/units/modules/network/nso/test_nso_action.py metaclass-boilerplate
test/units/modules/network/nso/test_nso_config.py metaclass-boilerplate
test/units/modules/network/nso/test_nso_query.py metaclass-boilerplate
test/units/modules/network/nso/test_nso_show.py metaclass-boilerplate
test/units/modules/network/nso/test_nso_verify.py metaclass-boilerplate
test/units/modules/network/nuage/nuage_module.py future-import-boilerplate
test/units/modules/network/nuage/nuage_module.py metaclass-boilerplate
test/units/modules/network/nuage/test_nuage_vspk.py future-import-boilerplate
test/units/modules/network/nuage/test_nuage_vspk.py metaclass-boilerplate
test/units/modules/network/nxos/test_nxos_acl_interface.py metaclass-boilerplate
test/units/modules/network/radware/test_vdirect_commit.py future-import-boilerplate
test/units/modules/network/radware/test_vdirect_commit.py metaclass-boilerplate
test/units/modules/network/radware/test_vdirect_file.py future-import-boilerplate
test/units/modules/network/radware/test_vdirect_file.py metaclass-boilerplate
test/units/modules/network/radware/test_vdirect_runnable.py future-import-boilerplate
test/units/modules/network/radware/test_vdirect_runnable.py metaclass-boilerplate
test/units/modules/network/routeros/fixtures/system_package_print line-endings
test/units/modules/notification/test_slack.py future-import-boilerplate
test/units/modules/notification/test_slack.py metaclass-boilerplate
test/units/modules/packaging/language/test_gem.py future-import-boilerplate
test/units/modules/packaging/language/test_gem.py metaclass-boilerplate
test/units/modules/packaging/language/test_pip.py future-import-boilerplate
test/units/modules/packaging/language/test_pip.py metaclass-boilerplate
test/units/modules/packaging/os/conftest.py future-import-boilerplate
test/units/modules/packaging/os/conftest.py metaclass-boilerplate
test/units/modules/packaging/os/test_apk.py future-import-boilerplate
test/units/modules/packaging/os/test_apk.py metaclass-boilerplate
test/units/modules/packaging/os/test_apt.py future-import-boilerplate
test/units/modules/packaging/os/test_apt.py metaclass-boilerplate
test/units/modules/packaging/os/test_apt.py pylint:blacklisted-name
test/units/modules/packaging/os/test_rhn_channel.py future-import-boilerplate
test/units/modules/packaging/os/test_rhn_channel.py metaclass-boilerplate
test/units/modules/packaging/os/test_rhn_register.py future-import-boilerplate
test/units/modules/packaging/os/test_rhn_register.py metaclass-boilerplate
test/units/modules/packaging/os/test_yum.py future-import-boilerplate
test/units/modules/packaging/os/test_yum.py metaclass-boilerplate
test/units/modules/remote_management/dellemc/test_ome_device_info.py future-import-boilerplate
test/units/modules/remote_management/dellemc/test_ome_device_info.py metaclass-boilerplate
test/units/modules/remote_management/lxca/test_lxca_cmms.py future-import-boilerplate
test/units/modules/remote_management/lxca/test_lxca_cmms.py metaclass-boilerplate
test/units/modules/remote_management/lxca/test_lxca_nodes.py future-import-boilerplate
test/units/modules/remote_management/lxca/test_lxca_nodes.py metaclass-boilerplate
test/units/modules/remote_management/oneview/conftest.py future-import-boilerplate
test/units/modules/remote_management/oneview/conftest.py metaclass-boilerplate
test/units/modules/remote_management/oneview/hpe_test_utils.py future-import-boilerplate
test/units/modules/remote_management/oneview/hpe_test_utils.py metaclass-boilerplate
test/units/modules/remote_management/oneview/oneview_module_loader.py future-import-boilerplate
test/units/modules/remote_management/oneview/oneview_module_loader.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_datacenter_facts.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_datacenter_facts.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_enclosure_facts.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_enclosure_facts.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_ethernet_network.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_ethernet_network.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_ethernet_network_facts.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_ethernet_network_facts.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_fc_network.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_fc_network.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_fc_network_facts.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_fc_network_facts.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_fcoe_network.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_fcoe_network.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_fcoe_network_facts.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_fcoe_network_facts.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_logical_interconnect_group.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_logical_interconnect_group.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_logical_interconnect_group_facts.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_logical_interconnect_group_facts.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_network_set.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_network_set.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_network_set_facts.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_network_set_facts.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_san_manager.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_san_manager.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_san_manager_facts.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_san_manager_facts.py metaclass-boilerplate
test/units/modules/source_control/gitlab.py future-import-boilerplate
test/units/modules/source_control/gitlab.py metaclass-boilerplate
test/units/modules/source_control/test_bitbucket_access_key.py future-import-boilerplate
test/units/modules/source_control/test_bitbucket_access_key.py metaclass-boilerplate
test/units/modules/source_control/test_bitbucket_pipeline_key_pair.py future-import-boilerplate
test/units/modules/source_control/test_bitbucket_pipeline_key_pair.py metaclass-boilerplate
test/units/modules/source_control/test_bitbucket_pipeline_known_host.py future-import-boilerplate
test/units/modules/source_control/test_bitbucket_pipeline_known_host.py metaclass-boilerplate
test/units/modules/source_control/test_bitbucket_pipeline_variable.py future-import-boilerplate
test/units/modules/source_control/test_bitbucket_pipeline_variable.py metaclass-boilerplate
test/units/modules/source_control/test_gitlab_deploy_key.py future-import-boilerplate
test/units/modules/source_control/test_gitlab_deploy_key.py metaclass-boilerplate
test/units/modules/source_control/test_gitlab_group.py future-import-boilerplate
test/units/modules/source_control/test_gitlab_group.py metaclass-boilerplate
test/units/modules/source_control/test_gitlab_hook.py future-import-boilerplate
test/units/modules/source_control/test_gitlab_hook.py metaclass-boilerplate
test/units/modules/source_control/test_gitlab_project.py future-import-boilerplate
test/units/modules/source_control/test_gitlab_project.py metaclass-boilerplate
test/units/modules/source_control/test_gitlab_runner.py future-import-boilerplate
test/units/modules/source_control/test_gitlab_runner.py metaclass-boilerplate
test/units/modules/source_control/test_gitlab_user.py future-import-boilerplate
test/units/modules/source_control/test_gitlab_user.py metaclass-boilerplate
test/units/modules/storage/hpe3par/test_ss_3par_cpg.py future-import-boilerplate
test/units/modules/storage/hpe3par/test_ss_3par_cpg.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_elementsw_cluster_config.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_elementsw_cluster_config.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_elementsw_cluster_snmp.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_elementsw_cluster_snmp.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_elementsw_initiators.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_elementsw_initiators.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_aggregate.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_aggregate.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_autosupport.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_autosupport.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_broadcast_domain.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_broadcast_domain.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_cifs.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_cifs.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_cifs_server.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_cifs_server.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_cluster.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_cluster.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_cluster_peer.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_cluster_peer.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_command.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_command.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_export_policy_rule.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_export_policy_rule.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_firewall_policy.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_firewall_policy.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_flexcache.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_flexcache.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_gather_facts.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_gather_facts.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_igroup.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_igroup.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_igroup_initiator.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_igroup_initiator.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_interface.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_interface.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_ipspace.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_ipspace.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_job_schedule.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_job_schedule.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_lun_copy.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_lun_copy.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_lun_map.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_lun_map.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_motd.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_motd.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_net_ifgrp.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_net_ifgrp.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_net_port.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_net_port.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_net_routes.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_net_routes.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_net_subnet.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_net_subnet.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_nfs.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_nfs.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_nvme.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_nvme.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_nvme_namespace.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_nvme_namespace.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_nvme_subsystem.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_nvme_subsystem.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_portset.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_portset.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_qos_policy_group.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_qos_policy_group.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_quotas.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_quotas.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_security_key_manager.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_security_key_manager.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_service_processor_network.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_service_processor_network.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_snapmirror.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_snapmirror.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_snapshot.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_snapshot.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_snapshot_policy.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_snapshot_policy.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_software_update.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_software_update.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_svm.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_svm.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_ucadapter.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_ucadapter.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_unix_group.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_unix_group.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_unix_user.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_unix_user.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_user.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_user.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_user_role.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_user_role.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_volume.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_volume.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_volume_clone.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_volume_clone.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_vscan.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_vscan.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_vscan_on_access_policy.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_vscan_on_access_policy.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_vscan_on_demand_task.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_vscan_on_demand_task.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_vscan_scanner_pool.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_vscan_scanner_pool.py metaclass-boilerplate
test/units/modules/storage/netapp/test_netapp.py metaclass-boilerplate
test/units/modules/storage/netapp/test_netapp_e_alerts.py future-import-boilerplate
test/units/modules/storage/netapp/test_netapp_e_asup.py future-import-boilerplate
test/units/modules/storage/netapp/test_netapp_e_auditlog.py future-import-boilerplate
test/units/modules/storage/netapp/test_netapp_e_global.py future-import-boilerplate
test/units/modules/storage/netapp/test_netapp_e_host.py future-import-boilerplate
test/units/modules/storage/netapp/test_netapp_e_iscsi_interface.py future-import-boilerplate
test/units/modules/storage/netapp/test_netapp_e_iscsi_target.py future-import-boilerplate
test/units/modules/storage/netapp/test_netapp_e_ldap.py future-import-boilerplate
test/units/modules/storage/netapp/test_netapp_e_mgmt_interface.py future-import-boilerplate
test/units/modules/storage/netapp/test_netapp_e_syslog.py future-import-boilerplate
test/units/modules/storage/netapp/test_netapp_e_volume.py future-import-boilerplate
test/units/modules/system/interfaces_file/test_interfaces_file.py future-import-boilerplate
test/units/modules/system/interfaces_file/test_interfaces_file.py metaclass-boilerplate
test/units/modules/system/interfaces_file/test_interfaces_file.py pylint:blacklisted-name
test/units/modules/system/test_iptables.py future-import-boilerplate
test/units/modules/system/test_iptables.py metaclass-boilerplate
test/units/modules/system/test_java_keystore.py future-import-boilerplate
test/units/modules/system/test_java_keystore.py metaclass-boilerplate
test/units/modules/system/test_known_hosts.py future-import-boilerplate
test/units/modules/system/test_known_hosts.py metaclass-boilerplate
test/units/modules/system/test_known_hosts.py pylint:ansible-bad-function
test/units/modules/system/test_linux_mountinfo.py future-import-boilerplate
test/units/modules/system/test_linux_mountinfo.py metaclass-boilerplate
test/units/modules/system/test_pamd.py metaclass-boilerplate
test/units/modules/system/test_parted.py future-import-boilerplate
test/units/modules/system/test_systemd.py future-import-boilerplate
test/units/modules/system/test_systemd.py metaclass-boilerplate
test/units/modules/system/test_ufw.py future-import-boilerplate
test/units/modules/system/test_ufw.py metaclass-boilerplate
test/units/modules/utils.py future-import-boilerplate
test/units/modules/utils.py metaclass-boilerplate
test/units/modules/web_infrastructure/test_apache2_module.py future-import-boilerplate
test/units/modules/web_infrastructure/test_apache2_module.py metaclass-boilerplate
test/units/modules/web_infrastructure/test_jenkins_plugin.py future-import-boilerplate
test/units/modules/web_infrastructure/test_jenkins_plugin.py metaclass-boilerplate
test/units/parsing/utils/test_addresses.py future-import-boilerplate
test/units/parsing/utils/test_addresses.py metaclass-boilerplate
test/units/parsing/vault/test_vault.py pylint:blacklisted-name
test/units/playbook/role/test_role.py pylint:blacklisted-name
test/units/playbook/test_attribute.py future-import-boilerplate
test/units/playbook/test_attribute.py metaclass-boilerplate
test/units/playbook/test_conditional.py future-import-boilerplate
test/units/playbook/test_conditional.py metaclass-boilerplate
test/units/plugins/action/test_synchronize.py future-import-boilerplate
test/units/plugins/action/test_synchronize.py metaclass-boilerplate
test/units/plugins/httpapi/test_checkpoint.py future-import-boilerplate
test/units/plugins/httpapi/test_checkpoint.py metaclass-boilerplate
test/units/plugins/httpapi/test_ftd.py future-import-boilerplate
test/units/plugins/httpapi/test_ftd.py metaclass-boilerplate
test/units/plugins/inventory/test_constructed.py future-import-boilerplate
test/units/plugins/inventory/test_constructed.py metaclass-boilerplate
test/units/plugins/inventory/test_group.py future-import-boilerplate
test/units/plugins/inventory/test_group.py metaclass-boilerplate
test/units/plugins/inventory/test_host.py future-import-boilerplate
test/units/plugins/inventory/test_host.py metaclass-boilerplate
test/units/plugins/loader_fixtures/import_fixture.py future-import-boilerplate
test/units/plugins/lookup/test_aws_secret.py metaclass-boilerplate
test/units/plugins/lookup/test_aws_ssm.py metaclass-boilerplate
test/units/plugins/shell/test_cmd.py future-import-boilerplate
test/units/plugins/shell/test_cmd.py metaclass-boilerplate
test/units/plugins/shell/test_powershell.py future-import-boilerplate
test/units/plugins/shell/test_powershell.py metaclass-boilerplate
test/units/plugins/test_plugins.py pylint:blacklisted-name
test/units/pytest/plugins/ansible_pytest_collections.py metaclass-boilerplate
test/units/pytest/plugins/ansible_pytest_coverage.py metaclass-boilerplate
test/units/template/test_templar.py pylint:blacklisted-name
test/units/test_constants.py future-import-boilerplate
test/units/test_context.py future-import-boilerplate
test/units/utils/amazon_placebo_fixtures.py future-import-boilerplate
test/units/utils/amazon_placebo_fixtures.py metaclass-boilerplate
test/units/utils/fixtures/collections/ansible_collections/my_namespace/my_collection/plugins/action/my_action.py future-import-boilerplate
test/units/utils/fixtures/collections/ansible_collections/my_namespace/my_collection/plugins/action/my_action.py metaclass-boilerplate
test/units/utils/fixtures/collections/ansible_collections/my_namespace/my_collection/plugins/module_utils/my_other_util.py future-import-boilerplate
test/units/utils/fixtures/collections/ansible_collections/my_namespace/my_collection/plugins/module_utils/my_other_util.py metaclass-boilerplate
test/units/utils/fixtures/collections/ansible_collections/my_namespace/my_collection/plugins/module_utils/my_util.py future-import-boilerplate
test/units/utils/fixtures/collections/ansible_collections/my_namespace/my_collection/plugins/module_utils/my_util.py metaclass-boilerplate
test/units/utils/kubevirt_fixtures.py future-import-boilerplate
test/units/utils/kubevirt_fixtures.py metaclass-boilerplate
test/units/utils/test_cleanup_tmp_file.py future-import-boilerplate
test/units/utils/test_context_objects.py future-import-boilerplate
test/units/utils/test_encrypt.py future-import-boilerplate
test/units/utils/test_encrypt.py metaclass-boilerplate
test/units/utils/test_helpers.py future-import-boilerplate
test/units/utils/test_helpers.py metaclass-boilerplate
test/units/utils/test_shlex.py future-import-boilerplate
test/units/utils/test_shlex.py metaclass-boilerplate
test/utils/shippable/timing.py shebang
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,918 |
mongodb_shard tests fail on Ubuntu 1604
|
##### SUMMARY
Ubuntu test cases are failing with the following:
```
09:14 The full traceback is:
09:14 Traceback (most recent call last):
09:14 File "/root/.ansible/tmp/ansible-tmp-1564657836.74-41084974317188/AnsiballZ_apt_key.py", line 149, in <module>
09:14 _ansiballz_main()
09:14 File "/root/.ansible/tmp/ansible-tmp-1564657836.74-41084974317188/AnsiballZ_apt_key.py", line 141, in _ansiballz_main
09:14 invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
09:14 File "/root/.ansible/tmp/ansible-tmp-1564657836.74-41084974317188/AnsiballZ_apt_key.py", line 78, in invoke_module
09:14 imp.load_module('__main__', mod, module, MOD_DESC)
09:14 File "/tmp/ansible_apt_key_payload_LfAgyL/__main__.py", line 116, in <module>
09:14 File "/tmp/ansible_apt_key_payload_LfAgyL/ansible_apt_key_payload.zip/ansible/module_utils/urls.py", line 99, in <module>
09:14 File "/usr/local/lib/python2.7/dist-packages/urllib3/contrib/pyopenssl.py", line 46, in <module>
09:14 import OpenSSL.SSL
09:14 File "/usr/lib/python2.7/dist-packages/OpenSSL/__init__.py", line 8, in <module>
09:14 from OpenSSL import rand, crypto, SSL
09:14 File "/usr/lib/python2.7/dist-packages/OpenSSL/SSL.py", line 118, in <module>
09:14 SSL_ST_INIT = _lib.SSL_ST_INIT
09:14 AttributeError: 'module' object has no attribute 'SSL_ST_INIT'
09:14
09:14 fatal: [testhost]: FAILED! => {
09:14 "changed": false,
09:14 "module_stderr": "Traceback (most recent call last):\n File \"/root/.ansible/tmp/ansible-tmp-1564657836.74-41084974317188/AnsiballZ_apt_key.py\", line 149, in <module>\n _ansiballz_main()\n File \"/root/.ansible/tmp/ansible-tmp-1564657836.74-41084974317188/AnsiballZ_apt_key.py\", line 141, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/root/.ansible/tmp/ansible-tmp-1564657836.74-41084974317188/AnsiballZ_apt_key.py\", line 78, in invoke_module\n imp.load_module('__main__', mod, module, MOD_DESC)\n File \"/tmp/ansible_apt_key_payload_LfAgyL/__main__.py\", line 116, in <module>\n File \"/tmp/ansible_apt_key_payload_LfAgyL/ansible_apt_key_payload.zip/ansible/module_utils/urls.py\", line 99, in <module>\n File \"/usr/local/lib/python2.7/dist-packages/urllib3/contrib/pyopenssl.py\", line 46, in <module>\n import OpenSSL.SSL\n File \"/usr/lib/python2.7/dist-packages/OpenSSL/__init__.py\", line 8, in <module>\n from OpenSSL import rand, crypto, SSL\n File \"/usr/lib/python2.7/dist-packages/OpenSSL/SSL.py\", line 118, in <module>\n SSL_ST_INIT = _lib.SSL_ST_INIT\nAttributeError: 'module' object has no attribute 'SSL_ST_INIT'\n",
09:14 "module_stdout": "",
09:14 "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
09:14 "rc": 1
09:14 }
```
https://app.shippable.com/github/ansible/ansible/runs/135011/59/console
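The `AttributeError: 'module' object has no attribute 'SSL_ST_INIT'` above is the classic symptom of a stale distro pyOpenSSL being loaded alongside a newer `cryptography` that no longer exposes the `SSL_ST_*` constants. A minimal sketch of the version pairing that triggers it (the exact version thresholds here are assumptions for illustration, not taken from the issue):

```python
# Hypothetical compatibility check: older pyOpenSSL releases read SSL_ST_INIT
# straight off the cryptography bindings, which newer cryptography removed.
def is_incompatible(pyopenssl_version, cryptography_version):
    """Return True for the assumed known-bad pairing of versions."""
    def parse(v):
        # Turn "0.15.1" into (0, 15, 1) so tuples compare numerically.
        return tuple(int(x) for x in v.split("."))
    return parse(pyopenssl_version) < (16, 2) and parse(cryptography_version) >= (1, 9)

# Ubuntu 16.04 ships pyOpenSSL 0.15.x system-wide; pairing it with a
# pip-installed modern cryptography reproduces the AttributeError.
print(is_incompatible("0.15.1", "2.7"))   # True
print(is_incompatible("19.0.0", "2.7"))  # False
```

The usual remediation is upgrading pyOpenSSL via pip so the pip-installed copy shadows the distro package, or removing the `python-openssl` system package entirely.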
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/modules/packaging/os/apt_key.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
2.9
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
Ubuntu 16.04
##### STEPS TO REPRODUCE
Above Shippable link
##### EXPECTED RESULTS
Test should pass
##### ACTUAL RESULTS
Test fails with stack trace
|
https://github.com/ansible/ansible/issues/59918
|
https://github.com/ansible/ansible/pull/60083
|
d3da8e4a5b7c87ea6bb4f1345300ddb0a833a6b2
|
0892d48ebc94210d3e0526aa425f7fe7a64d1d2f
| 2019-08-01T11:42:19Z |
python
| 2019-08-05T19:01:58Z |
test/integration/targets/mongodb_replicaset/aliases
|
destructive
shippable/posix/group1
skip/osx
skip/freebsd
skip/rhel
needs/root
disabled
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,918 |
mongodb_shard tests fail on Ubuntu 1604
|
##### SUMMARY
Ubuntu test cases are failing with the following:
```
09:14 The full traceback is:
09:14 Traceback (most recent call last):
09:14 File "/root/.ansible/tmp/ansible-tmp-1564657836.74-41084974317188/AnsiballZ_apt_key.py", line 149, in <module>
09:14 _ansiballz_main()
09:14 File "/root/.ansible/tmp/ansible-tmp-1564657836.74-41084974317188/AnsiballZ_apt_key.py", line 141, in _ansiballz_main
09:14 invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
09:14 File "/root/.ansible/tmp/ansible-tmp-1564657836.74-41084974317188/AnsiballZ_apt_key.py", line 78, in invoke_module
09:14 imp.load_module('__main__', mod, module, MOD_DESC)
09:14 File "/tmp/ansible_apt_key_payload_LfAgyL/__main__.py", line 116, in <module>
09:14 File "/tmp/ansible_apt_key_payload_LfAgyL/ansible_apt_key_payload.zip/ansible/module_utils/urls.py", line 99, in <module>
09:14 File "/usr/local/lib/python2.7/dist-packages/urllib3/contrib/pyopenssl.py", line 46, in <module>
09:14 import OpenSSL.SSL
09:14 File "/usr/lib/python2.7/dist-packages/OpenSSL/__init__.py", line 8, in <module>
09:14 from OpenSSL import rand, crypto, SSL
09:14 File "/usr/lib/python2.7/dist-packages/OpenSSL/SSL.py", line 118, in <module>
09:14 SSL_ST_INIT = _lib.SSL_ST_INIT
09:14 AttributeError: 'module' object has no attribute 'SSL_ST_INIT'
09:14
09:14 fatal: [testhost]: FAILED! => {
09:14 "changed": false,
09:14 "module_stderr": "Traceback (most recent call last):\n File \"/root/.ansible/tmp/ansible-tmp-1564657836.74-41084974317188/AnsiballZ_apt_key.py\", line 149, in <module>\n _ansiballz_main()\n File \"/root/.ansible/tmp/ansible-tmp-1564657836.74-41084974317188/AnsiballZ_apt_key.py\", line 141, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/root/.ansible/tmp/ansible-tmp-1564657836.74-41084974317188/AnsiballZ_apt_key.py\", line 78, in invoke_module\n imp.load_module('__main__', mod, module, MOD_DESC)\n File \"/tmp/ansible_apt_key_payload_LfAgyL/__main__.py\", line 116, in <module>\n File \"/tmp/ansible_apt_key_payload_LfAgyL/ansible_apt_key_payload.zip/ansible/module_utils/urls.py\", line 99, in <module>\n File \"/usr/local/lib/python2.7/dist-packages/urllib3/contrib/pyopenssl.py\", line 46, in <module>\n import OpenSSL.SSL\n File \"/usr/lib/python2.7/dist-packages/OpenSSL/__init__.py\", line 8, in <module>\n from OpenSSL import rand, crypto, SSL\n File \"/usr/lib/python2.7/dist-packages/OpenSSL/SSL.py\", line 118, in <module>\n SSL_ST_INIT = _lib.SSL_ST_INIT\nAttributeError: 'module' object has no attribute 'SSL_ST_INIT'\n",
09:14 "module_stdout": "",
09:14 "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
09:14 "rc": 1
09:14 }
```
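The `SSL_ST_INIT` traceback above is the classic symptom of a stale pyOpenSSL build paired with a newer `cryptography` release that removed the `SSL_ST_*` constants. A minimal, hedged probe (an illustration, not part of the original report or of Ansible's code) that distinguishes a usable pyOpenSSL from the broken state urllib3's contrib shim trips over:

```python
# Hedged sketch: probe whether pyOpenSSL can be imported at all.
# On the broken hosts above, "import OpenSSL.SSL" itself raises
# AttributeError ('module' object has no attribute 'SSL_ST_INIT'),
# so urllib3's pyopenssl contrib shim must be skipped in that case.
def pyopenssl_usable():
    try:
        import OpenSSL.SSL  # noqa: F401 -- fails with AttributeError on old builds
        return True
    except (ImportError, AttributeError):
        return False

print(pyopenssl_usable())
```

The usual remedy on such hosts is reinstalling pyOpenSSL so it matches the installed `cryptography` version.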
https://app.shippable.com/github/ansible/ansible/runs/135011/59/console
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
lib/ansible/modules/packaging/os/apt_key.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
2.9
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Ubuntu 16.04
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Above Shippable link
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Test should pass
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Test fails with stack trace
|
https://github.com/ansible/ansible/issues/59918
|
https://github.com/ansible/ansible/pull/60083
|
d3da8e4a5b7c87ea6bb4f1345300ddb0a833a6b2
|
0892d48ebc94210d3e0526aa425f7fe7a64d1d2f
| 2019-08-01T11:42:19Z |
python
| 2019-08-05T19:01:58Z |
test/integration/targets/mongodb_shard/aliases
|
destructive
shippable/posix/group1
skip/osx
skip/freebsd
skip/rhel
needs/root
disabled
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,918 |
mongodb_shard tests fail on Ubuntu 1604
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Ubuntu test cases are failing with following -
```
09:14 The full traceback is:
09:14 Traceback (most recent call last):
09:14 File "/root/.ansible/tmp/ansible-tmp-1564657836.74-41084974317188/AnsiballZ_apt_key.py", line 149, in <module>
09:14 _ansiballz_main()
09:14 File "/root/.ansible/tmp/ansible-tmp-1564657836.74-41084974317188/AnsiballZ_apt_key.py", line 141, in _ansiballz_main
09:14 invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
09:14 File "/root/.ansible/tmp/ansible-tmp-1564657836.74-41084974317188/AnsiballZ_apt_key.py", line 78, in invoke_module
09:14 imp.load_module('__main__', mod, module, MOD_DESC)
09:14 File "/tmp/ansible_apt_key_payload_LfAgyL/__main__.py", line 116, in <module>
09:14 File "/tmp/ansible_apt_key_payload_LfAgyL/ansible_apt_key_payload.zip/ansible/module_utils/urls.py", line 99, in <module>
09:14 File "/usr/local/lib/python2.7/dist-packages/urllib3/contrib/pyopenssl.py", line 46, in <module>
09:14 import OpenSSL.SSL
09:14 File "/usr/lib/python2.7/dist-packages/OpenSSL/__init__.py", line 8, in <module>
09:14 from OpenSSL import rand, crypto, SSL
09:14 File "/usr/lib/python2.7/dist-packages/OpenSSL/SSL.py", line 118, in <module>
09:14 SSL_ST_INIT = _lib.SSL_ST_INIT
09:14 AttributeError: 'module' object has no attribute 'SSL_ST_INIT'
09:14
09:14 fatal: [testhost]: FAILED! => {
09:14 "changed": false,
09:14 "module_stderr": "Traceback (most recent call last):\n File \"/root/.ansible/tmp/ansible-tmp-1564657836.74-41084974317188/AnsiballZ_apt_key.py\", line 149, in <module>\n _ansiballz_main()\n File \"/root/.ansible/tmp/ansible-tmp-1564657836.74-41084974317188/AnsiballZ_apt_key.py\", line 141, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/root/.ansible/tmp/ansible-tmp-1564657836.74-41084974317188/AnsiballZ_apt_key.py\", line 78, in invoke_module\n imp.load_module('__main__', mod, module, MOD_DESC)\n File \"/tmp/ansible_apt_key_payload_LfAgyL/__main__.py\", line 116, in <module>\n File \"/tmp/ansible_apt_key_payload_LfAgyL/ansible_apt_key_payload.zip/ansible/module_utils/urls.py\", line 99, in <module>\n File \"/usr/local/lib/python2.7/dist-packages/urllib3/contrib/pyopenssl.py\", line 46, in <module>\n import OpenSSL.SSL\n File \"/usr/lib/python2.7/dist-packages/OpenSSL/__init__.py\", line 8, in <module>\n from OpenSSL import rand, crypto, SSL\n File \"/usr/lib/python2.7/dist-packages/OpenSSL/SSL.py\", line 118, in <module>\n SSL_ST_INIT = _lib.SSL_ST_INIT\nAttributeError: 'module' object has no attribute 'SSL_ST_INIT'\n",
09:14 "module_stdout": "",
09:14 "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
09:14 "rc": 1
09:14 }
```
https://app.shippable.com/github/ansible/ansible/runs/135011/59/console
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
lib/ansible/modules/packaging/os/apt_key.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
2.9
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Ubuntu 16.04
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Above Shippable link
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Test should pass
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Test fails with stack trace
|
https://github.com/ansible/ansible/issues/59918
|
https://github.com/ansible/ansible/pull/60083
|
d3da8e4a5b7c87ea6bb4f1345300ddb0a833a6b2
|
0892d48ebc94210d3e0526aa425f7fe7a64d1d2f
| 2019-08-01T11:42:19Z |
python
| 2019-08-05T19:01:58Z |
test/integration/targets/setup_mongodb/defaults/main.yml
|
mongodb_version: "4.0"
apt_xenial:
keyserver: "keyserver.ubuntu.com"
keyserver_id: "2930ADAE8CAF5059EE73BB4B58712A2291FA4AD5"
repo: "deb [ arch=amd64 ] http://repo.mongodb.org/apt/ubuntu {{ansible_distribution_release}}/mongodb-org/{{mongodb_version}} multiverse"
apt_bionic:
keyserver: "keyserver.ubuntu.com"
keyserver_id: "9DA31620334BD75D9DCB49F368818C72E52529D4"
repo: "deb [ arch=amd64 ] https://repo.mongodb.org/apt/ubuntu {{ansible_distribution_release}}/mongodb-org/{{mongodb_version}} multiverse"
mongodb_packages:
mongod: mongodb-org-server
mongos: mongodb-org-mongos
mongo: mongodb-org-shell
yum:
name: mongodb-org
description: "Official MongoDB {{mongodb_version}} yum repo"
baseurl: https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/{{mongodb_version}}/x86_64/
gpgcheck: 1
gpgkey: https://www.mongodb.org/static/pgp/server-{{mongodb_version}}.asc
redhat8url: https://repo.mongodb.org/yum/redhat/7/mongodb-org/{{mongodb_version}}/x86_64/
fedoraurl: https://repo.mongodb.org/yum/amazon/2013.03/mongodb-org/{{mongodb_version}}/x86_64/
debian_packages_py2:
- python-dev
- python-setuptools
- python-pip
debian_packages_py36:
- python3.6-dev
- python3-setuptools
- python3-pip
redhat_packages_py2:
- python-devel
- python-setuptools
- python-pip
redhat_packages_py3:
- python3-devel
- python3-setuptools
- python3-pip
pip_packages:
- psutil
- requests[security]
- pymongo
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,918 |
mongodb_shard tests fail on Ubuntu 1604
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Ubuntu test cases are failing with following -
```
09:14 The full traceback is:
09:14 Traceback (most recent call last):
09:14 File "/root/.ansible/tmp/ansible-tmp-1564657836.74-41084974317188/AnsiballZ_apt_key.py", line 149, in <module>
09:14 _ansiballz_main()
09:14 File "/root/.ansible/tmp/ansible-tmp-1564657836.74-41084974317188/AnsiballZ_apt_key.py", line 141, in _ansiballz_main
09:14 invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
09:14 File "/root/.ansible/tmp/ansible-tmp-1564657836.74-41084974317188/AnsiballZ_apt_key.py", line 78, in invoke_module
09:14 imp.load_module('__main__', mod, module, MOD_DESC)
09:14 File "/tmp/ansible_apt_key_payload_LfAgyL/__main__.py", line 116, in <module>
09:14 File "/tmp/ansible_apt_key_payload_LfAgyL/ansible_apt_key_payload.zip/ansible/module_utils/urls.py", line 99, in <module>
09:14 File "/usr/local/lib/python2.7/dist-packages/urllib3/contrib/pyopenssl.py", line 46, in <module>
09:14 import OpenSSL.SSL
09:14 File "/usr/lib/python2.7/dist-packages/OpenSSL/__init__.py", line 8, in <module>
09:14 from OpenSSL import rand, crypto, SSL
09:14 File "/usr/lib/python2.7/dist-packages/OpenSSL/SSL.py", line 118, in <module>
09:14 SSL_ST_INIT = _lib.SSL_ST_INIT
09:14 AttributeError: 'module' object has no attribute 'SSL_ST_INIT'
09:14
09:14 fatal: [testhost]: FAILED! => {
09:14 "changed": false,
09:14 "module_stderr": "Traceback (most recent call last):\n File \"/root/.ansible/tmp/ansible-tmp-1564657836.74-41084974317188/AnsiballZ_apt_key.py\", line 149, in <module>\n _ansiballz_main()\n File \"/root/.ansible/tmp/ansible-tmp-1564657836.74-41084974317188/AnsiballZ_apt_key.py\", line 141, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/root/.ansible/tmp/ansible-tmp-1564657836.74-41084974317188/AnsiballZ_apt_key.py\", line 78, in invoke_module\n imp.load_module('__main__', mod, module, MOD_DESC)\n File \"/tmp/ansible_apt_key_payload_LfAgyL/__main__.py\", line 116, in <module>\n File \"/tmp/ansible_apt_key_payload_LfAgyL/ansible_apt_key_payload.zip/ansible/module_utils/urls.py\", line 99, in <module>\n File \"/usr/local/lib/python2.7/dist-packages/urllib3/contrib/pyopenssl.py\", line 46, in <module>\n import OpenSSL.SSL\n File \"/usr/lib/python2.7/dist-packages/OpenSSL/__init__.py\", line 8, in <module>\n from OpenSSL import rand, crypto, SSL\n File \"/usr/lib/python2.7/dist-packages/OpenSSL/SSL.py\", line 118, in <module>\n SSL_ST_INIT = _lib.SSL_ST_INIT\nAttributeError: 'module' object has no attribute 'SSL_ST_INIT'\n",
09:14 "module_stdout": "",
09:14 "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
09:14 "rc": 1
09:14 }
```
https://app.shippable.com/github/ansible/ansible/runs/135011/59/console
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
lib/ansible/modules/packaging/os/apt_key.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
2.9
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Ubuntu 16.04
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Above Shippable link
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Test should pass
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Test fails with stack trace
|
https://github.com/ansible/ansible/issues/59918
|
https://github.com/ansible/ansible/pull/60083
|
d3da8e4a5b7c87ea6bb4f1345300ddb0a833a6b2
|
0892d48ebc94210d3e0526aa425f7fe7a64d1d2f
| 2019-08-01T11:42:19Z |
python
| 2019-08-05T19:01:58Z |
test/integration/targets/setup_mongodb/handlers/main.yml
|
- name: Remove debian_packages_py2
apt:
name: "{{ debian_packages_py2 }}"
state: absent
- name: Remove debian_packages_py36
apt:
name: "{{ debian_packages_py36 }}"
state: absent
- name: Remove redhat_packages_py2
yum:
name: "{{ redhat_packages_py2 }}"
state: absent
- name: Remove redhat_packages_py3
yum:
name: "{{ redhat_packages_py3 }}"
state: absent
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,918 |
mongodb_shard tests fail on Ubuntu 1604
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Ubuntu test cases are failing with following -
```
09:14 The full traceback is:
09:14 Traceback (most recent call last):
09:14 File "/root/.ansible/tmp/ansible-tmp-1564657836.74-41084974317188/AnsiballZ_apt_key.py", line 149, in <module>
09:14 _ansiballz_main()
09:14 File "/root/.ansible/tmp/ansible-tmp-1564657836.74-41084974317188/AnsiballZ_apt_key.py", line 141, in _ansiballz_main
09:14 invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
09:14 File "/root/.ansible/tmp/ansible-tmp-1564657836.74-41084974317188/AnsiballZ_apt_key.py", line 78, in invoke_module
09:14 imp.load_module('__main__', mod, module, MOD_DESC)
09:14 File "/tmp/ansible_apt_key_payload_LfAgyL/__main__.py", line 116, in <module>
09:14 File "/tmp/ansible_apt_key_payload_LfAgyL/ansible_apt_key_payload.zip/ansible/module_utils/urls.py", line 99, in <module>
09:14 File "/usr/local/lib/python2.7/dist-packages/urllib3/contrib/pyopenssl.py", line 46, in <module>
09:14 import OpenSSL.SSL
09:14 File "/usr/lib/python2.7/dist-packages/OpenSSL/__init__.py", line 8, in <module>
09:14 from OpenSSL import rand, crypto, SSL
09:14 File "/usr/lib/python2.7/dist-packages/OpenSSL/SSL.py", line 118, in <module>
09:14 SSL_ST_INIT = _lib.SSL_ST_INIT
09:14 AttributeError: 'module' object has no attribute 'SSL_ST_INIT'
09:14
09:14 fatal: [testhost]: FAILED! => {
09:14 "changed": false,
09:14 "module_stderr": "Traceback (most recent call last):\n File \"/root/.ansible/tmp/ansible-tmp-1564657836.74-41084974317188/AnsiballZ_apt_key.py\", line 149, in <module>\n _ansiballz_main()\n File \"/root/.ansible/tmp/ansible-tmp-1564657836.74-41084974317188/AnsiballZ_apt_key.py\", line 141, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/root/.ansible/tmp/ansible-tmp-1564657836.74-41084974317188/AnsiballZ_apt_key.py\", line 78, in invoke_module\n imp.load_module('__main__', mod, module, MOD_DESC)\n File \"/tmp/ansible_apt_key_payload_LfAgyL/__main__.py\", line 116, in <module>\n File \"/tmp/ansible_apt_key_payload_LfAgyL/ansible_apt_key_payload.zip/ansible/module_utils/urls.py\", line 99, in <module>\n File \"/usr/local/lib/python2.7/dist-packages/urllib3/contrib/pyopenssl.py\", line 46, in <module>\n import OpenSSL.SSL\n File \"/usr/lib/python2.7/dist-packages/OpenSSL/__init__.py\", line 8, in <module>\n from OpenSSL import rand, crypto, SSL\n File \"/usr/lib/python2.7/dist-packages/OpenSSL/SSL.py\", line 118, in <module>\n SSL_ST_INIT = _lib.SSL_ST_INIT\nAttributeError: 'module' object has no attribute 'SSL_ST_INIT'\n",
09:14 "module_stdout": "",
09:14 "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
09:14 "rc": 1
09:14 }
```
https://app.shippable.com/github/ansible/ansible/runs/135011/59/console
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
lib/ansible/modules/packaging/os/apt_key.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
2.9
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Ubuntu 16.04
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Above Shippable link
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Test should pass
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Test fails with stack trace
|
https://github.com/ansible/ansible/issues/59918
|
https://github.com/ansible/ansible/pull/60083
|
d3da8e4a5b7c87ea6bb4f1345300ddb0a833a6b2
|
0892d48ebc94210d3e0526aa425f7fe7a64d1d2f
| 2019-08-01T11:42:19Z |
python
| 2019-08-05T19:01:58Z |
test/integration/targets/setup_mongodb/tasks/main.yml
|
# (c) 2019, Rhys Campbell <[email protected]>
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# ============================================================
# https://docs.mongodb.com/manual/tutorial/install-mongodb-on-ubuntu/
# Support for Ubuntu 14.04 has been removed from MongoDB 4.0.10+, 3.6.13+, and 3.4.21+.
# CentOS6 has python version issues
- meta: end_play
when: (ansible_distribution == 'Ubuntu' and ansible_distribution_version == '14.04')
or (ansible_os_family == "RedHat" and ansible_distribution_major_version == '6')
or ansible_os_family == "Suse"
or ansible_distribution == 'Fedora'
or (ansible_distribution == 'CentOS' and ansible_distribution_version == '7')
# Ubuntu
- name: Import MongoDB public GPG Key xenial
apt_key:
keyserver: "{{ apt_xenial.keyserver }}"
id: "{{ apt_xenial.keyserver_id }}"
when:
- ansible_distribution_version == "16.04"
- ansible_distribution == 'Ubuntu'
- name: Add MongoDB repository into sources list xenial
apt_repository:
repo: "{{ apt_xenial.repo }}"
state: present
when:
- ansible_distribution_version == "16.04"
- ansible_distribution == 'Ubuntu'
- name: Import MongoDB public GPG Key bionic
apt_key:
keyserver: "{{ apt_bionic.keyserver }}"
id: "{{ apt_bionic.keyserver_id }}"
when:
- ansible_distribution_version == "18.04"
- ansible_distribution == 'Ubuntu'
- name: Add MongoDB repository into sources list bionic
apt_repository:
repo: "{{ apt_bionic.repo }}"
state: present
when:
- ansible_distribution_version == "18.04"
- ansible_distribution == 'Ubuntu'
- name: Update apt keys
shell: apt-key update && apt-get update
when:
- mongodb_version != "4.0"
- ansible_distribution == 'Ubuntu'
# Need to handle various platforms here. Package name will not always be the same
- name: Ensure mongod package is installed
apt:
name: "{{ mongodb_packages.mongod }}"
state: present
force: yes
when:
- ansible_distribution == 'Ubuntu'
- name: Ensure mongos package is installed
apt:
name: "{{ mongodb_packages.mongos }}"
state: present
force: yes
when:
- ansible_distribution == 'Ubuntu'
- name: Ensure mongo client is installed
apt:
name: "{{ mongodb_packages.mongo }}"
state: present
force: yes
when:
- ansible_distribution == 'Ubuntu'
# EOF Ubuntu
# Redhat
- name: Add MongoDB repo
yum_repository:
name: "{{ yum.name }}"
description: "{{ yum.description }}"
baseurl: "{{ yum.baseurl }}"
gpgcheck: "{{ yum.gpgcheck }}"
gpgkey: "{{ yum.gpgkey }}"
when:
- ansible_os_family == "RedHat"
- ansible_distribution_version.split('.')[0]|int <= 7
- not ansible_distribution == "Fedora"
- name: RedHat 8 repo not yet available so use 7 url
yum_repository:
name: "{{ yum.name }}"
description: "{{ yum.description }}"
baseurl: "{{ yum.redhat8url }}"
gpgcheck: "{{ yum.gpgcheck }}"
gpgkey: "{{ yum.gpgkey }}"
when:
- ansible_os_family == "RedHat"
- ansible_distribution_version.split('.')[0]|int == 8
- not ansible_distribution == "Fedora"
- name: Another url for Fedora based systems
yum_repository:
name: "{{ yum.name }}"
description: "{{ yum.description }}"
baseurl: "{{ yum.fedoraurl }}"
gpgcheck: "{{ yum.gpgcheck }}"
gpgkey: "{{ yum.gpgkey }}"
when:
- ansible_distribution == "Fedora"
- name: Ensure mongod package is installed
yum:
name: "{{ mongodb_packages.mongod }}"
state: present
when: ansible_os_family == "RedHat"
- name: Ensure mongos package is installed
yum:
name: "{{ mongodb_packages.mongos }}"
state: present
when: ansible_os_family == "RedHat"
- name: Ensure mongo client is installed
yum:
name: "{{ mongodb_packages.mongo }}"
state: present
when: ansible_os_family == "RedHat"
# EOF Redhat
- name: Install debian_packages
apt:
name: "{{ debian_packages_py2 }}"
when:
- ansible_os_family == "Debian"
- ansible_distribution_version == "16.04"
notify: Remove debian_packages_py2
- name: Install debian_packages
apt:
name: "{{ debian_packages_py36 }}"
when:
- ansible_os_family == "Debian"
- ansible_distribution_version == "18.04"
notify: Remove debian_packages_py36
- name: Install redhat_packages_py2
yum:
name: "{{ redhat_packages_py2 }}"
when:
- ansible_os_family == "RedHat"
- ansible_distribution_version|float < 8
notify: Remove redhat_packages_py2
- name: Install redhat_packages_py3
yum:
name: "{{ redhat_packages_py3 }}"
when:
- ansible_os_family == "RedHat"
- ansible_distribution_version|float >= 8
notify: Remove redhat_packages_py3
- name: Install pip packages
pip:
name: "{{ pip_packages }}"
state: present
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 60,043 |
"Database name has not been passed, used default database to connect to."
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
postgresql_slot, postgresql_user
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.3
config file = /etc/ansible/ansible.cfg
configured module search path = ['~/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.4 (default, Jul 16 2019, 07:12:58) [GCC 9.1.0]
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
archlinux latest
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- postgresql_user:
name: user1
password: password
become: true
become_user: postgres
- postgresql_slot:
name: slot1
become: true
become_user: postgres
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Consistent warnings. Either both postgresql_user and postgresql_slot should give a warning or neither should.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
postgresql_slot gives warning "[WARNING]: Database name has not been passed, used default database to connect to." but postgresql_user doesn't, even though both of them have not specified a database.
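A hedged sketch of how the inconsistency could be resolved (the helper name and signature below are hypothetical, not the actual `module_utils.postgres` API): both modules would funnel connection setup through a single function that emits the warning exactly when no database name was passed, so postgresql_user and postgresql_slot behave identically:

```python
# Hypothetical helper (illustration only): emit the "no database" warning
# from one place so every postgresql_* module warns consistently.
def warn_if_no_db(params, warn):
    # Warn only when neither 'db' nor its 'login_db' alias was supplied.
    if not params.get("db") and not params.get("login_db"):
        warn("Database name has not been passed, "
             "used default database to connect to.")

warnings = []
warn_if_no_db({"name": "slot1"}, warnings.append)   # no db given -> warns
warn_if_no_db({"db": "acme"}, warnings.append)      # db given -> silent
print(len(warnings))  # → 1
```

With such a helper, the choice is made once: either every module without a `db` parameter warns, or none do.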
|
https://github.com/ansible/ansible/issues/60043
|
https://github.com/ansible/ansible/pull/60105
|
aecdfd397ee7e96d7f2b40e3da2cbd24b7bc3f94
|
d2cc9f5f06816935747638f5b4019b84d5932a51
| 2019-08-03T17:32:34Z |
python
| 2019-08-05T20:28:00Z |
lib/ansible/modules/database/postgresql/postgresql_membership.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Andrew Klychkov (@Andersson007) <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'supported_by': 'community',
'status': ['preview']
}
DOCUMENTATION = r'''
---
module: postgresql_membership
short_description: Add or remove PostgreSQL roles from groups
description:
- Adds or removes PostgreSQL roles from groups (other roles)
U(https://www.postgresql.org/docs/current/role-membership.html).
- Users are roles with login privilege (see U(https://www.postgresql.org/docs/current/role-attributes.html) for more information).
- Groups are PostgreSQL roles, usually without the LOGIN privilege.
- "Common use case:"
- 1) add a new group (or groups) using the M(postgresql_user) module
U(https://docs.ansible.com/ansible/latest/modules/postgresql_user_module.html) with I(role_attr_flags=NOLOGIN)
- 2) grant them the desired privileges using the M(postgresql_privs) module
U(https://docs.ansible.com/ansible/latest/modules/postgresql_privs_module.html)
- 3) add the desired PostgreSQL users to the new group (or groups) using this module
version_added: '2.8'
options:
groups:
description:
- The list of groups (roles) that need to be granted to or revoked from I(target_roles).
required: yes
type: list
aliases:
- group
- source_role
- source_roles
target_roles:
description:
- The list of target roles (groups will be granted to them).
required: yes
type: list
aliases:
- target_role
- users
- user
fail_on_role:
description:
- If C(yes), fail when group or target_role doesn't exist. If C(no), just warn and continue.
default: yes
type: bool
state:
description:
- Membership state.
- I(state=present) implies the I(groups) must be granted to I(target_roles).
- I(state=absent) implies the I(groups) must be revoked from I(target_roles).
type: str
default: present
choices: [ absent, present ]
db:
description:
- Name of database to connect to.
type: str
aliases:
- login_db
session_role:
description:
- Switch to session_role after connecting.
The specified session_role must be a role that the current login_user is a member of.
- Permissions checking for SQL commands is carried out as though
the session_role were the one that had logged in originally.
type: str
author:
- Andrew Klychkov (@Andersson007)
extends_documentation_fragment: postgres
'''
EXAMPLES = r'''
- name: Grant role read_only to alice and bob
postgresql_membership:
group: read_only
target_roles:
- alice
- bob
state: present
# you can also use target_roles: alice,bob,etc to pass the role list
- name: Revoke role read_only and exec_func from bob. Ignore if roles don't exist
postgresql_membership:
groups:
- read_only
- exec_func
target_role: bob
fail_on_role: no
state: absent
'''
RETURN = r'''
queries:
description: List of executed queries.
returned: always
type: list
sample: [ "GRANT \"user_ro\" TO \"alice\"" ]
granted:
description: Dict of granted groups and roles.
returned: if I(state=present)
type: dict
sample: { "ro_group": [ "alice", "bob" ] }
revoked:
description: Dict of revoked groups and roles.
returned: if I(state=absent)
type: dict
sample: { "ro_group": [ "alice", "bob" ] }
state:
description: Membership state that tried to be set.
returned: always
type: str
sample: "present"
'''
try:
from psycopg2.extras import DictCursor
except ImportError:
# psycopg2 is checked by connect_to_db()
# from ansible.module_utils.postgres
pass
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.database import pg_quote_identifier
from ansible.module_utils.postgres import (
connect_to_db,
exec_sql,
get_conn_params,
postgres_common_argument_spec,
)
class PgMembership(object):
def __init__(self, module, cursor, groups, target_roles, fail_on_role):
self.module = module
self.cursor = cursor
self.target_roles = [r.strip() for r in target_roles]
self.groups = [r.strip() for r in groups]
self.executed_queries = []
self.granted = {}
self.revoked = {}
self.fail_on_role = fail_on_role
self.non_existent_roles = []
self.changed = False
self.__check_roles_exist()
def grant(self):
for group in self.groups:
self.granted[group] = []
for role in self.target_roles:
# If role is in a group now, pass:
if self.__check_membership(group, role):
continue
query = "GRANT %s TO %s" % (pg_quote_identifier(group, 'role'),
pg_quote_identifier(role, 'role'))
self.changed = exec_sql(self, query, ddl=True)
if self.changed:
self.granted[group].append(role)
return self.changed
def revoke(self):
for group in self.groups:
self.revoked[group] = []
for role in self.target_roles:
# If role is not in a group now, pass:
if not self.__check_membership(group, role):
continue
query = "REVOKE %s FROM %s" % (pg_quote_identifier(group, 'role'),
pg_quote_identifier(role, 'role'))
self.changed = exec_sql(self, query, ddl=True)
if self.changed:
self.revoked[group].append(role)
return self.changed
def __check_membership(self, src_role, dst_role):
query = ("SELECT ARRAY(SELECT b.rolname FROM "
"pg_catalog.pg_auth_members m "
"JOIN pg_catalog.pg_roles b ON (m.roleid = b.oid) "
"WHERE m.member = r.oid) "
"FROM pg_catalog.pg_roles r "
"WHERE r.rolname = '%s'" % dst_role)
res = exec_sql(self, query, add_to_executed=False)
membership = []
if res:
membership = res[0][0]
if not membership:
return False
if src_role in membership:
return True
return False
def __check_roles_exist(self):
for group in self.groups:
if not self.__role_exists(group):
if self.fail_on_role:
self.module.fail_json(msg="Role %s does not exist" % group)
else:
self.module.warn("Role %s does not exist, pass" % group)
self.non_existent_roles.append(group)
for role in self.target_roles:
if not self.__role_exists(role):
if self.fail_on_role:
self.module.fail_json(msg="Role %s does not exist" % role)
else:
self.module.warn("Role %s does not exist, pass" % role)
if role not in self.groups:
self.non_existent_roles.append(role)
else:
if self.fail_on_role:
self.module.exit_json(msg="Role '%s' is a member of role '%s'" % (role, role))
else:
                        self.module.warn("Role '%s' is a member of role '%s', pass" % (role, role))
# Update role lists, excluding non existent roles:
self.groups = [g for g in self.groups if g not in self.non_existent_roles]
self.target_roles = [r for r in self.target_roles if r not in self.non_existent_roles]
def __role_exists(self, role):
return exec_sql(self, "SELECT 1 FROM pg_roles WHERE rolname = '%s'" % role, add_to_executed=False)
# ===========================================
# Module execution.
#
def main():
argument_spec = postgres_common_argument_spec()
argument_spec.update(
groups=dict(type='list', aliases=['group', 'source_role', 'source_roles']),
target_roles=dict(type='list', aliases=['target_role', 'user', 'users']),
fail_on_role=dict(type='bool', default=True),
state=dict(type='str', default='present', choices=['absent', 'present']),
db=dict(type='str', aliases=['login_db']),
session_role=dict(type='str'),
)
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True,
)
groups = module.params['groups']
target_roles = module.params['target_roles']
fail_on_role = module.params['fail_on_role']
state = module.params['state']
conn_params = get_conn_params(module, module.params)
db_connection = connect_to_db(module, conn_params, autocommit=False)
cursor = db_connection.cursor(cursor_factory=DictCursor)
##############
# Create the object and do main job:
pg_membership = PgMembership(module, cursor, groups, target_roles, fail_on_role)
if state == 'present':
pg_membership.grant()
elif state == 'absent':
pg_membership.revoke()
# Rollback if it's possible and check_mode:
if module.check_mode:
db_connection.rollback()
else:
db_connection.commit()
cursor.close()
db_connection.close()
# Make return values:
return_dict = dict(
changed=pg_membership.changed,
state=state,
groups=pg_membership.groups,
target_roles=pg_membership.target_roles,
queries=pg_membership.executed_queries,
)
if state == 'present':
return_dict['granted'] = pg_membership.granted
elif state == 'absent':
return_dict['revoked'] = pg_membership.revoked
module.exit_json(**return_dict)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 60,043 |
"Database name has not been passed, used default database to connect to."
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
postgresql_slot, postgresql_user
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.3
config file = /etc/ansible/ansible.cfg
configured module search path = ['~/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.4 (default, Jul 16 2019, 07:12:58) [GCC 9.1.0]
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
archlinux latest
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- postgresql_user:
name: user1
password: password
become: true
become_user: postgres
- postgresql_slot:
name: slot1
become: true
become_user: postgres
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Consistent warnings. Either both postgresql_user and postgresql_slot should give a warning or neither should.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
postgresql_slot gives warning "[WARNING]: Database name has not been passed, used default database to connect to." but postgresql_user doesn't, even though both of them have not specified a database.
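One way to make this consistent would be to emit the warning from the shared connection helper rather than from individual modules. The sketch below is illustrative only: `FakeModule`, `warn_if_default_db`, and the `warn_db_default` flag are hypothetical names, not the actual `ansible.module_utils.postgres` API.

```python
# Hypothetical sketch: centralize the "no db passed" warning in one shared
# helper so every postgresql_* module behaves the same way.

class FakeModule:
    """Minimal stand-in for AnsibleModule, for illustration only."""
    def __init__(self, params):
        self.params = params
        self.warnings = []

    def warn(self, msg):
        self.warnings.append(msg)


def warn_if_default_db(module, warn_db_default=True):
    # Emit the warning in exactly one shared place whenever the caller
    # did not pass an explicit database name.
    if warn_db_default and not module.params.get('db'):
        module.warn('Database name has not been passed, '
                    'used default database to connect to.')


module = FakeModule({'db': None, 'login_user': 'postgres'})
warn_if_default_db(module)
print(len(module.warnings))  # prints 1
```

A module that legitimately connects without a database (as postgresql_user arguably does) could then pass `warn_db_default=False` instead of silently diverging from the others.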
|
https://github.com/ansible/ansible/issues/60043
|
https://github.com/ansible/ansible/pull/60105
|
aecdfd397ee7e96d7f2b40e3da2cbd24b7bc3f94
|
d2cc9f5f06816935747638f5b4019b84d5932a51
| 2019-08-03T17:32:34Z |
python
| 2019-08-05T20:28:00Z |
lib/ansible/modules/database/postgresql/postgresql_ping.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2018, Andrew Klychkov (@Andersson007) <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = r'''
---
module: postgresql_ping
short_description: Check remote PostgreSQL server availability
description:
- Simple module to check remote PostgreSQL server availability.
version_added: '2.8'
options:
db:
description:
- Name of a database to connect to.
type: str
aliases:
- login_db
author:
- Andrew Klychkov (@Andersson007)
extends_documentation_fragment: postgres
'''
EXAMPLES = r'''
# PostgreSQL ping dbsrv server from the shell:
# ansible dbsrv -m postgresql_ping
# In the example below you need to generate certificates previously.
# See https://www.postgresql.org/docs/current/libpq-ssl.html for more information.
- name: PostgreSQL ping dbsrv server using non-default credentials and SSL
postgresql_ping:
db: protected_db
login_host: dbsrv
login_user: secret
login_password: secret_pass
ca_cert: /root/root.crt
ssl_mode: verify-full
'''
RETURN = r'''
is_available:
description: PostgreSQL server availability.
returned: always
type: bool
sample: true
server_version:
description: PostgreSQL server version.
returned: always
type: dict
sample: { major: 10, minor: 1 }
'''
try:
from psycopg2.extras import DictCursor
except ImportError:
# psycopg2 is checked by connect_to_db()
# from ansible.module_utils.postgres
pass
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.postgres import (
connect_to_db,
exec_sql,
get_conn_params,
postgres_common_argument_spec,
)
# ===========================================
# PostgreSQL module specific support methods.
#
class PgPing(object):
def __init__(self, module, cursor):
self.module = module
self.cursor = cursor
self.is_available = False
self.version = {}
def do(self):
self.get_pg_version()
return (self.is_available, self.version)
def get_pg_version(self):
query = "SELECT version()"
raw = exec_sql(self, query, add_to_executed=False)[0][0]
if raw:
self.is_available = True
raw = raw.split()[1].split('.')
self.version = dict(
major=int(raw[0]),
minor=int(raw[1]),
)
# ===========================================
# Module execution.
#
def main():
argument_spec = postgres_common_argument_spec()
argument_spec.update(
db=dict(type='str', aliases=['login_db']),
)
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True,
)
# Set some default values:
cursor = False
db_connection = False
result = dict(
changed=False,
is_available=False,
server_version=dict(),
)
conn_params = get_conn_params(module, module.params)
db_connection = connect_to_db(module, conn_params, fail_on_conn=False)
if db_connection is not None:
cursor = db_connection.cursor(cursor_factory=DictCursor)
# Do job:
pg_ping = PgPing(module, cursor)
if cursor:
# If connection established:
result["is_available"], result["server_version"] = pg_ping.do()
db_connection.rollback()
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 60,043 |
"Database name has not been passed, used default database to connect to."
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
postgresql_slot, postgresql_user
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.3
config file = /etc/ansible/ansible.cfg
configured module search path = ['~/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.4 (default, Jul 16 2019, 07:12:58) [GCC 9.1.0]
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
archlinux latest
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- postgresql_user:
name: user1
password: password
become: true
become_user: postgres
- postgresql_slot:
name: slot1
become: true
become_user: postgres
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Consistent warnings. Either both postgresql_user and postgresql_slot should give a warning or neither should.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
postgresql_slot gives warning "[WARNING]: Database name has not been passed, used default database to connect to." but postgresql_user doesn't, even though both of them have not specified a database.
|
https://github.com/ansible/ansible/issues/60043
|
https://github.com/ansible/ansible/pull/60105
|
aecdfd397ee7e96d7f2b40e3da2cbd24b7bc3f94
|
d2cc9f5f06816935747638f5b4019b84d5932a51
| 2019-08-03T17:32:34Z |
python
| 2019-08-05T20:28:00Z |
lib/ansible/modules/database/postgresql/postgresql_slot.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, John Scalia (@jscalia), Andrew Klychkov (@Andersson007) <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: postgresql_slot
short_description: Add or remove slots from a PostgreSQL database
description:
- Add or remove physical or logical slots from a PostgreSQL database.
version_added: '2.8'
options:
name:
description:
- Name of the slot to add or remove.
type: str
required: yes
aliases:
- slot_name
slot_type:
description:
- Slot type.
- For more information see
U(https://www.postgresql.org/docs/current/protocol-replication.html) and
U(https://www.postgresql.org/docs/current/logicaldecoding-explanation.html).
type: str
default: physical
choices: [ logical, physical ]
state:
description:
- The slot state.
- I(state=present) implies the slot must be present in the system.
    - I(state=absent) implies the slot must be removed if present.
type: str
default: present
choices: [ absent, present ]
immediately_reserve:
description:
    - Optional parameter that, when C(yes), specifies that the LSN for this replication slot be reserved
      immediately; otherwise the default, C(no), specifies that the LSN is reserved on the first connection
      from a streaming replication client.
    - Is available from PostgreSQL version 9.6.
    - Used only with I(slot_type=physical).
- Mutually exclusive with I(slot_type=logical).
type: bool
default: no
output_plugin:
description:
- All logical slots must indicate which output plugin decoder they're using.
- This parameter does not apply to physical slots.
- It will be ignored with I(slot_type=physical).
type: str
default: "test_decoding"
db:
description:
- Name of database to connect to.
type: str
aliases:
- login_db
session_role:
description:
- Switch to session_role after connecting.
The specified session_role must be a role that the current login_user is a member of.
- Permissions checking for SQL commands is carried out as though
the session_role were the one that had logged in originally.
type: str
notes:
- Physical replication slots were introduced to PostgreSQL with version 9.4,
while logical replication slots were added beginning with version 10.0.
author:
- John Scalia (@jscalia)
- Andrew Klychkov (@Andersson007)
extends_documentation_fragment: postgres
'''
EXAMPLES = r'''
- name: Create physical_one physical slot if doesn't exist
become_user: postgres
postgresql_slot:
slot_name: physical_one
db: ansible
- name: Remove physical_one slot if exists
become_user: postgres
postgresql_slot:
slot_name: physical_one
db: ansible
state: absent
- name: Create logical_one logical slot in the database acme if it doesn't exist
postgresql_slot:
name: logical_slot_one
slot_type: logical
state: present
output_plugin: custom_decoder_one
- name: Remove logical_one slot if exists from the cluster running on another host and non-standard port
postgresql_slot:
name: logical_one
login_host: mydatabase.example.org
port: 5433
login_user: ourSuperuser
login_password: thePassword
state: absent
'''
RETURN = r'''
name:
description: Name of the slot
returned: always
type: str
sample: "physical_one"
queries:
description: List of executed queries.
returned: always
  type: list
sample: [ "SELECT pg_create_physical_replication_slot('physical_one', False, False)" ]
'''
try:
from psycopg2.extras import DictCursor
except ImportError:
# psycopg2 is checked by connect_to_db()
# from ansible.module_utils.postgres
pass
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.postgres import (
connect_to_db,
exec_sql,
get_conn_params,
postgres_common_argument_spec,
)
# ===========================================
# PostgreSQL module specific support methods.
#
class PgSlot(object):
def __init__(self, module, cursor, name):
self.module = module
self.cursor = cursor
self.name = name
self.exists = False
self.kind = ''
self.__slot_exists()
self.changed = False
self.executed_queries = []
def create(self, kind='physical', immediately_reserve=False, output_plugin=False, just_check=False):
if self.exists:
if self.kind == kind:
return False
else:
self.module.warn("slot with name '%s' already exists "
"but has another type '%s'" % (self.name, self.kind))
return False
if just_check:
return None
if kind == 'physical':
            # Check server version (immediately_reserve requires 9.6+):
if self.cursor.connection.server_version < 96000:
query = "SELECT pg_create_physical_replication_slot('%s')" % self.name
else:
query = "SELECT pg_create_physical_replication_slot('%s', %s)" % (self.name, immediately_reserve)
elif kind == 'logical':
query = "SELECT pg_create_logical_replication_slot('%s', '%s')" % (self.name, output_plugin)
self.changed = exec_sql(self, query, ddl=True)
def drop(self):
if not self.exists:
return False
query = "SELECT pg_drop_replication_slot('%s')" % self.name
self.changed = exec_sql(self, query, ddl=True)
def __slot_exists(self):
query = "SELECT slot_type FROM pg_replication_slots WHERE slot_name = '%s'" % self.name
res = exec_sql(self, query, add_to_executed=False)
if res:
self.exists = True
self.kind = res[0][0]
# ===========================================
# Module execution.
#
def main():
argument_spec = postgres_common_argument_spec()
argument_spec.update(
db=dict(type="str", aliases=["login_db"]),
name=dict(type="str", aliases=["slot_name"]),
slot_type=dict(type="str", default="physical", choices=["logical", "physical"]),
immediately_reserve=dict(type="bool", default=False),
session_role=dict(type="str"),
output_plugin=dict(type="str", default="test_decoding"),
state=dict(type="str", default="present", choices=["absent", "present"]),
)
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True,
)
name = module.params["name"]
slot_type = module.params["slot_type"]
immediately_reserve = module.params["immediately_reserve"]
state = module.params["state"]
output_plugin = module.params["output_plugin"]
if immediately_reserve and slot_type == 'logical':
module.fail_json(msg="Module parameters immediately_reserve and slot_type=logical are mutually exclusive")
conn_params = get_conn_params(module, module.params)
db_connection = connect_to_db(module, conn_params, autocommit=True)
cursor = db_connection.cursor(cursor_factory=DictCursor)
##################################
# Create an object and do main job
pg_slot = PgSlot(module, cursor, name)
changed = False
if module.check_mode:
if state == "present":
if not pg_slot.exists:
changed = True
pg_slot.create(slot_type, immediately_reserve, output_plugin, just_check=True)
elif state == "absent":
if pg_slot.exists:
changed = True
else:
if state == "absent":
pg_slot.drop()
elif state == "present":
pg_slot.create(slot_type, immediately_reserve, output_plugin)
changed = pg_slot.changed
db_connection.close()
module.exit_json(changed=changed, name=name, queries=pg_slot.executed_queries)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 60,043 |
"Database name has not been passed, used default database to connect to."
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
postgresql_slot, postgresql_user
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.3
config file = /etc/ansible/ansible.cfg
configured module search path = ['~/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.4 (default, Jul 16 2019, 07:12:58) [GCC 9.1.0]
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
archlinux latest
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- postgresql_user:
name: user1
password: password
become: true
become_user: postgres
- postgresql_slot:
name: slot1
become: true
become_user: postgres
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Consistent warnings. Either both postgresql_user and postgresql_slot should give a warning or neither should.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
postgresql_slot gives warning "[WARNING]: Database name has not been passed, used default database to connect to." but postgresql_user doesn't, even though both of them have not specified a database.
|
https://github.com/ansible/ansible/issues/60043
|
https://github.com/ansible/ansible/pull/60105
|
aecdfd397ee7e96d7f2b40e3da2cbd24b7bc3f94
|
d2cc9f5f06816935747638f5b4019b84d5932a51
| 2019-08-03T17:32:34Z |
python
| 2019-08-05T20:28:00Z |
lib/ansible/modules/database/postgresql/postgresql_tablespace.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2017, Flavien Chantelot (@Dorn-)
# Copyright: (c) 2018, Antoine Levy-Lambert (@antoinell)
# Copyright: (c) 2019, Andrew Klychkov (@Andersson007) <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'supported_by': 'community',
'status': ['preview']
}
DOCUMENTATION = r'''
---
module: postgresql_tablespace
short_description: Add or remove PostgreSQL tablespaces from remote hosts
description:
- Adds or removes PostgreSQL tablespaces from remote hosts
U(https://www.postgresql.org/docs/current/sql-createtablespace.html),
U(https://www.postgresql.org/docs/current/manage-ag-tablespaces.html).
version_added: '2.8'
options:
tablespace:
description:
- Name of the tablespace to add or remove.
required: true
type: str
aliases:
- name
location:
description:
- Path to the tablespace directory in the file system.
- Ensure that the location exists and has right privileges.
type: path
aliases:
- path
state:
description:
- Tablespace state.
- I(state=present) implies the tablespace must be created if it doesn't exist.
- I(state=absent) implies the tablespace must be removed if present.
      I(state=absent) is mutually exclusive with I(location), I(owner), I(set).
- See the Notes section for information about check mode restrictions.
type: str
default: present
choices: [ absent, present ]
owner:
description:
- Name of the role to set as an owner of the tablespace.
- If this option is not specified, the tablespace owner is a role that creates the tablespace.
type: str
set:
description:
- Dict of tablespace options to set. Supported from PostgreSQL 9.0.
- For more information see U(https://www.postgresql.org/docs/current/sql-createtablespace.html).
- When reset is passed as an option's value, if the option was set previously, it will be removed
U(https://www.postgresql.org/docs/current/sql-altertablespace.html).
type: dict
  rename_to:
    description:
    - New name of the tablespace.
    - The new name cannot begin with pg_, as such names are reserved for system tablespaces.
    type: str
session_role:
description:
- Switch to session_role after connecting. The specified session_role must
be a role that the current login_user is a member of.
- Permissions checking for SQL commands is carried out as though
the session_role were the one that had logged in originally.
type: str
db:
description:
- Name of database to connect to and run queries against.
type: str
aliases:
- login_db
notes:
- I(state=absent) and I(state=present) (the second one if the tablespace doesn't exist) do not
support check mode because the corresponding PostgreSQL DROP and CREATE TABLESPACE commands
  cannot be run inside a transaction block.
author:
- Flavien Chantelot (@Dorn-)
- Antoine Levy-Lambert (@antoinell)
- Andrew Klychkov (@Andersson007)
extends_documentation_fragment: postgres
'''
EXAMPLES = r'''
- name: Create a new tablespace called acme and set bob as its owner
postgresql_tablespace:
name: acme
owner: bob
location: /data/foo
- name: Create a new tablespace called bar with tablespace options
postgresql_tablespace:
name: bar
set:
random_page_cost: 1
seq_page_cost: 1
- name: Reset random_page_cost option
postgresql_tablespace:
name: bar
set:
random_page_cost: reset
- name: Rename the tablespace from bar to pcie_ssd
postgresql_tablespace:
name: bar
rename_to: pcie_ssd
- name: Drop tablespace called bloat
postgresql_tablespace:
name: bloat
state: absent
'''
RETURN = r'''
queries:
  description: List of queries that were executed.
returned: always
  type: list
sample: [ "CREATE TABLESPACE bar LOCATION '/incredible/ssd'" ]
tablespace:
description: Tablespace name.
returned: always
type: str
sample: 'ssd'
owner:
description: Tablespace owner.
returned: always
type: str
sample: 'Bob'
options:
description: Tablespace options.
returned: always
type: dict
sample: { 'random_page_cost': 1, 'seq_page_cost': 1 }
location:
description: Path to the tablespace in the file system.
returned: always
type: str
sample: '/incredible/fast/ssd'
newname:
description: New tablespace name
returned: if existent
type: str
sample: new_ssd
state:
description: Tablespace state at the end of execution.
returned: always
type: str
sample: 'present'
'''
try:
from psycopg2 import __version__ as PSYCOPG2_VERSION
from psycopg2.extras import DictCursor
from psycopg2.extensions import ISOLATION_LEVEL_AUTOCOMMIT as AUTOCOMMIT
from psycopg2.extensions import ISOLATION_LEVEL_READ_COMMITTED as READ_COMMITTED
except ImportError:
# psycopg2 is checked by connect_to_db()
# from ansible.module_utils.postgres
pass
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.database import pg_quote_identifier
from ansible.module_utils.postgres import (
connect_to_db,
exec_sql,
get_conn_params,
postgres_common_argument_spec,
)
class PgTablespace(object):
"""Class for working with PostgreSQL tablespaces.
Args:
module (AnsibleModule) -- object of AnsibleModule class
cursor (cursor) -- cursor object of psycopg2 library
name (str) -- name of the tablespace
Attrs:
module (AnsibleModule) -- object of AnsibleModule class
cursor (cursor) -- cursor object of psycopg2 library
name (str) -- name of the tablespace
exists (bool) -- flag the tablespace exists in the DB or not
owner (str) -- tablespace owner
location (str) -- path to the tablespace directory in the file system
executed_queries (list) -- list of executed queries
new_name (str) -- new name for the tablespace
opt_not_supported (bool) -- flag indicates a tablespace option is supported or not
"""
def __init__(self, module, cursor, name):
self.module = module
self.cursor = cursor
self.name = name
self.exists = False
self.owner = ''
self.settings = {}
self.location = ''
self.executed_queries = []
self.new_name = ''
self.opt_not_supported = False
# Collect info:
self.get_info()
def get_info(self):
"""Get tablespace information."""
# Check that spcoptions exists:
opt = exec_sql(self, "SELECT 1 FROM information_schema.columns "
"WHERE table_name = 'pg_tablespace' "
"AND column_name = 'spcoptions'", add_to_executed=False)
# For 9.1 version and earlier:
location = exec_sql(self, "SELECT 1 FROM information_schema.columns "
"WHERE table_name = 'pg_tablespace' "
"AND column_name = 'spclocation'", add_to_executed=False)
if location:
location = 'spclocation'
else:
location = 'pg_tablespace_location(t.oid)'
if not opt:
self.opt_not_supported = True
query = ("SELECT r.rolname, (SELECT Null), %s "
"FROM pg_catalog.pg_tablespace AS t "
"JOIN pg_catalog.pg_roles AS r "
"ON t.spcowner = r.oid "
"WHERE t.spcname = '%s'" % (location, self.name))
else:
query = ("SELECT r.rolname, t.spcoptions, %s "
"FROM pg_catalog.pg_tablespace AS t "
"JOIN pg_catalog.pg_roles AS r "
"ON t.spcowner = r.oid "
"WHERE t.spcname = '%s'" % (location, self.name))
res = exec_sql(self, query, add_to_executed=False)
if not res:
self.exists = False
return False
if res[0][0]:
self.exists = True
self.owner = res[0][0]
if res[0][1]:
# Options exist:
for i in res[0][1]:
i = i.split('=')
self.settings[i[0]] = i[1]
if res[0][2]:
# Location exists:
self.location = res[0][2]
def create(self, location):
"""Create tablespace.
Return True if success, otherwise, return False.
args:
location (str) -- tablespace directory path in the FS
"""
query = ("CREATE TABLESPACE %s LOCATION '%s'" % (pg_quote_identifier(self.name, 'database'), location))
return exec_sql(self, query, ddl=True)
def drop(self):
"""Drop tablespace.
Return True if success, otherwise, return False.
"""
return exec_sql(self, "DROP TABLESPACE %s" % pg_quote_identifier(self.name, 'database'), ddl=True)
def set_owner(self, new_owner):
"""Set tablespace owner.
Return True if success, otherwise, return False.
args:
new_owner (str) -- name of a new owner for the tablespace"
"""
if new_owner == self.owner:
return False
query = "ALTER TABLESPACE %s OWNER TO %s" % (pg_quote_identifier(self.name, 'database'), new_owner)
return exec_sql(self, query, ddl=True)
def rename(self, newname):
"""Rename tablespace.
Return True if success, otherwise, return False.
args:
newname (str) -- new name for the tablespace"
"""
query = "ALTER TABLESPACE %s RENAME TO %s" % (pg_quote_identifier(self.name, 'database'), newname)
self.new_name = newname
return exec_sql(self, query, ddl=True)
def set_settings(self, new_settings):
"""Set tablespace settings (options).
If some setting has been changed, set changed = True.
After all settings list is handling, return changed.
args:
new_settings (list) -- list of new settings
"""
# settings must be a dict {'key': 'value'}
if self.opt_not_supported:
return False
changed = False
# Apply new settings:
for i in new_settings:
if new_settings[i] == 'reset':
if i in self.settings:
changed = self.__reset_setting(i)
self.settings[i] = None
elif (i not in self.settings) or (str(new_settings[i]) != self.settings[i]):
changed = self.__set_setting("%s = '%s'" % (i, new_settings[i]))
return changed
def __reset_setting(self, setting):
"""Reset tablespace setting.
Return True if success, otherwise, return False.
args:
setting (str) -- string in format "setting_name = 'setting_value'"
"""
query = "ALTER TABLESPACE %s RESET (%s)" % (pg_quote_identifier(self.name, 'database'), setting)
return exec_sql(self, query, ddl=True)
def __set_setting(self, setting):
"""Set tablespace setting.
Return True if success, otherwise, return False.
args:
setting (str) -- string in format "setting_name = 'setting_value'"
"""
query = "ALTER TABLESPACE %s SET (%s)" % (pg_quote_identifier(self.name, 'database'), setting)
return exec_sql(self, query, ddl=True)
# ===========================================
# Module execution.
#
def main():
argument_spec = postgres_common_argument_spec()
argument_spec.update(
tablespace=dict(type='str', aliases=['name']),
state=dict(type='str', default="present", choices=["absent", "present"]),
location=dict(type='path', aliases=['path']),
owner=dict(type='str'),
set=dict(type='dict'),
rename_to=dict(type='str'),
db=dict(type='str', aliases=['login_db']),
session_role=dict(type='str'),
)
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True,
)
tablespace = module.params["tablespace"]
state = module.params["state"]
location = module.params["location"]
owner = module.params["owner"]
rename_to = module.params["rename_to"]
settings = module.params["set"]
if state == 'absent' and (location or owner or rename_to or settings):
        module.fail_json(msg="state=absent is mutually exclusive with location, "
                             "owner, rename_to, and set")
conn_params = get_conn_params(module, module.params)
db_connection = connect_to_db(module, conn_params, autocommit=True)
cursor = db_connection.cursor(cursor_factory=DictCursor)
# Change autocommit to False if check_mode:
if module.check_mode:
if PSYCOPG2_VERSION >= '2.4.2':
db_connection.set_session(autocommit=False)
else:
db_connection.set_isolation_level(READ_COMMITTED)
# Set defaults:
autocommit = False
changed = False
##############
# Create PgTablespace object and do main job:
tblspace = PgTablespace(module, cursor, tablespace)
# If tablespace exists with different location, exit:
if tblspace.exists and location and location != tblspace.location:
module.fail_json(msg="Tablespace '%s' exists with different location '%s'" % (tblspace.name, tblspace.location))
# Create new tablespace:
if not tblspace.exists and state == 'present':
if rename_to:
module.fail_json(msg="Tablespace %s does not exist, nothing to rename" % tablespace)
if not location:
module.fail_json(msg="'location' parameter must be passed with "
"state=present if the tablespace doesn't exist")
# Because CREATE TABLESPACE can not be run inside the transaction block:
autocommit = True
if PSYCOPG2_VERSION >= '2.4.2':
db_connection.set_session(autocommit=True)
else:
db_connection.set_isolation_level(AUTOCOMMIT)
changed = tblspace.create(location)
# Drop non-existing tablespace:
elif not tblspace.exists and state == 'absent':
# Nothing to do:
module.fail_json(msg="Tries to drop nonexistent tablespace '%s'" % tblspace.name)
# Drop existing tablespace:
elif tblspace.exists and state == 'absent':
# Because DROP TABLESPACE can not be run inside the transaction block:
autocommit = True
if PSYCOPG2_VERSION >= '2.4.2':
db_connection.set_session(autocommit=True)
else:
db_connection.set_isolation_level(AUTOCOMMIT)
changed = tblspace.drop()
# Rename tablespace:
elif tblspace.exists and rename_to:
if tblspace.name != rename_to:
changed = tblspace.rename(rename_to)
if state == 'present':
# Refresh information:
tblspace.get_info()
# Change owner and settings:
if state == 'present' and tblspace.exists:
if owner:
changed = tblspace.set_owner(owner)
if settings:
changed = tblspace.set_settings(settings)
tblspace.get_info()
# Rollback if it's possible and check_mode:
if not autocommit:
if module.check_mode:
db_connection.rollback()
else:
db_connection.commit()
cursor.close()
db_connection.close()
# Make return values:
kw = dict(
changed=changed,
state='present',
tablespace=tblspace.name,
owner=tblspace.owner,
queries=tblspace.executed_queries,
options=tblspace.settings,
location=tblspace.location,
)
if state == 'present':
kw['state'] = 'present'
if tblspace.new_name:
kw['newname'] = tblspace.new_name
elif state == 'absent':
kw['state'] = 'absent'
module.exit_json(**kw)
if __name__ == '__main__':
main()
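One subtle pitfall in the code above: the `PSYCOPG2_VERSION >= '2.4.2'` checks compare version strings lexically, which misorders multi-digit components (for example, `'2.10.1'` sorts below `'2.4.2'`). A minimal sketch of a safer numeric comparison, using only the standard library (`version_tuple` is an illustrative helper, not part of the module):

```python
# Sketch: parse a dotted version string into a tuple of ints so that
# comparisons are numeric per component instead of character-by-character.

def version_tuple(version):
    """Convert a dotted version string like '2.10.1' into a tuple of ints."""
    return tuple(int(part) for part in version.split('.'))


# Plain string comparison gets multi-digit parts wrong ('1' < '4' lexically):
assert not ('2.10.1' >= '2.4.2')

# Tuple comparison orders numerically, as intended:
assert version_tuple('2.10.1') >= version_tuple('2.4.2')
```

For the versions the module actually gates on today the string comparison happens to work, but a tuple (or `LooseVersion`) comparison avoids surprises once a `2.10.x` release exists.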
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,952 |
VMware: vmware_guest throws AttributeError 'NoneType' object has no attribute 'uuid'
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Ansible throws an AttributeError exception. If I do not specify a network section, it will create the VM but complain about a mismatched number of NICs
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware_guest
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.3
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/ansible/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /bin/ansible
python version = 2.7.5 (default, Jun 11 2019, 12:19:05) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True
DEFAULT_PRIVATE_KEY_FILE(/etc/ansible/ansible.cfg) = /home/ansible/.ssh/id_rsa
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
PARAMIKO_HOST_KEY_AUTO_ADD(/etc/ansible/ansible.cfg) = True
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
vSphere 6.7.0
RHEL 7.6 kernel 3.10.0-957.21.3.el7.x86_64
pyvmomi 6.7.1.2018.12
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Run the following playbook
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
# tasks file for vmware-setup
- name: Create or update VM
vmware_guest:
hostname: "{{ vCenter.ip }}"
username: "{{ vCenter.username }}"
password: "{{ vCenter.password }}"
annotation: Provisioned and managed by Ansible
cluster: "{{ cluster_name }}"
datacenter: "{{ datacenter_name }}"
name: "{{ inventory_hostname }}"
folder: "{{ folder_name }}"
state: poweredon
template: "{{ template_name }}"
networks:
- name: "{{ network_name }}"
vlan: 400
ip: "{{ ansible_host }}"
netmask: "{{ network_netmask }}"
customization:
dns_servers: 10.22.148.45
domain: "{{ vCenter.domain }}"
hostname: "{{ inventory_hostname|lower }}"
validate_certs: no
delegate_to: localhost
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
New vm to be created and started
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
AttributeError thrown
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [vmware-setup : Create or update VM] *********************************************************************************************************************************************************************
task path: /workspace/ansible-fusion/roles/vmware-setup/tasks/main.yml:3
Using module file /usr/lib/python2.7/site-packages/ansible/modules/cloud/vmware/vmware_guest.py
Pipelining is enabled.
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: ansible
<localhost> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-swmzlhdjmcuoqfuzomgxoopqvglmlgub ; /usr/bin/python2'"'"' && sleep 0'
The full traceback is:
Traceback (most recent call last):
File "<stdin>", line 114, in <module>
File "<stdin>", line 106, in _ansiballz_main
File "<stdin>", line 49, in invoke_module
File "/tmp/ansible_vmware_guest_payload_frCA55/__main__.py", line 2669, in <module>
File "/tmp/ansible_vmware_guest_payload_frCA55/__main__.py", line 2658, in main
File "/tmp/ansible_vmware_guest_payload_frCA55/__main__.py", line 2175, in deploy_vm
File "/tmp/ansible_vmware_guest_payload_frCA55/__main__.py", line 1355, in configure_network
AttributeError: 'NoneType' object has no attribute 'uuid'
fatal: [FUSION-jenkins-slave-1 -> localhost]: FAILED! => {
"changed": false,
"module_stderr": "Traceback (most recent call last):\n File \"<stdin>\", line 114, in <module>\n File \"<stdin>\", line 106, in _ansiballz_main\n File \"<stdin>\", line 49, in invoke_module\n File \"/tmp/ansible_vmware_guest_payload_frCA55/__main__.py\", line 2669, in <module>\n File \"/tmp/ansible_vmware_guest_payload_frCA55/__main__.py\", line 2658, in main\n File \"/tmp/ansible_vmware_guest_payload_frCA55/__main__.py\", line 2175, in deploy_vm\n File \"/tmp/ansible_vmware_guest_payload_frCA55/__main__.py\", line 1355, in configure_network\nAttributeError: 'NoneType' object has no attribute 'uuid'\n",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
```
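The traceback points at `configure_network` dereferencing `.uuid` on a portgroup lookup that returned `None`. A minimal sketch of the kind of defensive guard such a fix adds; `PortGroup` and `find_portgroup_by_name` are illustrative stand-ins, not the module's or pyVmomi's actual APIs:

```python
# Illustrative sketch only: shows the failure mode and a defensive guard.
# `PortGroup` and `find_portgroup_by_name` are stand-ins, not pyVmomi APIs.

class PortGroup:
    def __init__(self, name, uuid):
        self.name = name
        self.uuid = uuid


def find_portgroup_by_name(portgroups, name):
    """Return the first matching portgroup, or None if the name is unknown."""
    for pg in portgroups:
        if pg.name == name:
            return pg
    return None


def portgroup_uuid(portgroups, name):
    pg = find_portgroup_by_name(portgroups, name)
    if pg is None:
        # Fail with a clear message instead of AttributeError on None.uuid
        raise ValueError("Unable to find distributed portgroup %r" % name)
    return pg.uuid


pgs = [PortGroup('VM Network', 'uuid-123')]
assert portgroup_uuid(pgs, 'VM Network') == 'uuid-123'
```

Checking the lookup result before touching its attributes turns an opaque `MODULE FAILURE` into an actionable error naming the missing portgroup.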
|
https://github.com/ansible/ansible/issues/59952
|
https://github.com/ansible/ansible/pull/60052
|
b101fda4c6acce465619421fd6710c0514a8f2f7
|
2a1393e0e1a67e5d4ef86aa644bda0a908329cf6
| 2019-08-01T19:57:56Z |
python
| 2019-08-06T05:08:50Z |
changelogs/fragments/59952-vmware_guest-check_dvs.yml
| |
lib/ansible/modules/cloud/vmware/vmware_guest.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# This module is also sponsored by E.T.A.I. (www.etai.fr)
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: vmware_guest
short_description: Manages virtual machines in vCenter
description: >
This module can be used to create new virtual machines from templates or other virtual machines,
manage power state of virtual machine such as power on, power off, suspend, shutdown, reboot, restart etc.,
modify various virtual machine components like network, disk, customization etc.,
rename a virtual machine and remove a virtual machine with associated components.
version_added: '2.2'
author:
- Loic Blot (@nerzhul) <[email protected]>
- Philippe Dellaert (@pdellaert) <[email protected]>
- Abhijeet Kasurde (@Akasurde) <[email protected]>
requirements:
- python >= 2.6
- PyVmomi
notes:
- Please make sure that the user used for vmware_guest has the correct level of privileges.
- For example, following is the list of minimum privileges required by users to create virtual machines.
- " DataStore > Allocate Space"
- " Virtual Machine > Configuration > Add New Disk"
- " Virtual Machine > Configuration > Add or Remove Device"
- " Virtual Machine > Inventory > Create New"
- " Network > Assign Network"
- " Resource > Assign Virtual Machine to Resource Pool"
- "Module may require additional privileges as well, which may be required for gathering facts - e.g. ESXi configurations."
- Tested on vSphere 5.5, 6.0, 6.5 and 6.7
- Use SCSI disks instead of IDE when you want to expand online disks by specifying a SCSI controller
- "For additional information please visit Ansible VMware community wiki - U(https://github.com/ansible/community/wiki/VMware)."
options:
state:
description:
- Specify the state the virtual machine should be in.
- 'If C(state) is set to C(present) and virtual machine exists, ensure the virtual machine
configurations conforms to task arguments.'
- 'If C(state) is set to C(absent) and virtual machine exists, then the specified virtual machine
is removed with its associated components.'
- 'If C(state) is set to one of the following C(poweredon), C(poweredoff), C(present), C(restarted), C(suspended)
and virtual machine does not exist, then virtual machine is deployed with given parameters.'
- 'If C(state) is set to C(poweredon) and virtual machine exists with powerstate other than powered on,
then the specified virtual machine is powered on.'
- 'If C(state) is set to C(poweredoff) and virtual machine exists with powerstate other than powered off,
then the specified virtual machine is powered off.'
- 'If C(state) is set to C(restarted) and virtual machine exists, then the virtual machine is restarted.'
- 'If C(state) is set to C(suspended) and virtual machine exists, then the virtual machine is set to suspended mode.'
- 'If C(state) is set to C(shutdownguest) and virtual machine exists, then the virtual machine is shutdown.'
- 'If C(state) is set to C(rebootguest) and virtual machine exists, then the virtual machine is rebooted.'
default: present
choices: [ present, absent, poweredon, poweredoff, restarted, suspended, shutdownguest, rebootguest ]
name:
description:
- Name of the virtual machine to work with.
- Virtual machine names in vCenter are not necessarily unique, which may be problematic, see C(name_match).
- 'If multiple virtual machines with same name exists, then C(folder) is required parameter to
identify uniqueness of the virtual machine.'
- This parameter is required, if C(state) is set to C(poweredon), C(poweredoff), C(present), C(restarted), C(suspended)
and virtual machine does not exist.
- This parameter is case sensitive.
required: yes
name_match:
description:
- If multiple virtual machines matching the name, use the first or last found.
default: 'first'
choices: [ first, last ]
uuid:
description:
- UUID of the virtual machine to manage if known, this is VMware's unique identifier.
- This is required if C(name) is not supplied.
- If virtual machine does not exist, then this parameter is ignored.
- Please note that a supplied UUID will be ignored on virtual machine creation, as VMware creates the UUID internally.
use_instance_uuid:
description:
- Whether to use the VMware instance UUID rather than the BIOS UUID.
default: no
type: bool
version_added: '2.8'
template:
description:
- Template or existing virtual machine used to create new virtual machine.
- If this value is not set, virtual machine is created without using a template.
- If the virtual machine already exists, this parameter will be ignored.
- This parameter is case sensitive.
- You can also specify template or VM UUID for identifying source. version_added 2.8. Use C(hw_product_uuid) from M(vmware_guest_facts) as UUID value.
- From version 2.8 onwards, absolute path to virtual machine or template can be used.
aliases: [ 'template_src' ]
is_template:
description:
- Flag the instance as a template.
- This will mark the given virtual machine as template.
default: 'no'
type: bool
version_added: '2.3'
folder:
description:
- Destination folder, absolute path to find an existing guest or create the new guest.
- The folder should include the datacenter. ESX's datacenter is ha-datacenter.
- This parameter is case sensitive.
- This parameter is required, while deploying new virtual machine. version_added 2.5.
- 'If multiple machines are found with same name, this parameter is used to identify
uniqueness of the virtual machine. version_added 2.5'
- 'Examples:'
- ' folder: /ha-datacenter/vm'
- ' folder: ha-datacenter/vm'
- ' folder: /datacenter1/vm'
- ' folder: datacenter1/vm'
- ' folder: /datacenter1/vm/folder1'
- ' folder: datacenter1/vm/folder1'
- ' folder: /folder1/datacenter1/vm'
- ' folder: folder1/datacenter1/vm'
- ' folder: /folder1/datacenter1/vm/folder2'
hardware:
description:
- Manage virtual machine's hardware attributes.
- All parameters case sensitive.
- 'Valid attributes are:'
- ' - C(hotadd_cpu) (boolean): Allow virtual CPUs to be added while the virtual machine is running.'
- ' - C(hotremove_cpu) (boolean): Allow virtual CPUs to be removed while the virtual machine is running.
version_added: 2.5'
- ' - C(hotadd_memory) (boolean): Allow memory to be added while the virtual machine is running.'
- ' - C(memory_mb) (integer): Amount of memory in MB.'
- ' - C(nested_virt) (bool): Enable nested virtualization. version_added: 2.5'
- ' - C(num_cpus) (integer): Number of CPUs.'
- ' - C(num_cpu_cores_per_socket) (integer): Number of Cores Per Socket.'
- " C(num_cpus) must be a multiple of C(num_cpu_cores_per_socket).
For example to create a VM with 2 sockets of 4 cores, specify C(num_cpus): 8 and C(num_cpu_cores_per_socket): 4"
- ' - C(scsi) (string): Valid values are C(buslogic), C(lsilogic), C(lsilogicsas) and C(paravirtual) (default).'
- " - C(memory_reservation_lock) (boolean): If set true, memory resource reservation for the virtual machine
will always be equal to the virtual machine's memory size. version_added: 2.5"
- ' - C(max_connections) (integer): Maximum number of active remote display connections for the virtual machines.
version_added: 2.5.'
- ' - C(mem_limit) (integer): The memory utilization of a virtual machine will not exceed this limit. Unit is MB.
version_added: 2.5'
- ' - C(mem_reservation) (integer): The amount of memory resource that is guaranteed available to the virtual
machine. Unit is MB. C(memory_reservation) is alias to this. version_added: 2.5'
- ' - C(cpu_limit) (integer): The CPU utilization of a virtual machine will not exceed this limit. Unit is MHz.
version_added: 2.5'
- ' - C(cpu_reservation) (integer): The amount of CPU resource that is guaranteed available to the virtual machine.
Unit is MHz. version_added: 2.5'
- ' - C(version) (integer): The Virtual machine hardware versions. Default is 10 (ESXi 5.5 and onwards).
Please check VMware documentation for correct virtual machine hardware version.
Incorrect hardware version may lead to failure in deployment. If hardware version is already equal to the given
version then no action is taken. version_added: 2.6'
- ' - C(boot_firmware) (string): Choose which firmware should be used to boot the virtual machine.
Allowed values are "bios" and "efi". version_added: 2.7'
- ' - C(virt_based_security) (bool): Enable Virtualization Based Security feature for Windows 10.
(Support from Virtual machine hardware version 14, Guest OS Windows 10 64 bit, Windows Server 2016)'
guest_id:
description:
- Set the guest ID.
- This parameter is case sensitive.
- 'Examples:'
- " virtual machine with RHEL7 64 bit, will be 'rhel7_64Guest'"
- " virtual machine with CentOS 64 bit, will be 'centos64Guest'"
- " virtual machine with Ubuntu 64 bit, will be 'ubuntu64Guest'"
- This field is required when creating a virtual machine, not required when creating from the template.
- >
Valid values are referenced here:
U(https://code.vmware.com/apis/358/vsphere#/doc/vim.vm.GuestOsDescriptor.GuestOsIdentifier.html)
version_added: '2.3'
disk:
description:
- A list of disks to add.
- This parameter is case sensitive.
- Shrinking disks is not supported.
- Removing existing disks of the virtual machine is not supported.
- 'Valid attributes are:'
- ' - C(size_[tb,gb,mb,kb]) (integer): Disk storage size in specified unit.'
- ' - C(type) (string): Valid values are:'
- ' - C(thin) thin disk'
- ' - C(eagerzeroedthick) eagerzeroedthick disk, added in version 2.5'
- ' Default: C(None) thick disk, no eagerzero.'
- ' - C(datastore) (string): The name of datastore which will be used for the disk. If C(autoselect_datastore) is set to True,
then will select the less used datastore whose name contains this "disk.datastore" string.'
- ' - C(filename) (string): Existing disk image to be used. Filename must already exist on the datastore.'
- ' Specify filename string in C([datastore_name] path/to/file.vmdk) format. Added in version 2.8.'
- ' - C(autoselect_datastore) (bool): select the less used datastore. "disk.datastore" and "disk.autoselect_datastore"
will not be used if C(datastore) is specified outside this C(disk) configuration.'
- ' - C(disk_mode) (string): Type of disk mode. Added in version 2.6'
- ' - Available options are :'
- ' - C(persistent): Changes are immediately and permanently written to the virtual disk. This is default.'
- ' - C(independent_persistent): Same as persistent, but not affected by snapshots.'
- ' - C(independent_nonpersistent): Changes to virtual disk are made to a redo log and discarded at power off, but not affected by snapshots.'
cdrom:
description:
- A CD-ROM configuration for the virtual machine.
- 'Valid attributes are:'
- ' - C(type) (string): The type of CD-ROM, valid options are C(none), C(client) or C(iso). With C(none) the CD-ROM will be disconnected but present.'
- ' - C(iso_path) (string): The datastore path to the ISO file to use, in the form of C([datastore1] path/to/file.iso). Required if type is set C(iso).'
version_added: '2.5'
resource_pool:
description:
- Use the given resource pool for virtual machine operation.
- This parameter is case sensitive.
- Resource pool should be child of the selected host parent.
version_added: '2.3'
wait_for_ip_address:
description:
- Wait until vCenter detects an IP address for the virtual machine.
- This requires vmware-tools (vmtoolsd) to properly work after creation.
- "vmware-tools needs to be installed on the given virtual machine in order to work with this parameter."
default: 'no'
type: bool
wait_for_customization:
description:
- Wait until vCenter detects all guest customizations as successfully completed.
- When enabled, the VM will automatically be powered on.
default: 'no'
type: bool
version_added: '2.8'
state_change_timeout:
description:
- If the C(state) is set to C(shutdownguest), by default the module will return immediately after sending the shutdown signal.
- If this argument is set to a positive integer, the module will instead wait for the virtual machine to reach the poweredoff state.
- The value sets a timeout in seconds for the module to wait for the state change.
default: 0
version_added: '2.6'
snapshot_src:
description:
- Name of the existing snapshot to use to create a clone of a virtual machine.
- This parameter is case sensitive.
- While creating linked clone using C(linked_clone) parameter, this parameter is required.
version_added: '2.4'
linked_clone:
description:
- Whether to create a linked clone from the snapshot specified.
- If specified, then C(snapshot_src) is required parameter.
default: 'no'
type: bool
version_added: '2.4'
force:
description:
- Ignore warnings and complete the actions.
- This parameter is useful while removing virtual machine which is powered on state.
- 'This module reflects the VMware vCenter API and UI workflow, as such, in some cases the `force` flag will
be mandatory to perform the action to ensure you are certain the action has to be taken, no matter what the consequence.
This is specifically the case for removing a powered on the virtual machine when C(state) is set to C(absent).'
default: 'no'
type: bool
datacenter:
description:
- Destination datacenter for the deploy operation.
- This parameter is case sensitive.
default: ha-datacenter
cluster:
description:
- The cluster name where the virtual machine will run.
- This is a required parameter, if C(esxi_hostname) is not set.
- C(esxi_hostname) and C(cluster) are mutually exclusive parameters.
- This parameter is case sensitive.
version_added: '2.3'
esxi_hostname:
description:
- The ESXi hostname where the virtual machine will run.
- This is a required parameter, if C(cluster) is not set.
- C(esxi_hostname) and C(cluster) are mutually exclusive parameters.
- This parameter is case sensitive.
annotation:
description:
- A note or annotation to include in the virtual machine.
version_added: '2.3'
customvalues:
description:
- Define a list of custom values to set on virtual machine.
- A custom value object takes two fields C(key) and C(value).
- Incorrect key and values will be ignored.
version_added: '2.3'
networks:
description:
- A list of networks (in the order of the NICs).
- Removing NICs is not allowed, while reconfiguring the virtual machine.
- All parameters and VMware object names are case sensitive.
- 'One of the below parameters is required per entry:'
- ' - C(name) (string): Name of the portgroup or distributed virtual portgroup for this interface.
When specifying distributed virtual portgroup make sure given C(esxi_hostname) or C(cluster) is associated with it.'
- ' - C(vlan) (integer): VLAN number for this interface.'
- 'Optional parameters per entry (used for virtual hardware):'
- ' - C(device_type) (string): Virtual network device (one of C(e1000), C(e1000e), C(pcnet32), C(vmxnet2), C(vmxnet3) (default), C(sriov)).'
- ' - C(mac) (string): Customize MAC address.'
- ' - C(dvswitch_name) (string): Name of the distributed vSwitch.
This value is required if multiple distributed portgroups exists with the same name. version_added 2.7'
- ' - C(start_connected) (bool): Indicates that virtual network adapter starts with associated virtual machine powers on. version_added: 2.5'
- 'Optional parameters per entry (used for OS customization):'
- ' - C(type) (string): Type of IP assignment (either C(dhcp) or C(static)). C(dhcp) is default.'
- ' - C(ip) (string): Static IP address (implies C(type: static)).'
- ' - C(netmask) (string): Static netmask required for C(ip).'
- ' - C(gateway) (string): Static gateway.'
- ' - C(dns_servers) (string): DNS servers for this network interface (Windows).'
- ' - C(domain) (string): Domain name for this network interface (Windows).'
- ' - C(wake_on_lan) (bool): Indicates if wake-on-LAN is enabled on this virtual network adapter. version_added: 2.5'
- ' - C(allow_guest_control) (bool): Enables guest control over whether the connectable device is connected. version_added: 2.5'
version_added: '2.3'
customization:
description:
- Parameters for OS customization when cloning from the template or the virtual machine, or apply to the existing virtual machine directly.
- Not all operating systems are supported for customization with respective vCenter version,
please check VMware documentation for respective OS customization.
- For supported customization operating system matrix, (see U(http://partnerweb.vmware.com/programs/guestOS/guest-os-customization-matrix.pdf))
- All parameters and VMware object names are case sensitive.
- Linux based OSes requires Perl package to be installed for OS customizations.
- 'Common parameters (Linux/Windows):'
- ' - C(existing_vm) (bool): If set to C(True), do OS customization on the specified virtual machine directly.
If set to C(False) or not specified, do OS customization when cloning from the template or the virtual machine. version_added: 2.8'
- ' - C(dns_servers) (list): List of DNS servers to configure.'
- ' - C(dns_suffix) (list): List of domain suffixes, also known as DNS search path (default: C(domain) parameter).'
- ' - C(domain) (string): DNS domain name to use.'
- ' - C(hostname) (string): Computer hostname (default: shortened C(name) parameter). Allowed characters are alphanumeric (uppercase and lowercase)
      and minus, rest of the characters are dropped as per RFC 952.'
- 'Parameters related to Linux customization:'
- ' - C(timezone) (string): Timezone (See List of supported time zones for different vSphere versions in Linux/Unix
systems (2145518) U(https://kb.vmware.com/s/article/2145518)). version_added: 2.9'
- ' - C(hwclockUTC) (bool): Specifies whether the hardware clock is in UTC or local time.
True when the hardware clock is in UTC, False when the hardware clock is in local time. version_added: 2.9'
- 'Parameters related to Windows customization:'
- ' - C(autologon) (bool): Auto logon after virtual machine customization (default: False).'
- ' - C(autologoncount) (int): Number of autologon after reboot (default: 1).'
- ' - C(domainadmin) (string): User used to join in AD domain (mandatory with C(joindomain)).'
- ' - C(domainadminpassword) (string): Password used to join in AD domain (mandatory with C(joindomain)).'
- ' - C(fullname) (string): Server owner name (default: Administrator).'
- ' - C(joindomain) (string): AD domain to join (Not compatible with C(joinworkgroup)).'
- ' - C(joinworkgroup) (string): Workgroup to join (Not compatible with C(joindomain), default: WORKGROUP).'
- ' - C(orgname) (string): Organisation name (default: ACME).'
- ' - C(password) (string): Local administrator password.'
- ' - C(productid) (string): Product ID.'
- ' - C(runonce) (list): List of commands to run at first user logon.'
- ' - C(timezone) (int): Timezone (See U(https://msdn.microsoft.com/en-us/library/ms912391.aspx)).'
version_added: '2.3'
vapp_properties:
description:
- A list of vApp properties
- 'For full list of attributes and types refer to: U(https://github.com/vmware/pyvmomi/blob/master/docs/vim/vApp/PropertyInfo.rst)'
- 'Basic attributes are:'
- ' - C(id) (string): Property id - required.'
- ' - C(value) (string): Property value.'
- ' - C(type) (string): Value type, string type by default.'
- ' - C(operation): C(remove): This attribute is required only when removing properties.'
version_added: '2.6'
customization_spec:
description:
- Unique name identifying the requested customization specification.
- This parameter is case sensitive.
- If set, then overrides C(customization) parameter values.
version_added: '2.6'
datastore:
description:
- Specify datastore or datastore cluster to provision virtual machine.
- 'This parameter takes precedence over "disk.datastore" parameter.'
- 'This parameter can be used to override datastore or datastore cluster setting of the virtual machine when deployed
from the template.'
- Please see example for more usage.
version_added: '2.7'
convert:
description:
- Specify convert disk type while cloning template or virtual machine.
choices: [ thin, thick, eagerzeroedthick ]
version_added: '2.8'
extends_documentation_fragment: vmware.documentation
'''
EXAMPLES = r'''
- name: Create a virtual machine on given ESXi hostname
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
folder: /DC1/vm/
name: test_vm_0001
state: poweredon
guest_id: centos64Guest
# This is hostname of particular ESXi server on which user wants VM to be deployed
esxi_hostname: "{{ esxi_hostname }}"
disk:
- size_gb: 10
type: thin
datastore: datastore1
hardware:
memory_mb: 512
num_cpus: 4
scsi: paravirtual
networks:
- name: VM Network
mac: aa:bb:dd:aa:00:14
ip: 10.10.10.100
netmask: 255.255.255.0
device_type: vmxnet3
wait_for_ip_address: yes
delegate_to: localhost
register: deploy_vm
- name: Create a virtual machine from a template
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
folder: /testvms
name: testvm_2
state: poweredon
template: template_el7
disk:
- size_gb: 10
type: thin
datastore: g73_datastore
hardware:
memory_mb: 512
num_cpus: 6
num_cpu_cores_per_socket: 3
scsi: paravirtual
memory_reservation_lock: True
mem_limit: 8096
mem_reservation: 4096
cpu_limit: 8096
cpu_reservation: 4096
max_connections: 5
hotadd_cpu: True
hotremove_cpu: True
hotadd_memory: False
version: 12 # Hardware version of virtual machine
boot_firmware: "efi"
cdrom:
type: iso
iso_path: "[datastore1] livecd.iso"
networks:
- name: VM Network
mac: aa:bb:dd:aa:00:14
wait_for_ip_address: yes
delegate_to: localhost
register: deploy
- name: Clone a virtual machine from Windows template and customize
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
datacenter: datacenter1
cluster: cluster
name: testvm-2
template: template_windows
networks:
- name: VM Network
ip: 192.168.1.100
netmask: 255.255.255.0
gateway: 192.168.1.1
mac: aa:bb:dd:aa:00:14
domain: my_domain
dns_servers:
- 192.168.1.1
- 192.168.1.2
- vlan: 1234
type: dhcp
customization:
autologon: yes
dns_servers:
- 192.168.1.1
- 192.168.1.2
domain: my_domain
password: new_vm_password
runonce:
- powershell.exe -ExecutionPolicy Unrestricted -File C:\Windows\Temp\ConfigureRemotingForAnsible.ps1 -ForceNewSSLCert -EnableCredSSP
delegate_to: localhost
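
# An illustrative sketch of joining a Windows guest to an AD domain during
# customization; the domain name and credentials below are placeholders.
# Per the documentation, 'domainadmin' and 'domainadminpassword' are mandatory
# together with 'joindomain'.
- name: Clone a virtual machine from Windows template and join an AD domain
  vmware_guest:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: no
    datacenter: datacenter1
    cluster: cluster
    name: testvm-3
    template: template_windows
    customization:
      joindomain: my_domain.local
      domainadmin: administrator@my_domain.local
      domainadminpassword: "{{ domain_admin_password }}"
  delegate_to: localhost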
- name: Clone a virtual machine from Linux template and customize
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
datacenter: "{{ datacenter }}"
state: present
folder: /DC1/vm
template: "{{ template }}"
name: "{{ vm_name }}"
cluster: DC1_C1
networks:
- name: VM Network
ip: 192.168.10.11
netmask: 255.255.255.0
wait_for_ip_address: True
customization:
domain: "{{ guest_domain }}"
dns_servers:
- 8.9.9.9
- 7.8.8.9
dns_suffix:
- example.com
- example2.com
delegate_to: localhost
- name: Rename a virtual machine (requires the virtual machine's uuid)
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
uuid: "{{ vm_uuid }}"
name: new_name
state: present
delegate_to: localhost
- name: Remove a virtual machine by uuid
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
uuid: "{{ vm_uuid }}"
state: absent
delegate_to: localhost
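
# An illustrative sketch of the 'convert' parameter, which changes the disk
# provisioning type while cloning; the template and VM names are placeholders.
- name: Clone a virtual machine from a template and convert disks to thin provisioning
  vmware_guest:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: no
    datacenter: "{{ datacenter }}"
    name: cloned_vm_thin
    template: "{{ template_name }}"
    state: poweredon
    convert: thin
  delegate_to: localhost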
- name: Manipulate vApp properties
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
name: vm_name
state: present
vapp_properties:
- id: remoteIP
category: Backup
label: Backup server IP
type: str
value: 10.10.10.1
- id: old_property
operation: remove
delegate_to: localhost
- name: Set powerstate of a virtual machine to poweroff by using UUID
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
uuid: "{{ vm_uuid }}"
state: poweredoff
delegate_to: localhost
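
# An illustrative sketch of attaching the client device as the VM's CD-ROM
# instead of an ISO; the VM name is a placeholder. Valid cdrom types are
# 'none', 'client' and 'iso'.
- name: Attach a client device CD-ROM to an existing virtual machine
  vmware_guest:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: no
    name: testvm-2
    state: present
    cdrom:
      type: client
  delegate_to: localhost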
- name: Deploy a virtual machine in a datastore different from the datastore of the template
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: "{{ vm_name }}"
state: present
template: "{{ template_name }}"
# Here datastore can be different which holds template
datastore: "{{ virtual_machine_datastore }}"
hardware:
memory_mb: 512
num_cpus: 2
scsi: paravirtual
delegate_to: localhost
'''
RETURN = r'''
instance:
description: metadata about the new virtual machine
returned: always
type: dict
sample: None
'''
import re
import time
import string
HAS_PYVMOMI = False
try:
from pyVmomi import vim, vmodl, VmomiSupport
HAS_PYVMOMI = True
except ImportError:
pass
from random import randint
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.network import is_mac
from ansible.module_utils._text import to_text, to_native
from ansible.module_utils.vmware import (find_obj, gather_vm_facts, get_all_objs,
compile_folder_path_for_object, serialize_spec,
vmware_argument_spec, set_vm_power_state, PyVmomi,
find_dvs_by_name, find_dvspg_by_name, wait_for_vm_ip,
wait_for_task, TaskError)
class PyVmomiDeviceHelper(object):
""" This class is a helper to create easily VMware Objects for PyVmomiHelper """
def __init__(self, module):
self.module = module
self.next_disk_unit_number = 0
self.scsi_device_type = {
'lsilogic': vim.vm.device.VirtualLsiLogicController,
'paravirtual': vim.vm.device.ParaVirtualSCSIController,
'buslogic': vim.vm.device.VirtualBusLogicController,
'lsilogicsas': vim.vm.device.VirtualLsiLogicSASController,
}
def create_scsi_controller(self, scsi_type):
scsi_ctl = vim.vm.device.VirtualDeviceSpec()
scsi_ctl.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
scsi_device = self.scsi_device_type.get(scsi_type, vim.vm.device.ParaVirtualSCSIController)
scsi_ctl.device = scsi_device()
scsi_ctl.device.busNumber = 0
# While creating a new SCSI controller, temporary key value
# should be unique negative integers
scsi_ctl.device.key = -randint(1000, 9999)
scsi_ctl.device.hotAddRemove = True
scsi_ctl.device.sharedBus = 'noSharing'
scsi_ctl.device.scsiCtlrUnitNumber = 7
return scsi_ctl
def is_scsi_controller(self, device):
return isinstance(device, tuple(self.scsi_device_type.values()))
@staticmethod
def create_ide_controller():
ide_ctl = vim.vm.device.VirtualDeviceSpec()
ide_ctl.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
ide_ctl.device = vim.vm.device.VirtualIDEController()
ide_ctl.device.deviceInfo = vim.Description()
# While creating a new IDE controller, temporary key value
# should be unique negative integers
ide_ctl.device.key = -randint(200, 299)
ide_ctl.device.busNumber = 0
return ide_ctl
@staticmethod
def create_cdrom(ide_ctl, cdrom_type, iso_path=None):
cdrom_spec = vim.vm.device.VirtualDeviceSpec()
cdrom_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
cdrom_spec.device = vim.vm.device.VirtualCdrom()
cdrom_spec.device.controllerKey = ide_ctl.device.key
cdrom_spec.device.key = -1
cdrom_spec.device.connectable = vim.vm.device.VirtualDevice.ConnectInfo()
cdrom_spec.device.connectable.allowGuestControl = True
cdrom_spec.device.connectable.startConnected = (cdrom_type != "none")
if cdrom_type in ["none", "client"]:
cdrom_spec.device.backing = vim.vm.device.VirtualCdrom.RemotePassthroughBackingInfo()
elif cdrom_type == "iso":
cdrom_spec.device.backing = vim.vm.device.VirtualCdrom.IsoBackingInfo(fileName=iso_path)
return cdrom_spec
@staticmethod
def is_equal_cdrom(vm_obj, cdrom_device, cdrom_type, iso_path):
if cdrom_type == "none":
return (isinstance(cdrom_device.backing, vim.vm.device.VirtualCdrom.RemotePassthroughBackingInfo) and
cdrom_device.connectable.allowGuestControl and
not cdrom_device.connectable.startConnected and
(vm_obj.runtime.powerState != vim.VirtualMachinePowerState.poweredOn or not cdrom_device.connectable.connected))
elif cdrom_type == "client":
return (isinstance(cdrom_device.backing, vim.vm.device.VirtualCdrom.RemotePassthroughBackingInfo) and
cdrom_device.connectable.allowGuestControl and
cdrom_device.connectable.startConnected and
(vm_obj.runtime.powerState != vim.VirtualMachinePowerState.poweredOn or cdrom_device.connectable.connected))
elif cdrom_type == "iso":
return (isinstance(cdrom_device.backing, vim.vm.device.VirtualCdrom.IsoBackingInfo) and
cdrom_device.backing.fileName == iso_path and
cdrom_device.connectable.allowGuestControl and
cdrom_device.connectable.startConnected and
(vm_obj.runtime.powerState != vim.VirtualMachinePowerState.poweredOn or cdrom_device.connectable.connected))
def create_scsi_disk(self, scsi_ctl, disk_index=None):
diskspec = vim.vm.device.VirtualDeviceSpec()
diskspec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
diskspec.device = vim.vm.device.VirtualDisk()
diskspec.device.backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
diskspec.device.controllerKey = scsi_ctl.device.key
        # unit number 7 is reserved for the SCSI controller and must not be used
        if self.next_disk_unit_number == 7 or disk_index == 7:
            raise AssertionError()
        # Configure the disk unit number.
if disk_index is not None:
diskspec.device.unitNumber = disk_index
self.next_disk_unit_number = disk_index + 1
else:
diskspec.device.unitNumber = self.next_disk_unit_number
self.next_disk_unit_number += 1
            # unit number 7 is reserved for the SCSI controller, skip it
if self.next_disk_unit_number == 7:
self.next_disk_unit_number += 1
return diskspec
def get_device(self, device_type, name):
nic_dict = dict(pcnet32=vim.vm.device.VirtualPCNet32(),
vmxnet2=vim.vm.device.VirtualVmxnet2(),
vmxnet3=vim.vm.device.VirtualVmxnet3(),
e1000=vim.vm.device.VirtualE1000(),
e1000e=vim.vm.device.VirtualE1000e(),
sriov=vim.vm.device.VirtualSriovEthernetCard(),
)
if device_type in nic_dict:
return nic_dict[device_type]
else:
self.module.fail_json(msg='Invalid device_type "%s"'
' for network "%s"' % (device_type, name))
def create_nic(self, device_type, device_label, device_infos):
nic = vim.vm.device.VirtualDeviceSpec()
nic.device = self.get_device(device_type, device_infos['name'])
nic.device.wakeOnLanEnabled = bool(device_infos.get('wake_on_lan', True))
nic.device.deviceInfo = vim.Description()
nic.device.deviceInfo.label = device_label
nic.device.deviceInfo.summary = device_infos['name']
nic.device.connectable = vim.vm.device.VirtualDevice.ConnectInfo()
nic.device.connectable.startConnected = bool(device_infos.get('start_connected', True))
nic.device.connectable.allowGuestControl = bool(device_infos.get('allow_guest_control', True))
nic.device.connectable.connected = True
if 'mac' in device_infos and is_mac(device_infos['mac']):
nic.device.addressType = 'manual'
nic.device.macAddress = device_infos['mac']
else:
nic.device.addressType = 'generated'
return nic
def integer_value(self, input_value, name):
"""
Function to return int value for given input, else return error
Args:
input_value: Input value to retrive int value from
name: Name of the Input value (used to build error message)
Returns: (int) if integer value can be obtained, otherwise will send a error message.
"""
if isinstance(input_value, int):
return input_value
elif isinstance(input_value, str) and input_value.isdigit():
return int(input_value)
else:
self.module.fail_json(msg='"%s" attribute should be an'
' integer value.' % name)
class PyVmomiCache(object):
""" This class caches references to objects which are requested multiples times but not modified """
def __init__(self, content, dc_name=None):
self.content = content
self.dc_name = dc_name
self.networks = {}
self.clusters = {}
self.esx_hosts = {}
self.parent_datacenters = {}
def find_obj(self, content, types, name, confine_to_datacenter=True):
""" Wrapper around find_obj to set datacenter context """
result = find_obj(content, types, name)
if result and confine_to_datacenter:
if to_text(self.get_parent_datacenter(result).name) != to_text(self.dc_name):
result = None
objects = self.get_all_objs(content, types, confine_to_datacenter=True)
for obj in objects:
if name is None or to_text(obj.name) == to_text(name):
return obj
return result
def get_all_objs(self, content, types, confine_to_datacenter=True):
""" Wrapper around get_all_objs to set datacenter context """
objects = get_all_objs(content, types)
if confine_to_datacenter:
if hasattr(objects, 'items'):
# resource pools come back as a dictionary
# make a copy
tmpobjs = objects.copy()
for k, v in objects.items():
parent_dc = self.get_parent_datacenter(k)
if parent_dc.name != self.dc_name:
tmpobjs.pop(k, None)
objects = tmpobjs
else:
# everything else should be a list
objects = [x for x in objects if self.get_parent_datacenter(x).name == self.dc_name]
return objects
def get_network(self, network):
if network not in self.networks:
self.networks[network] = self.find_obj(self.content, [vim.Network], network)
return self.networks[network]
def get_cluster(self, cluster):
if cluster not in self.clusters:
self.clusters[cluster] = self.find_obj(self.content, [vim.ClusterComputeResource], cluster)
return self.clusters[cluster]
def get_esx_host(self, host):
if host not in self.esx_hosts:
self.esx_hosts[host] = self.find_obj(self.content, [vim.HostSystem], host)
return self.esx_hosts[host]
def get_parent_datacenter(self, obj):
""" Walk the parent tree to find the objects datacenter """
if isinstance(obj, vim.Datacenter):
return obj
if obj in self.parent_datacenters:
return self.parent_datacenters[obj]
datacenter = None
while True:
if not hasattr(obj, 'parent'):
break
obj = obj.parent
if isinstance(obj, vim.Datacenter):
datacenter = obj
break
self.parent_datacenters[obj] = datacenter
return datacenter
class PyVmomiHelper(PyVmomi):
def __init__(self, module):
super(PyVmomiHelper, self).__init__(module)
self.device_helper = PyVmomiDeviceHelper(self.module)
self.configspec = None
self.relospec = None
self.change_detected = False # a change was detected and needs to be applied through reconfiguration
self.change_applied = False # a change was applied meaning at least one task succeeded
self.customspec = None
self.cache = PyVmomiCache(self.content, dc_name=self.params['datacenter'])
def gather_facts(self, vm):
return gather_vm_facts(self.content, vm)
def remove_vm(self, vm):
# https://www.vmware.com/support/developer/converter-sdk/conv60_apireference/vim.ManagedEntity.html#destroy
if vm.summary.runtime.powerState.lower() == 'poweredon':
self.module.fail_json(msg="Virtual machine %s found in 'powered on' state, "
"please use 'force' parameter to remove or poweroff VM "
"and try removing VM again." % vm.name)
task = vm.Destroy()
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'destroy'}
else:
return {'changed': self.change_applied, 'failed': False}
def configure_guestid(self, vm_obj, vm_creation=False):
# guest_id is not required when using templates
if self.params['template']:
return
# guest_id is only mandatory on VM creation
if vm_creation and self.params['guest_id'] is None:
self.module.fail_json(msg="guest_id attribute is mandatory for VM creation")
if self.params['guest_id'] and \
(vm_obj is None or self.params['guest_id'].lower() != vm_obj.summary.config.guestId.lower()):
self.change_detected = True
self.configspec.guestId = self.params['guest_id']
def configure_resource_alloc_info(self, vm_obj):
"""
Function to configure resource allocation information about virtual machine
:param vm_obj: VM object in case of reconfigure, None in case of deploy
:return: None
"""
rai_change_detected = False
memory_allocation = vim.ResourceAllocationInfo()
cpu_allocation = vim.ResourceAllocationInfo()
if 'hardware' in self.params:
if 'mem_limit' in self.params['hardware']:
mem_limit = None
try:
mem_limit = int(self.params['hardware'].get('mem_limit'))
except ValueError:
self.module.fail_json(msg="hardware.mem_limit attribute should be an integer value.")
memory_allocation.limit = mem_limit
if vm_obj is None or memory_allocation.limit != vm_obj.config.memoryAllocation.limit:
rai_change_detected = True
if 'mem_reservation' in self.params['hardware'] or 'memory_reservation' in self.params['hardware']:
mem_reservation = self.params['hardware'].get('mem_reservation')
if mem_reservation is None:
mem_reservation = self.params['hardware'].get('memory_reservation')
try:
mem_reservation = int(mem_reservation)
except ValueError:
self.module.fail_json(msg="hardware.mem_reservation or hardware.memory_reservation should be an integer value.")
memory_allocation.reservation = mem_reservation
if vm_obj is None or \
memory_allocation.reservation != vm_obj.config.memoryAllocation.reservation:
rai_change_detected = True
if 'cpu_limit' in self.params['hardware']:
cpu_limit = None
try:
cpu_limit = int(self.params['hardware'].get('cpu_limit'))
except ValueError:
self.module.fail_json(msg="hardware.cpu_limit attribute should be an integer value.")
cpu_allocation.limit = cpu_limit
if vm_obj is None or cpu_allocation.limit != vm_obj.config.cpuAllocation.limit:
rai_change_detected = True
if 'cpu_reservation' in self.params['hardware']:
cpu_reservation = None
try:
cpu_reservation = int(self.params['hardware'].get('cpu_reservation'))
except ValueError:
self.module.fail_json(msg="hardware.cpu_reservation should be an integer value.")
cpu_allocation.reservation = cpu_reservation
if vm_obj is None or \
cpu_allocation.reservation != vm_obj.config.cpuAllocation.reservation:
rai_change_detected = True
if rai_change_detected:
self.configspec.memoryAllocation = memory_allocation
self.configspec.cpuAllocation = cpu_allocation
self.change_detected = True
def configure_cpu_and_memory(self, vm_obj, vm_creation=False):
# set cpu/memory/etc
if 'hardware' in self.params:
if 'num_cpus' in self.params['hardware']:
try:
num_cpus = int(self.params['hardware']['num_cpus'])
except ValueError:
self.module.fail_json(msg="hardware.num_cpus attribute should be an integer value.")
# check VM power state and cpu hot-add/hot-remove state before re-config VM
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
if not vm_obj.config.cpuHotRemoveEnabled and num_cpus < vm_obj.config.hardware.numCPU:
self.module.fail_json(msg="Configured cpu number is less than the cpu number of the VM, "
"cpuHotRemove is not enabled")
if not vm_obj.config.cpuHotAddEnabled and num_cpus > vm_obj.config.hardware.numCPU:
self.module.fail_json(msg="Configured cpu number is more than the cpu number of the VM, "
"cpuHotAdd is not enabled")
if 'num_cpu_cores_per_socket' in self.params['hardware']:
try:
num_cpu_cores_per_socket = int(self.params['hardware']['num_cpu_cores_per_socket'])
except ValueError:
self.module.fail_json(msg="hardware.num_cpu_cores_per_socket attribute "
"should be an integer value.")
if num_cpus % num_cpu_cores_per_socket != 0:
self.module.fail_json(msg="hardware.num_cpus attribute should be a multiple "
"of hardware.num_cpu_cores_per_socket")
self.configspec.numCoresPerSocket = num_cpu_cores_per_socket
if vm_obj is None or self.configspec.numCoresPerSocket != vm_obj.config.hardware.numCoresPerSocket:
self.change_detected = True
self.configspec.numCPUs = num_cpus
if vm_obj is None or self.configspec.numCPUs != vm_obj.config.hardware.numCPU:
self.change_detected = True
# num_cpu is mandatory for VM creation
elif vm_creation and not self.params['template']:
self.module.fail_json(msg="hardware.num_cpus attribute is mandatory for VM creation")
if 'memory_mb' in self.params['hardware']:
try:
memory_mb = int(self.params['hardware']['memory_mb'])
except ValueError:
self.module.fail_json(msg="Failed to parse hardware.memory_mb value."
" Please refer the documentation and provide"
" correct value.")
# check VM power state and memory hotadd state before re-config VM
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
if vm_obj.config.memoryHotAddEnabled and memory_mb < vm_obj.config.hardware.memoryMB:
self.module.fail_json(msg="Configured memory is less than memory size of the VM, "
"operation is not supported")
elif not vm_obj.config.memoryHotAddEnabled and memory_mb != vm_obj.config.hardware.memoryMB:
self.module.fail_json(msg="memoryHotAdd is not enabled")
self.configspec.memoryMB = memory_mb
if vm_obj is None or self.configspec.memoryMB != vm_obj.config.hardware.memoryMB:
self.change_detected = True
# memory_mb is mandatory for VM creation
elif vm_creation and not self.params['template']:
self.module.fail_json(msg="hardware.memory_mb attribute is mandatory for VM creation")
if 'hotadd_memory' in self.params['hardware']:
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn and \
vm_obj.config.memoryHotAddEnabled != bool(self.params['hardware']['hotadd_memory']):
self.module.fail_json(msg="Configure hotadd memory operation is not supported when VM is power on")
self.configspec.memoryHotAddEnabled = bool(self.params['hardware']['hotadd_memory'])
if vm_obj is None or self.configspec.memoryHotAddEnabled != vm_obj.config.memoryHotAddEnabled:
self.change_detected = True
if 'hotadd_cpu' in self.params['hardware']:
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn and \
vm_obj.config.cpuHotAddEnabled != bool(self.params['hardware']['hotadd_cpu']):
self.module.fail_json(msg="Configure hotadd cpu operation is not supported when VM is power on")
self.configspec.cpuHotAddEnabled = bool(self.params['hardware']['hotadd_cpu'])
if vm_obj is None or self.configspec.cpuHotAddEnabled != vm_obj.config.cpuHotAddEnabled:
self.change_detected = True
if 'hotremove_cpu' in self.params['hardware']:
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn and \
vm_obj.config.cpuHotRemoveEnabled != bool(self.params['hardware']['hotremove_cpu']):
self.module.fail_json(msg="Configure hotremove cpu operation is not supported when VM is power on")
self.configspec.cpuHotRemoveEnabled = bool(self.params['hardware']['hotremove_cpu'])
if vm_obj is None or self.configspec.cpuHotRemoveEnabled != vm_obj.config.cpuHotRemoveEnabled:
self.change_detected = True
if 'memory_reservation_lock' in self.params['hardware']:
self.configspec.memoryReservationLockedToMax = bool(self.params['hardware']['memory_reservation_lock'])
if vm_obj is None or self.configspec.memoryReservationLockedToMax != vm_obj.config.memoryReservationLockedToMax:
self.change_detected = True
if 'boot_firmware' in self.params['hardware']:
# boot firmware re-config can cause boot issue
if vm_obj is not None:
return
boot_firmware = self.params['hardware']['boot_firmware'].lower()
if boot_firmware not in ('bios', 'efi'):
self.module.fail_json(msg="hardware.boot_firmware value is invalid [%s]."
" Need one of ['bios', 'efi']." % boot_firmware)
self.configspec.firmware = boot_firmware
self.change_detected = True
def configure_cdrom(self, vm_obj):
# Configure the VM CD-ROM
if "cdrom" in self.params and self.params["cdrom"]:
if "type" not in self.params["cdrom"] or self.params["cdrom"]["type"] not in ["none", "client", "iso"]:
self.module.fail_json(msg="cdrom.type is mandatory")
if self.params["cdrom"]["type"] == "iso" and ("iso_path" not in self.params["cdrom"] or not self.params["cdrom"]["iso_path"]):
self.module.fail_json(msg="cdrom.iso_path is mandatory in case cdrom.type is iso")
if vm_obj and vm_obj.config.template:
# Changing CD-ROM settings on a template is not supported
return
cdrom_spec = None
cdrom_device = self.get_vm_cdrom_device(vm=vm_obj)
iso_path = self.params["cdrom"]["iso_path"] if "iso_path" in self.params["cdrom"] else None
if cdrom_device is None:
# Creating new CD-ROM
ide_device = self.get_vm_ide_device(vm=vm_obj)
if ide_device is None:
# Creating new IDE device
ide_device = self.device_helper.create_ide_controller()
self.change_detected = True
self.configspec.deviceChange.append(ide_device)
elif len(ide_device.device) > 3:
self.module.fail_json(msg="hardware.cdrom specified for a VM or template which already has 4 IDE devices of which none are a cdrom")
cdrom_spec = self.device_helper.create_cdrom(ide_ctl=ide_device, cdrom_type=self.params["cdrom"]["type"], iso_path=iso_path)
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
cdrom_spec.device.connectable.connected = (self.params["cdrom"]["type"] != "none")
elif not self.device_helper.is_equal_cdrom(vm_obj=vm_obj, cdrom_device=cdrom_device, cdrom_type=self.params["cdrom"]["type"], iso_path=iso_path):
# Updating an existing CD-ROM
if self.params["cdrom"]["type"] in ["client", "none"]:
cdrom_device.backing = vim.vm.device.VirtualCdrom.RemotePassthroughBackingInfo()
elif self.params["cdrom"]["type"] == "iso":
cdrom_device.backing = vim.vm.device.VirtualCdrom.IsoBackingInfo(fileName=iso_path)
cdrom_device.connectable = vim.vm.device.VirtualDevice.ConnectInfo()
cdrom_device.connectable.allowGuestControl = True
cdrom_device.connectable.startConnected = (self.params["cdrom"]["type"] != "none")
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
cdrom_device.connectable.connected = (self.params["cdrom"]["type"] != "none")
cdrom_spec = vim.vm.device.VirtualDeviceSpec()
cdrom_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
cdrom_spec.device = cdrom_device
if cdrom_spec:
self.change_detected = True
self.configspec.deviceChange.append(cdrom_spec)
def configure_hardware_params(self, vm_obj):
"""
Function to configure hardware related configuration of virtual machine
Args:
vm_obj: virtual machine object
"""
if 'hardware' in self.params:
if 'max_connections' in self.params['hardware']:
# maxMksConnections == max_connections
self.configspec.maxMksConnections = int(self.params['hardware']['max_connections'])
if vm_obj is None or self.configspec.maxMksConnections != vm_obj.config.maxMksConnections:
self.change_detected = True
if 'nested_virt' in self.params['hardware']:
self.configspec.nestedHVEnabled = bool(self.params['hardware']['nested_virt'])
if vm_obj is None or self.configspec.nestedHVEnabled != bool(vm_obj.config.nestedHVEnabled):
self.change_detected = True
if 'version' in self.params['hardware']:
hw_version_check_failed = False
temp_version = self.params['hardware'].get('version', 10)
try:
temp_version = int(temp_version)
except ValueError:
hw_version_check_failed = True
if temp_version not in range(3, 15):
hw_version_check_failed = True
if hw_version_check_failed:
self.module.fail_json(msg="Failed to set hardware.version '%s' value as valid"
" values range from 3 (ESX 2.x) to 14 (ESXi 6.5 and greater)." % temp_version)
# Hardware version is denoted as "vmx-10"
version = "vmx-%02d" % temp_version
self.configspec.version = version
if vm_obj is None or self.configspec.version != vm_obj.config.version:
self.change_detected = True
if vm_obj is not None:
# VM exists and we need to update the hardware version
current_version = vm_obj.config.version
# current_version = "vmx-10"
version_digit = int(current_version.split("-", 1)[-1])
if temp_version < version_digit:
self.module.fail_json(msg="Current hardware version '%d' which is greater than the specified"
" version '%d'. Downgrading hardware version is"
" not supported. Please specify version greater"
" than the current version." % (version_digit,
temp_version))
new_version = "vmx-%02d" % temp_version
try:
task = vm_obj.UpgradeVM_Task(new_version)
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'upgrade'}
except vim.fault.AlreadyUpgraded:
# Don't fail if VM is already upgraded.
pass
if 'virt_based_security' in self.params['hardware']:
host_version = self.select_host().summary.config.product.version
if int(host_version.split('.')[0]) < 6 or (int(host_version.split('.')[0]) == 6 and int(host_version.split('.')[1]) < 7):
self.module.fail_json(msg="ESXi version %s not support VBS." % host_version)
guest_ids = ['windows9_64Guest', 'windows9Server64Guest']
if vm_obj is None:
guestid = self.configspec.guestId
else:
guestid = vm_obj.summary.config.guestId
if guestid not in guest_ids:
self.module.fail_json(msg="Guest '%s' not support VBS." % guestid)
if (vm_obj is None and int(self.configspec.version.split('-')[1]) >= 14) or \
(vm_obj and int(vm_obj.config.version.split('-')[1]) >= 14 and (vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOff)):
self.configspec.flags = vim.vm.FlagInfo()
self.configspec.flags.vbsEnabled = bool(self.params['hardware']['virt_based_security'])
if bool(self.params['hardware']['virt_based_security']):
self.configspec.flags.vvtdEnabled = True
self.configspec.nestedHVEnabled = True
if (vm_obj is None and self.configspec.firmware == 'efi') or \
(vm_obj and vm_obj.config.firmware == 'efi'):
self.configspec.bootOptions = vim.vm.BootOptions()
self.configspec.bootOptions.efiSecureBootEnabled = True
else:
self.module.fail_json(msg="Not support VBS when firmware is BIOS.")
if vm_obj is None or self.configspec.flags.vbsEnabled != vm_obj.config.flags.vbsEnabled:
self.change_detected = True
def get_device_by_type(self, vm=None, type=None):
if vm is None or type is None:
return None
for device in vm.config.hardware.device:
if isinstance(device, type):
return device
return None
def get_vm_cdrom_device(self, vm=None):
return self.get_device_by_type(vm=vm, type=vim.vm.device.VirtualCdrom)
def get_vm_ide_device(self, vm=None):
return self.get_device_by_type(vm=vm, type=vim.vm.device.VirtualIDEController)
def get_vm_network_interfaces(self, vm=None):
device_list = []
if vm is None:
return device_list
nw_device_types = (vim.vm.device.VirtualPCNet32, vim.vm.device.VirtualVmxnet2,
vim.vm.device.VirtualVmxnet3, vim.vm.device.VirtualE1000,
vim.vm.device.VirtualE1000e, vim.vm.device.VirtualSriovEthernetCard)
for device in vm.config.hardware.device:
if isinstance(device, nw_device_types):
device_list.append(device)
return device_list
def sanitize_network_params(self):
"""
Sanitize user provided network provided params
Returns: A sanitized list of network params, else fails
"""
network_devices = list()
# Clean up user data here
for network in self.params['networks']:
if 'name' not in network and 'vlan' not in network:
self.module.fail_json(msg="Please specify at least a network name or"
" a VLAN name under VM network list.")
if 'name' in network and self.cache.get_network(network['name']) is None:
self.module.fail_json(msg="Network '%(name)s' does not exist." % network)
elif 'vlan' in network:
dvps = self.cache.get_all_objs(self.content, [vim.dvs.DistributedVirtualPortgroup])
for dvp in dvps:
if hasattr(dvp.config.defaultPortConfig, 'vlan') and \
isinstance(dvp.config.defaultPortConfig.vlan.vlanId, int) and \
str(dvp.config.defaultPortConfig.vlan.vlanId) == str(network['vlan']):
network['name'] = dvp.config.name
break
if 'dvswitch_name' in network and \
dvp.config.distributedVirtualSwitch.name == network['dvswitch_name'] and \
dvp.config.name == network['vlan']:
network['name'] = dvp.config.name
break
if dvp.config.name == network['vlan']:
network['name'] = dvp.config.name
break
else:
self.module.fail_json(msg="VLAN '%(vlan)s' does not exist." % network)
if 'type' in network:
if network['type'] not in ['dhcp', 'static']:
self.module.fail_json(msg="Network type '%(type)s' is not a valid parameter."
" Valid parameters are ['dhcp', 'static']." % network)
if network['type'] != 'static' and ('ip' in network or 'netmask' in network):
self.module.fail_json(msg='Static IP information provided for network "%(name)s",'
' but "type" is set to "%(type)s".' % network)
else:
# Type is optional parameter, if user provided IP or Subnet assume
# network type as 'static'
if 'ip' in network or 'netmask' in network:
network['type'] = 'static'
else:
# User wants network type as 'dhcp'
network['type'] = 'dhcp'
if network.get('type') == 'static':
if 'ip' in network and 'netmask' not in network:
self.module.fail_json(msg="'netmask' is required if 'ip' is"
" specified under VM network list.")
if 'ip' not in network and 'netmask' in network:
self.module.fail_json(msg="'ip' is required if 'netmask' is"
" specified under VM network list.")
validate_device_types = ['pcnet32', 'vmxnet2', 'vmxnet3', 'e1000', 'e1000e', 'sriov']
if 'device_type' in network and network['device_type'] not in validate_device_types:
self.module.fail_json(msg="Device type specified '%s' is not valid."
" Please specify correct device"
" type from ['%s']." % (network['device_type'],
"', '".join(validate_device_types)))
if 'mac' in network and not is_mac(network['mac']):
self.module.fail_json(msg="Device MAC address '%s' is invalid."
" Please provide correct MAC address." % network['mac'])
network_devices.append(network)
return network_devices
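# For illustration only (all values below are hypothetical): a 'networks'
# parameter that passes the validation above could look like this. The first
# entry gets 'type' inferred as 'static' because 'ip' and 'netmask' are both
# present; the second explicitly requests DHCP on a VLAN:
#
#   networks:
#     - name: VM Network
#       device_type: vmxnet3
#       ip: 192.168.10.11
#       netmask: 255.255.255.0
#     - vlan: 101
#       type: dhcp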
def configure_network(self, vm_obj):
# Ignore empty networks; this permits keeping networks when deploying a template/cloning a VM
if len(self.params['networks']) == 0:
return
network_devices = self.sanitize_network_params()
# List current device for Clone or Idempotency
current_net_devices = self.get_vm_network_interfaces(vm=vm_obj)
if len(network_devices) < len(current_net_devices):
self.module.fail_json(msg="Given network device list is smaller than the current VM device list (%d < %d). "
"Removing interfaces is not allowed"
% (len(network_devices), len(current_net_devices)))
for key in range(0, len(network_devices)):
nic_change_detected = False
network_name = network_devices[key]['name']
if key < len(current_net_devices) and (vm_obj or self.params['template']):
# We are editing existing network devices; this happens either when we
# are cloning from a VM or a Template
nic = vim.vm.device.VirtualDeviceSpec()
nic.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
nic.device = current_net_devices[key]
if ('wake_on_lan' in network_devices[key] and
nic.device.wakeOnLanEnabled != network_devices[key].get('wake_on_lan')):
nic.device.wakeOnLanEnabled = network_devices[key].get('wake_on_lan')
nic_change_detected = True
if ('start_connected' in network_devices[key] and
nic.device.connectable.startConnected != network_devices[key].get('start_connected')):
nic.device.connectable.startConnected = network_devices[key].get('start_connected')
nic_change_detected = True
if ('allow_guest_control' in network_devices[key] and
nic.device.connectable.allowGuestControl != network_devices[key].get('allow_guest_control')):
nic.device.connectable.allowGuestControl = network_devices[key].get('allow_guest_control')
nic_change_detected = True
if nic.device.deviceInfo.summary != network_name:
nic.device.deviceInfo.summary = network_name
nic_change_detected = True
if 'device_type' in network_devices[key]:
device = self.device_helper.get_device(network_devices[key]['device_type'], network_name)
device_class = type(device)
if not isinstance(nic.device, device_class):
self.module.fail_json(msg="Changing the device type is not possible when interface is already present. "
"The failing device type is %s" % network_devices[key]['device_type'])
# Changing mac address has no effect when editing interface
if 'mac' in network_devices[key] and nic.device.macAddress != current_net_devices[key].macAddress:
self.module.fail_json(msg="Changing MAC address has no effect when interface is already present. "
"The failing new MAC address is %s" % nic.device.macAddress)
else:
# Default device type is vmxnet3, VMware best practice
device_type = network_devices[key].get('device_type', 'vmxnet3')
nic = self.device_helper.create_nic(device_type,
'Network Adapter %s' % (key + 1),
network_devices[key])
nic.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
nic_change_detected = True
if hasattr(self.cache.get_network(network_name), 'portKeys'):
# VDS switch
pg_obj = None
if 'dvswitch_name' in network_devices[key]:
dvs_name = network_devices[key]['dvswitch_name']
dvs_obj = find_dvs_by_name(self.content, dvs_name)
if dvs_obj is None:
self.module.fail_json(msg="Unable to find distributed virtual switch %s" % dvs_name)
pg_obj = find_dvspg_by_name(dvs_obj, network_name)
if pg_obj is None:
self.module.fail_json(msg="Unable to find distributed port group %s" % network_name)
else:
pg_obj = self.cache.find_obj(self.content, [vim.dvs.DistributedVirtualPortgroup], network_name)
if (nic.device.backing and
(not hasattr(nic.device.backing, 'port') or
(nic.device.backing.port.portgroupKey != pg_obj.key or
nic.device.backing.port.switchUuid != pg_obj.config.distributedVirtualSwitch.uuid))):
nic_change_detected = True
dvs_port_connection = vim.dvs.PortConnection()
dvs_port_connection.portgroupKey = pg_obj.key
# If the user specifies a distributed port group without associating it to the host system on which
# the virtual machine is going to be deployed, we get an error. We can infer that there is no
# association between the given distributed port group and the host system.
host_system = self.params.get('esxi_hostname')
if host_system and host_system not in [host.config.host.name for host in pg_obj.config.distributedVirtualSwitch.config.host]:
self.module.fail_json(msg="It seems that host system '%s' is not associated with distributed"
" virtual portgroup '%s'. Please make sure host system is associated"
" with given distributed virtual portgroup" % (host_system, pg_obj.name))
# TODO: (akasurde) There is no way to find association between resource pool and distributed virtual portgroup
# For now, check if we are able to find distributed virtual switch
if not pg_obj.config.distributedVirtualSwitch:
self.module.fail_json(msg="Failed to find distributed virtual switch which is associated with"
" distributed virtual portgroup '%s'. Make sure hostsystem is associated with"
" the given distributed virtual portgroup." % pg_obj.name)
dvs_port_connection.switchUuid = pg_obj.config.distributedVirtualSwitch.uuid
nic.device.backing = vim.vm.device.VirtualEthernetCard.DistributedVirtualPortBackingInfo()
nic.device.backing.port = dvs_port_connection
elif isinstance(self.cache.get_network(network_name), vim.OpaqueNetwork):
# NSX-T Logical Switch
nic.device.backing = vim.vm.device.VirtualEthernetCard.OpaqueNetworkBackingInfo()
network_id = self.cache.get_network(network_name).summary.opaqueNetworkId
nic.device.backing.opaqueNetworkType = 'nsx.LogicalSwitch'
nic.device.backing.opaqueNetworkId = network_id
nic.device.deviceInfo.summary = 'nsx.LogicalSwitch: %s' % network_id
nic_change_detected = True
else:
# vSwitch
if not isinstance(nic.device.backing, vim.vm.device.VirtualEthernetCard.NetworkBackingInfo):
nic.device.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo()
nic_change_detected = True
net_obj = self.cache.get_network(network_name)
if nic.device.backing.network != net_obj:
nic.device.backing.network = net_obj
nic_change_detected = True
if nic.device.backing.deviceName != network_name:
nic.device.backing.deviceName = network_name
nic_change_detected = True
if nic_change_detected:
# Change to fix the issue found while configuring opaque network
# VMs cloned from a template with opaque network will get disconnected
# Replacing deprecated config parameter with relocation Spec
if isinstance(self.cache.get_network(network_name), vim.OpaqueNetwork):
self.relospec.deviceChange.append(nic)
else:
self.configspec.deviceChange.append(nic)
self.change_detected = True
def configure_vapp_properties(self, vm_obj):
if len(self.params['vapp_properties']) == 0:
return
for x in self.params['vapp_properties']:
if not x.get('id'):
self.module.fail_json(msg="id is required to set vApp property")
new_vmconfig_spec = vim.vApp.VmConfigSpec()
if vm_obj:
# VM exists
# This is primarily for vcsim/integration tests, unset vAppConfig was not seen on my deployments
orig_spec = vm_obj.config.vAppConfig if vm_obj.config.vAppConfig else new_vmconfig_spec
vapp_properties_current = dict((x.id, x) for x in orig_spec.property)
vapp_properties_to_change = dict((x['id'], x) for x in self.params['vapp_properties'])
# each property must have a unique key
# init key counter with max value + 1
all_keys = [x.key for x in orig_spec.property]
new_property_index = max(all_keys) + 1 if all_keys else 0
for property_id, property_spec in vapp_properties_to_change.items():
is_property_changed = False
new_vapp_property_spec = vim.vApp.PropertySpec()
if property_id in vapp_properties_current:
if property_spec.get('operation') == 'remove':
new_vapp_property_spec.operation = 'remove'
new_vapp_property_spec.removeKey = vapp_properties_current[property_id].key
is_property_changed = True
else:
# this is 'edit' branch
new_vapp_property_spec.operation = 'edit'
new_vapp_property_spec.info = vapp_properties_current[property_id]
try:
for property_name, property_value in property_spec.items():
if property_name == 'operation':
# operation is not an info object property
# if set to anything other than 'remove' we don't fail
continue
# Updating attributes only if needed
if getattr(new_vapp_property_spec.info, property_name) != property_value:
setattr(new_vapp_property_spec.info, property_name, property_value)
is_property_changed = True
except Exception as e:
msg = "Failed to set vApp property field='%s' and value='%s'. Error: %s" % (property_name, property_value, to_text(e))
self.module.fail_json(msg=msg)
else:
if property_spec.get('operation') == 'remove':
# attempt to delete non-existent property
continue
# this is add new property branch
new_vapp_property_spec.operation = 'add'
property_info = vim.vApp.PropertyInfo()
property_info.classId = property_spec.get('classId')
property_info.instanceId = property_spec.get('instanceId')
property_info.id = property_spec.get('id')
property_info.category = property_spec.get('category')
property_info.label = property_spec.get('label')
property_info.type = property_spec.get('type', 'string')
property_info.userConfigurable = property_spec.get('userConfigurable', True)
property_info.defaultValue = property_spec.get('defaultValue')
property_info.value = property_spec.get('value', '')
property_info.description = property_spec.get('description')
new_vapp_property_spec.info = property_info
new_vapp_property_spec.info.key = new_property_index
new_property_index += 1
is_property_changed = True
if is_property_changed:
new_vmconfig_spec.property.append(new_vapp_property_spec)
else:
# New VM
all_keys = [x.key for x in new_vmconfig_spec.property]
new_property_index = max(all_keys) + 1 if all_keys else 0
vapp_properties_to_change = dict((x['id'], x) for x in self.params['vapp_properties'])
is_property_changed = False
for property_id, property_spec in vapp_properties_to_change.items():
new_vapp_property_spec = vim.vApp.PropertySpec()
# this is add new property branch
new_vapp_property_spec.operation = 'add'
property_info = vim.vApp.PropertyInfo()
property_info.classId = property_spec.get('classId')
property_info.instanceId = property_spec.get('instanceId')
property_info.id = property_spec.get('id')
property_info.category = property_spec.get('category')
property_info.label = property_spec.get('label')
property_info.type = property_spec.get('type', 'string')
property_info.userConfigurable = property_spec.get('userConfigurable', True)
property_info.defaultValue = property_spec.get('defaultValue')
property_info.value = property_spec.get('value', '')
property_info.description = property_spec.get('description')
new_vapp_property_spec.info = property_info
new_vapp_property_spec.info.key = new_property_index
new_property_index += 1
is_property_changed = True
if is_property_changed:
new_vmconfig_spec.property.append(new_vapp_property_spec)
if new_vmconfig_spec.property:
self.configspec.vAppConfig = new_vmconfig_spec
self.change_detected = True
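# For illustration only (the property ids and values are hypothetical): a
# 'vapp_properties' list handled by configure_vapp_properties() above might
# look like this — the first entry is added/edited, the second removed:
#
#   vapp_properties:
#     - id: hostname
#       type: string
#       value: web01
#     - id: old_property
#       operation: remove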
def customize_customvalues(self, vm_obj):
if len(self.params['customvalues']) == 0:
return
facts = self.gather_facts(vm_obj)
for kv in self.params['customvalues']:
if 'key' not in kv or 'value' not in kv:
self.module.fail_json(msg="customvalues items require both 'key' and 'value' fields.")
key_id = None
for field in self.content.customFieldsManager.field:
if field.name == kv['key']:
key_id = field.key
break
if not key_id:
self.module.fail_json(msg="Unable to find custom value key %s" % kv['key'])
# If the key is missing from the current custom values, or its value differs, set it
if kv['key'] not in facts['customvalues'] or facts['customvalues'][kv['key']] != kv['value']:
self.content.customFieldsManager.SetField(entity=vm_obj, key=key_id, value=kv['value'])
self.change_detected = True
def customize_vm(self, vm_obj):
# User specified customization specification
custom_spec_name = self.params.get('customization_spec')
if custom_spec_name:
cc_mgr = self.content.customizationSpecManager
if cc_mgr.DoesCustomizationSpecExist(name=custom_spec_name):
temp_spec = cc_mgr.GetCustomizationSpec(name=custom_spec_name)
self.customspec = temp_spec.spec
return
else:
self.module.fail_json(msg="Unable to find customization specification"
" '%s' in given configuration." % custom_spec_name)
# Network settings
adaptermaps = []
for network in self.params['networks']:
guest_map = vim.vm.customization.AdapterMapping()
guest_map.adapter = vim.vm.customization.IPSettings()
if 'ip' in network and 'netmask' in network:
guest_map.adapter.ip = vim.vm.customization.FixedIp()
guest_map.adapter.ip.ipAddress = str(network['ip'])
guest_map.adapter.subnetMask = str(network['netmask'])
elif 'type' in network and network['type'] == 'dhcp':
guest_map.adapter.ip = vim.vm.customization.DhcpIpGenerator()
if 'gateway' in network:
guest_map.adapter.gateway = network['gateway']
# On Windows, DNS domain and DNS servers can be set by network interface
# https://pubs.vmware.com/vi3/sdk/ReferenceGuide/vim.vm.customization.IPSettings.html
if 'domain' in network:
guest_map.adapter.dnsDomain = network['domain']
elif 'domain' in self.params['customization']:
guest_map.adapter.dnsDomain = self.params['customization']['domain']
if 'dns_servers' in network:
guest_map.adapter.dnsServerList = network['dns_servers']
elif 'dns_servers' in self.params['customization']:
guest_map.adapter.dnsServerList = self.params['customization']['dns_servers']
adaptermaps.append(guest_map)
# Global DNS settings
globalip = vim.vm.customization.GlobalIPSettings()
if 'dns_servers' in self.params['customization']:
globalip.dnsServerList = self.params['customization']['dns_servers']
# TODO: Maybe list the different domains from the interfaces here by default?
if 'dns_suffix' in self.params['customization']:
dns_suffix = self.params['customization']['dns_suffix']
if isinstance(dns_suffix, list):
globalip.dnsSuffixList = " ".join(dns_suffix)
else:
globalip.dnsSuffixList = dns_suffix
elif 'domain' in self.params['customization']:
globalip.dnsSuffixList = self.params['customization']['domain']
if self.params['guest_id']:
guest_id = self.params['guest_id']
else:
guest_id = vm_obj.summary.config.guestId
# For windows guest OS, use SysPrep
# https://pubs.vmware.com/vi3/sdk/ReferenceGuide/vim.vm.customization.Sysprep.html#field_detail
if 'win' in guest_id:
ident = vim.vm.customization.Sysprep()
ident.userData = vim.vm.customization.UserData()
# Setting hostName, orgName and fullName is mandatory, so we set defaults when missing
ident.userData.computerName = vim.vm.customization.FixedName()
# computer name will be truncated to 15 characters if using VM name
default_name = self.params['name'].replace(' ', '')
default_name = ''.join([c for c in default_name if c not in string.punctuation])
ident.userData.computerName.name = str(self.params['customization'].get('hostname', default_name[0:15]))
ident.userData.fullName = str(self.params['customization'].get('fullname', 'Administrator'))
ident.userData.orgName = str(self.params['customization'].get('orgname', 'ACME'))
if 'productid' in self.params['customization']:
ident.userData.productId = str(self.params['customization']['productid'])
ident.guiUnattended = vim.vm.customization.GuiUnattended()
if 'autologon' in self.params['customization']:
ident.guiUnattended.autoLogon = self.params['customization']['autologon']
ident.guiUnattended.autoLogonCount = self.params['customization'].get('autologoncount', 1)
if 'timezone' in self.params['customization']:
# Check if timezone value is an int before proceeding.
ident.guiUnattended.timeZone = self.device_helper.integer_value(
self.params['customization']['timezone'],
'customization.timezone')
ident.identification = vim.vm.customization.Identification()
if self.params['customization'].get('password', '') != '':
ident.guiUnattended.password = vim.vm.customization.Password()
ident.guiUnattended.password.value = str(self.params['customization']['password'])
ident.guiUnattended.password.plainText = True
if 'joindomain' in self.params['customization']:
if 'domainadmin' not in self.params['customization'] or 'domainadminpassword' not in self.params['customization']:
self.module.fail_json(msg="'domainadmin' and 'domainadminpassword' entries are mandatory in 'customization' section to use "
"joindomain feature")
ident.identification.domainAdmin = str(self.params['customization']['domainadmin'])
ident.identification.joinDomain = str(self.params['customization']['joindomain'])
ident.identification.domainAdminPassword = vim.vm.customization.Password()
ident.identification.domainAdminPassword.value = str(self.params['customization']['domainadminpassword'])
ident.identification.domainAdminPassword.plainText = True
elif 'joinworkgroup' in self.params['customization']:
ident.identification.joinWorkgroup = str(self.params['customization']['joinworkgroup'])
if 'runonce' in self.params['customization']:
ident.guiRunOnce = vim.vm.customization.GuiRunOnce()
ident.guiRunOnce.commandList = self.params['customization']['runonce']
else:
# FIXME: We have no clue whether this non-Windows OS is actually Linux, hence it might fail!
# For Linux guest OS, use LinuxPrep
# https://pubs.vmware.com/vi3/sdk/ReferenceGuide/vim.vm.customization.LinuxPrep.html
ident = vim.vm.customization.LinuxPrep()
# TODO: Maybe add domain from interface if missing?
if 'domain' in self.params['customization']:
ident.domain = str(self.params['customization']['domain'])
ident.hostName = vim.vm.customization.FixedName()
hostname = str(self.params['customization'].get('hostname', self.params['name'].split('.')[0]))
# Remove all characters except alphanumeric and minus which is allowed by RFC 952
valid_hostname = re.sub(r"[^a-zA-Z0-9\-]", "", hostname)
ident.hostName.name = valid_hostname
# List of supported time zones for different vSphere versions in Linux/Unix systems
# https://kb.vmware.com/s/article/2145518
if 'timezone' in self.params['customization']:
ident.timeZone = str(self.params['customization']['timezone'])
if 'hwclockUTC' in self.params['customization']:
ident.hwClockUTC = self.params['customization']['hwclockUTC']
self.customspec = vim.vm.customization.Specification()
self.customspec.nicSettingMap = adaptermaps
self.customspec.globalIPSettings = globalip
self.customspec.identity = ident
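# A quick sketch of the RFC 952 hostname cleanup performed above (the input
# value is hypothetical): every character except alphanumerics and '-' is
# stripped before the name is handed to vim.vm.customization.FixedName, so
#   re.sub(r"[^a-zA-Z0-9\-]", "", "web_server.01")  ->  'webserver01'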
def get_vm_scsi_controller(self, vm_obj):
# If vm_obj doesn't exist there is no SCSI controller to find
if vm_obj is None:
return None
for device in vm_obj.config.hardware.device:
if self.device_helper.is_scsi_controller(device):
scsi_ctl = vim.vm.device.VirtualDeviceSpec()
scsi_ctl.device = device
return scsi_ctl
return None
def get_configured_disk_size(self, expected_disk_spec):
# what size is it?
if [x for x in expected_disk_spec.keys() if x.startswith('size_') or x == 'size']:
# size, size_tb, size_gb, size_mb, size_kb
if 'size' in expected_disk_spec:
size_regex = re.compile(r'(\d+(?:\.\d+)?)([tgmkTGMK][bB])')
disk_size_m = size_regex.match(expected_disk_spec['size'])
try:
if disk_size_m:
expected = disk_size_m.group(1)
unit = disk_size_m.group(2)
else:
raise ValueError
if re.match(r'\d+\.\d+', expected):
# We found float value in string, let's typecast it
expected = float(expected)
else:
# We found int value in string, let's typecast it
expected = int(expected)
if not expected or not unit:
raise ValueError
except (TypeError, ValueError, NameError):
# Common failure
self.module.fail_json(msg="Failed to parse disk size. Please review the value"
" provided, using the documentation.")
else:
param = [x for x in expected_disk_spec.keys() if x.startswith('size_')][0]
unit = param.split('_')[-1].lower()
expected = [x[1] for x in expected_disk_spec.items() if x[0].startswith('size_')][0]
expected = int(expected)
disk_units = dict(tb=3, gb=2, mb=1, kb=0)
if unit in disk_units:
unit = unit.lower()
return expected * (1024 ** disk_units[unit])
else:
self.module.fail_json(msg="%s is not a supported unit for disk size."
" Supported units are ['%s']." % (unit,
"', '".join(disk_units.keys())))
# A disk was specified but no size was found, fail
self.module.fail_json(
msg="No size, size_kb, size_mb, size_gb or size_tb attribute found in disk configuration")
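# A sketch of the size parsing above (the value '40gb' is hypothetical):
#   re.match(r'(\d+(?:\.\d+)?)([tgmkTGMK][bB])', '40gb').groups() -> ('40', 'gb')
# and since disk_units['gb'] == 2, the size returned in KB is
#   40 * (1024 ** 2) == 41943040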
def find_vmdk(self, vmdk_path):
"""
Takes a vsphere datastore path in the format
[datastore_name] path/to/file.vmdk
Returns vsphere file object or raises RuntimeError
"""
datastore_name, vmdk_fullpath, vmdk_filename, vmdk_folder = self.vmdk_disk_path_split(vmdk_path)
datastore = self.cache.find_obj(self.content, [vim.Datastore], datastore_name)
if datastore is None:
self.module.fail_json(msg="Failed to find the datastore %s" % datastore_name)
return self.find_vmdk_file(datastore, vmdk_fullpath, vmdk_filename, vmdk_folder)
def add_existing_vmdk(self, vm_obj, expected_disk_spec, diskspec, scsi_ctl):
"""
Adds vmdk file described by expected_disk_spec['filename'], retrieves the file
information and adds the correct spec to self.configspec.deviceChange.
"""
filename = expected_disk_spec['filename']
# if this is a new disk, or the disk file names are different
if (vm_obj and diskspec.device.backing.fileName != filename) or vm_obj is None:
vmdk_file = self.find_vmdk(expected_disk_spec['filename'])
diskspec.device.backing.fileName = expected_disk_spec['filename']
diskspec.device.capacityInKB = VmomiSupport.vmodlTypes['long'](vmdk_file.fileSize / 1024)
diskspec.device.key = -1
self.change_detected = True
self.configspec.deviceChange.append(diskspec)
def configure_disks(self, vm_obj):
# Ignore empty disk list; this permits keeping disks when deploying a template/cloning a VM
if len(self.params['disk']) == 0:
return
scsi_ctl = self.get_vm_scsi_controller(vm_obj)
# Create scsi controller only if we are deploying a new VM, not a template or reconfiguring
if vm_obj is None or scsi_ctl is None:
scsi_ctl = self.device_helper.create_scsi_controller(self.get_scsi_type())
self.change_detected = True
self.configspec.deviceChange.append(scsi_ctl)
disks = [x for x in vm_obj.config.hardware.device if isinstance(x, vim.vm.device.VirtualDisk)] \
if vm_obj is not None else None
if disks is not None and self.params.get('disk') and len(self.params.get('disk')) < len(disks):
self.module.fail_json(msg="Provided disks configuration has fewer disks than "
"the target object (%d vs %d)" % (len(self.params.get('disk')), len(disks)))
disk_index = 0
for expected_disk_spec in self.params.get('disk'):
disk_modified = False
# If we are manipulating an existing object which has disks and disk_index is within range
if vm_obj is not None and disks is not None and disk_index < len(disks):
diskspec = vim.vm.device.VirtualDeviceSpec()
# set the operation to edit so that it knows to keep other settings
diskspec.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
diskspec.device = disks[disk_index]
else:
diskspec = self.device_helper.create_scsi_disk(scsi_ctl, disk_index)
disk_modified = True
# increment index for next disk search
disk_index += 1
# index 7 is reserved for the SCSI controller
if disk_index == 7:
disk_index += 1
if 'disk_mode' in expected_disk_spec:
disk_mode = expected_disk_spec.get('disk_mode', 'persistent').lower()
valid_disk_mode = ['persistent', 'independent_persistent', 'independent_nonpersistent']
if disk_mode not in valid_disk_mode:
self.module.fail_json(msg="disk_mode specified is not valid."
" Should be one of ['%s']" % "', '".join(valid_disk_mode))
if (vm_obj and diskspec.device.backing.diskMode != disk_mode) or (vm_obj is None):
diskspec.device.backing.diskMode = disk_mode
disk_modified = True
else:
diskspec.device.backing.diskMode = "persistent"
# is it thin?
if 'type' in expected_disk_spec:
disk_type = expected_disk_spec.get('type', '').lower()
if disk_type == 'thin':
diskspec.device.backing.thinProvisioned = True
elif disk_type == 'eagerzeroedthick':
diskspec.device.backing.eagerlyScrub = True
if 'filename' in expected_disk_spec and expected_disk_spec['filename'] is not None:
self.add_existing_vmdk(vm_obj, expected_disk_spec, diskspec, scsi_ctl)
continue
elif vm_obj is None or self.params['template']:
# We are creating new VM or from Template
# Only create virtual device if not backed by vmdk in original template
if diskspec.device.backing.fileName == '':
diskspec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
# which datastore?
if expected_disk_spec.get('datastore'):
# TODO: This is already handled by the relocation spec,
# but it needs to eventually be handled for all the
# other disks defined
pass
kb = self.get_configured_disk_size(expected_disk_spec)
# VMware doesn't allow reducing disk sizes
if kb < diskspec.device.capacityInKB:
self.module.fail_json(
msg="Given disk size is smaller than found (%d < %d). Reducing disks is not allowed." %
(kb, diskspec.device.capacityInKB))
if kb != diskspec.device.capacityInKB or disk_modified:
diskspec.device.capacityInKB = kb
self.configspec.deviceChange.append(diskspec)
self.change_detected = True
def select_host(self):
hostsystem = self.cache.get_esx_host(self.params['esxi_hostname'])
if not hostsystem:
self.module.fail_json(msg='Failed to find ESX host "%(esxi_hostname)s"' % self.params)
if hostsystem.runtime.connectionState != 'connected' or hostsystem.runtime.inMaintenanceMode:
self.module.fail_json(msg='ESXi "%(esxi_hostname)s" is in invalid state or in maintenance mode.' % self.params)
return hostsystem
def autoselect_datastore(self):
datastore = None
datastores = self.cache.get_all_objs(self.content, [vim.Datastore])
if datastores is None or len(datastores) == 0:
self.module.fail_json(msg="Unable to find a datastore list when autoselecting")
datastore_freespace = 0
for ds in datastores:
if ds.summary.freeSpace > datastore_freespace:
datastore = ds
datastore_freespace = ds.summary.freeSpace
return datastore
def get_recommended_datastore(self, datastore_cluster_obj=None):
"""
Function to return Storage DRS recommended datastore from datastore cluster
Args:
datastore_cluster_obj: datastore cluster managed object
Returns: Name of recommended datastore from the given datastore cluster
"""
if datastore_cluster_obj is None:
return None
# Check if Datastore Cluster provided by user is SDRS ready
sdrs_status = datastore_cluster_obj.podStorageDrsEntry.storageDrsConfig.podConfig.enabled
if sdrs_status:
# We can get a storage recommendation only if SDRS is enabled on the given datastore cluster
pod_sel_spec = vim.storageDrs.PodSelectionSpec()
pod_sel_spec.storagePod = datastore_cluster_obj
storage_spec = vim.storageDrs.StoragePlacementSpec()
storage_spec.podSelectionSpec = pod_sel_spec
storage_spec.type = 'create'
try:
rec = self.content.storageResourceManager.RecommendDatastores(storageSpec=storage_spec)
rec_action = rec.recommendations[0].action[0]
return rec_action.destination.name
except Exception:
# There is some error so we fall back to general workflow
pass
datastore = None
datastore_freespace = 0
for ds in datastore_cluster_obj.childEntity:
if isinstance(ds, vim.Datastore) and ds.summary.freeSpace > datastore_freespace:
datastore = ds
datastore_freespace = ds.summary.freeSpace
if datastore:
return datastore.name
return None
def select_datastore(self, vm_obj=None):
datastore = None
datastore_name = None
if len(self.params['disk']) != 0:
# TODO: really use the datastore for newly created disks
if 'autoselect_datastore' in self.params['disk'][0] and self.params['disk'][0]['autoselect_datastore']:
datastores = self.cache.get_all_objs(self.content, [vim.Datastore])
datastores = [x for x in datastores if self.cache.get_parent_datacenter(x).name == self.params['datacenter']]
if datastores is None or len(datastores) == 0:
self.module.fail_json(msg="Unable to find a datastore list when autoselecting")
datastore_freespace = 0
for ds in datastores:
if (ds.summary.freeSpace > datastore_freespace) or (ds.summary.freeSpace == datastore_freespace and not datastore):
# If datastore field is provided, filter destination datastores
if 'datastore' in self.params['disk'][0] and \
isinstance(self.params['disk'][0]['datastore'], str) and \
ds.name.find(self.params['disk'][0]['datastore']) < 0:
continue
datastore = ds
datastore_name = datastore.name
datastore_freespace = ds.summary.freeSpace
elif 'datastore' in self.params['disk'][0]:
datastore_name = self.params['disk'][0]['datastore']
# Check if user has provided datastore cluster first
datastore_cluster = self.cache.find_obj(self.content, [vim.StoragePod], datastore_name)
if datastore_cluster:
# If the user specified a datastore cluster, get the recommended datastore
datastore_name = self.get_recommended_datastore(datastore_cluster_obj=datastore_cluster)
# Check if get_recommended_datastore or user specified datastore exists or not
datastore = self.cache.find_obj(self.content, [vim.Datastore], datastore_name)
else:
self.module.fail_json(msg="Either datastore or autoselect_datastore should be provided to select datastore")
if not datastore and self.params['template']:
# use the template's existing DS
disks = [x for x in vm_obj.config.hardware.device if isinstance(x, vim.vm.device.VirtualDisk)]
if disks:
datastore = disks[0].backing.datastore
datastore_name = datastore.name
# validation
if datastore:
dc = self.cache.get_parent_datacenter(datastore)
if dc.name != self.params['datacenter']:
datastore = self.autoselect_datastore()
datastore_name = datastore.name
if not datastore:
if len(self.params['disk']) != 0 or self.params['template'] is None:
self.module.fail_json(msg="Unable to find the datastore with given parameters."
" This could mean that %s is a non-existent virtual machine and the module tried to"
" deploy it as a new virtual machine with no disk. Please specify the disk parameter"
" or specify a template to clone from." % self.params['name'])
self.module.fail_json(msg="Failed to find a matching datastore")
return datastore, datastore_name
def obj_has_parent(self, obj, parent):
if obj is None and parent is None:
raise AssertionError()
current_parent = obj
while True:
if current_parent.name == parent.name:
return True
# Check if we have reached the root folder
moid = current_parent._moId
if moid in ['group-d1', 'ha-folder-root']:
return False
current_parent = current_parent.parent
if current_parent is None:
return False
def get_scsi_type(self):
disk_controller_type = "paravirtual"
# set cpu/memory/etc
if 'hardware' in self.params:
if 'scsi' in self.params['hardware']:
if self.params['hardware']['scsi'] in ['buslogic', 'paravirtual', 'lsilogic', 'lsilogicsas']:
disk_controller_type = self.params['hardware']['scsi']
else:
self.module.fail_json(msg="hardware.scsi attribute should be one of 'buslogic', 'paravirtual', 'lsilogic' or 'lsilogicsas'")
return disk_controller_type
def find_folder(self, searchpath):
""" Walk inventory objects one position of the searchpath at a time """
# split the searchpath so we can iterate through it
paths = [x.replace('/', '') for x in searchpath.split('/')]
paths_total = len(paths) - 1
position = 0
# recursive walk while looking for next element in searchpath
root = self.content.rootFolder
while root and position <= paths_total:
change = False
if hasattr(root, 'childEntity'):
for child in root.childEntity:
if child.name == paths[position]:
root = child
position += 1
change = True
break
elif isinstance(root, vim.Datacenter):
if hasattr(root, 'vmFolder'):
if root.vmFolder.name == paths[position]:
root = root.vmFolder
position += 1
change = True
else:
root = None
if not change:
root = None
return root
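# A sketch of how the searchpath is tokenised above (the path is hypothetical):
#   [x.replace('/', '') for x in 'DC1/vm/Folder1'.split('/')]
#   -> ['DC1', 'vm', 'Folder1']
# i.e. the walk matches the datacenter 'DC1', then its 'vm' folder, then 'Folder1'.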
def get_resource_pool(self, cluster=None, host=None, resource_pool=None):
""" Get a resource pool, filter on cluster, esxi_hostname or resource_pool if given """
cluster_name = cluster or self.params.get('cluster', None)
host_name = host or self.params.get('esxi_hostname', None)
resource_pool_name = resource_pool or self.params.get('resource_pool', None)
# get the datacenter object
datacenter = find_obj(self.content, [vim.Datacenter], self.params['datacenter'])
if not datacenter:
self.module.fail_json(msg='Unable to find datacenter "%s"' % self.params['datacenter'])
# if cluster is given, get the cluster object
if cluster_name:
cluster = find_obj(self.content, [vim.ComputeResource], cluster_name, folder=datacenter)
if not cluster:
self.module.fail_json(msg='Unable to find cluster "%s"' % cluster_name)
# if host is given, get the cluster object using the host
elif host_name:
host = find_obj(self.content, [vim.HostSystem], host_name, folder=datacenter)
if not host:
self.module.fail_json(msg='Unable to find host "%s"' % host_name)
cluster = host.parent
else:
cluster = None
# get resource pools limiting search to cluster or datacenter
resource_pool = find_obj(self.content, [vim.ResourcePool], resource_pool_name, folder=cluster or datacenter)
if not resource_pool:
if resource_pool_name:
self.module.fail_json(msg='Unable to find resource_pool "%s"' % resource_pool_name)
else:
self.module.fail_json(msg='Unable to find resource pool, need esxi_hostname, resource_pool, or cluster')
return resource_pool
def deploy_vm(self):
# https://github.com/vmware/pyvmomi-community-samples/blob/master/samples/clone_vm.py
# https://www.vmware.com/support/developer/vc-sdk/visdk25pubs/ReferenceGuide/vim.vm.CloneSpec.html
# https://www.vmware.com/support/developer/vc-sdk/visdk25pubs/ReferenceGuide/vim.vm.ConfigSpec.html
# https://www.vmware.com/support/developer/vc-sdk/visdk41pubs/ApiReference/vim.vm.RelocateSpec.html
# FIXME:
# - static IPs
self.folder = self.params.get('folder', None)
if self.folder is None:
self.module.fail_json(msg="Folder is required parameter while deploying new virtual machine")
# Prepend / if it was missing from the folder path, also strip trailing slashes
if not self.folder.startswith('/'):
self.folder = '/%(folder)s' % self.params
self.folder = self.folder.rstrip('/')
datacenter = self.cache.find_obj(self.content, [vim.Datacenter], self.params['datacenter'])
if datacenter is None:
self.module.fail_json(msg='No datacenter named %(datacenter)s was found' % self.params)
dcpath = compile_folder_path_for_object(datacenter)
# Nested folder does not have trailing /
if not dcpath.endswith('/'):
dcpath += '/'
# Check for full path first in case it was already supplied
if (self.folder.startswith(dcpath + self.params['datacenter'] + '/vm') or
self.folder.startswith(dcpath + '/' + self.params['datacenter'] + '/vm')):
fullpath = self.folder
elif self.folder.startswith('/vm/') or self.folder == '/vm':
fullpath = "%s%s%s" % (dcpath, self.params['datacenter'], self.folder)
elif self.folder.startswith('/'):
fullpath = "%s%s/vm%s" % (dcpath, self.params['datacenter'], self.folder)
else:
fullpath = "%s%s/vm/%s" % (dcpath, self.params['datacenter'], self.folder)
f_obj = self.content.searchIndex.FindByInventoryPath(fullpath)
# abort if no strategy was successful
if f_obj is None:
# Add some debugging values in failure.
details = {
'datacenter': datacenter.name,
'datacenter_path': dcpath,
'folder': self.folder,
'full_search_path': fullpath,
}
self.module.fail_json(msg='No folder %s matched in the search path : %s' % (self.folder, fullpath),
details=details)
destfolder = f_obj
if self.params['template']:
vm_obj = self.get_vm_or_template(template_name=self.params['template'])
if vm_obj is None:
self.module.fail_json(msg="Could not find a template named %(template)s" % self.params)
else:
vm_obj = None
# always get a resource_pool
resource_pool = self.get_resource_pool()
# set the destination datastore for VM & disks
if self.params['datastore']:
# Give precedence to datastore value provided by user
# User may want to deploy VM to specific datastore.
datastore_name = self.params['datastore']
# Check if user has provided datastore cluster first
datastore_cluster = self.cache.find_obj(self.content, [vim.StoragePod], datastore_name)
if datastore_cluster:
# If user specified a datastore cluster, get the recommended datastore
datastore_name = self.get_recommended_datastore(datastore_cluster_obj=datastore_cluster)
# Check if get_recommended_datastore or user specified datastore exists or not
datastore = self.cache.find_obj(self.content, [vim.Datastore], datastore_name)
else:
(datastore, datastore_name) = self.select_datastore(vm_obj)
self.configspec = vim.vm.ConfigSpec()
self.configspec.deviceChange = []
# create the relocation spec
self.relospec = vim.vm.RelocateSpec()
self.relospec.deviceChange = []
self.configure_guestid(vm_obj=vm_obj, vm_creation=True)
self.configure_cpu_and_memory(vm_obj=vm_obj, vm_creation=True)
self.configure_hardware_params(vm_obj=vm_obj)
self.configure_resource_alloc_info(vm_obj=vm_obj)
self.configure_vapp_properties(vm_obj=vm_obj)
self.configure_disks(vm_obj=vm_obj)
self.configure_network(vm_obj=vm_obj)
self.configure_cdrom(vm_obj=vm_obj)
# Find if we need network customizations (find keys in dictionary that requires customizations)
network_changes = False
for nw in self.params['networks']:
for key in nw:
# We don't need customizations for these keys
if key not in ('device_type', 'mac', 'name', 'vlan', 'type', 'start_connected'):
network_changes = True
break
if len(self.params['customization']) > 0 or network_changes or self.params.get('customization_spec') is not None:
self.customize_vm(vm_obj=vm_obj)
clonespec = None
clone_method = None
try:
if self.params['template']:
# Only select specific host when ESXi hostname is provided
if self.params['esxi_hostname']:
self.relospec.host = self.select_host()
self.relospec.datastore = datastore
# Convert disk present in template if is set
if self.params['convert']:
for device in vm_obj.config.hardware.device:
if isinstance(device, vim.vm.device.VirtualDisk):
disk_locator = vim.vm.RelocateSpec.DiskLocator()
disk_locator.diskBackingInfo = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
if self.params['convert'] in ['thin']:
disk_locator.diskBackingInfo.thinProvisioned = True
if self.params['convert'] in ['eagerzeroedthick']:
disk_locator.diskBackingInfo.eagerlyScrub = True
if self.params['convert'] in ['thick']:
disk_locator.diskBackingInfo.diskMode = "persistent"
disk_locator.diskId = device.key
disk_locator.datastore = datastore
self.relospec.disk.append(disk_locator)
# https://www.vmware.com/support/developer/vc-sdk/visdk41pubs/ApiReference/vim.vm.RelocateSpec.html
# > pool: For a clone operation from a template to a virtual machine, this argument is required.
self.relospec.pool = resource_pool
linked_clone = self.params.get('linked_clone')
snapshot_src = self.params.get('snapshot_src', None)
if linked_clone:
if snapshot_src is not None:
self.relospec.diskMoveType = vim.vm.RelocateSpec.DiskMoveOptions.createNewChildDiskBacking
else:
self.module.fail_json(msg="Parameter 'linked_src' and 'snapshot_src' are"
" required together for linked clone operation.")
clonespec = vim.vm.CloneSpec(template=self.params['is_template'], location=self.relospec)
if self.customspec:
clonespec.customization = self.customspec
if snapshot_src is not None:
if vm_obj.snapshot is None:
self.module.fail_json(msg="No snapshots present for virtual machine or template [%(template)s]" % self.params)
snapshot = self.get_snapshots_by_name_recursively(snapshots=vm_obj.snapshot.rootSnapshotList,
snapname=snapshot_src)
if len(snapshot) != 1:
self.module.fail_json(msg='virtual machine "%(template)s" does not contain'
' snapshot named "%(snapshot_src)s"' % self.params)
clonespec.snapshot = snapshot[0].snapshot
clonespec.config = self.configspec
clone_method = 'Clone'
try:
task = vm_obj.Clone(folder=destfolder, name=self.params['name'], spec=clonespec)
except vim.fault.NoPermission as e:
self.module.fail_json(msg="Failed to clone virtual machine %s to folder %s "
"due to permission issue: %s" % (self.params['name'],
destfolder,
to_native(e.msg)))
self.change_detected = True
else:
# ConfigSpec requires a name for VM creation
self.configspec.name = self.params['name']
self.configspec.files = vim.vm.FileInfo(logDirectory=None,
snapshotDirectory=None,
suspendDirectory=None,
vmPathName="[" + datastore_name + "]")
clone_method = 'CreateVM_Task'
try:
task = destfolder.CreateVM_Task(config=self.configspec, pool=resource_pool)
except vmodl.fault.InvalidRequest as e:
self.module.fail_json(msg="Failed to create virtual machine due to invalid configuration "
"parameter %s" % to_native(e.msg))
except vim.fault.RestrictedVersion as e:
self.module.fail_json(msg="Failed to create virtual machine due to "
"product versioning restrictions: %s" % to_native(e.msg))
self.change_detected = True
self.wait_for_task(task)
except TypeError as e:
self.module.fail_json(msg="TypeError was returned, please ensure to give correct inputs. %s" % to_text(e))
if task.info.state == 'error':
# https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2021361
# https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2173
# provide these to the user for debugging
clonespec_json = serialize_spec(clonespec)
configspec_json = serialize_spec(self.configspec)
kwargs = {
'changed': self.change_applied,
'failed': True,
'msg': task.info.error.msg,
'clonespec': clonespec_json,
'configspec': configspec_json,
'clone_method': clone_method
}
return kwargs
else:
# set annotation
vm = task.info.result
if self.params['annotation']:
annotation_spec = vim.vm.ConfigSpec()
annotation_spec.annotation = str(self.params['annotation'])
task = vm.ReconfigVM_Task(annotation_spec)
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'annotation'}
if self.params['customvalues']:
self.customize_customvalues(vm_obj=vm)
if self.params['wait_for_ip_address'] or self.params['wait_for_customization'] or self.params['state'] in ['poweredon', 'restarted']:
set_vm_power_state(self.content, vm, 'poweredon', force=False)
if self.params['wait_for_ip_address']:
self.wait_for_vm_ip(vm)
if self.params['wait_for_customization']:
is_customization_ok = self.wait_for_customization(vm)
if not is_customization_ok:
vm_facts = self.gather_facts(vm)
return {'changed': self.change_applied, 'failed': True, 'instance': vm_facts, 'op': 'customization'}
vm_facts = self.gather_facts(vm)
return {'changed': self.change_applied, 'failed': False, 'instance': vm_facts}
def get_snapshots_by_name_recursively(self, snapshots, snapname):
snap_obj = []
for snapshot in snapshots:
if snapshot.name == snapname:
snap_obj.append(snapshot)
else:
snap_obj = snap_obj + self.get_snapshots_by_name_recursively(snapshot.childSnapshotList, snapname)
return snap_obj
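# Sketch of the recursion above on a hypothetical snapshot tree
# [snapA(childSnapshotList=[snapB])]: searching for 'snapB' visits snapA,
# recurses into snapA.childSnapshotList, and returns [snapB]. Multiple
# matches are possible, which is why callers check len(snapshot) != 1.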
def reconfigure_vm(self):
self.configspec = vim.vm.ConfigSpec()
self.configspec.deviceChange = []
# create the relocation spec
self.relospec = vim.vm.RelocateSpec()
self.relospec.deviceChange = []
self.configure_guestid(vm_obj=self.current_vm_obj)
self.configure_cpu_and_memory(vm_obj=self.current_vm_obj)
self.configure_hardware_params(vm_obj=self.current_vm_obj)
self.configure_disks(vm_obj=self.current_vm_obj)
self.configure_network(vm_obj=self.current_vm_obj)
self.configure_cdrom(vm_obj=self.current_vm_obj)
self.customize_customvalues(vm_obj=self.current_vm_obj)
self.configure_resource_alloc_info(vm_obj=self.current_vm_obj)
self.configure_vapp_properties(vm_obj=self.current_vm_obj)
if self.params['annotation'] and self.current_vm_obj.config.annotation != self.params['annotation']:
self.configspec.annotation = str(self.params['annotation'])
self.change_detected = True
if self.params['resource_pool']:
self.relospec.pool = self.get_resource_pool()
if self.relospec.pool != self.current_vm_obj.resourcePool:
task = self.current_vm_obj.RelocateVM_Task(spec=self.relospec)
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'relocate'}
# Only send VMware task if we see a modification
if self.change_detected:
task = None
try:
task = self.current_vm_obj.ReconfigVM_Task(spec=self.configspec)
except vim.fault.RestrictedVersion as e:
self.module.fail_json(msg="Failed to reconfigure virtual machine due to"
" product versioning restrictions: %s" % to_native(e.msg))
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'reconfig'}
# Rename VM
if self.params['uuid'] and self.params['name'] and self.params['name'] != self.current_vm_obj.config.name:
task = self.current_vm_obj.Rename_Task(self.params['name'])
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'rename'}
# Mark VM as Template
if self.params['is_template'] and not self.current_vm_obj.config.template:
try:
self.current_vm_obj.MarkAsTemplate()
self.change_applied = True
except vmodl.fault.NotSupported as e:
self.module.fail_json(msg="Failed to mark virtual machine [%s] "
"as template: %s" % (self.params['name'], e.msg))
# Mark Template as VM
elif not self.params['is_template'] and self.current_vm_obj.config.template:
resource_pool = self.get_resource_pool()
kwargs = dict(pool=resource_pool)
if self.params.get('esxi_hostname', None):
host_system_obj = self.select_host()
kwargs.update(host=host_system_obj)
try:
self.current_vm_obj.MarkAsVirtualMachine(**kwargs)
self.change_applied = True
except vim.fault.InvalidState as invalid_state:
self.module.fail_json(msg="Virtual machine is not marked"
" as template : %s" % to_native(invalid_state.msg))
except vim.fault.InvalidDatastore as invalid_ds:
self.module.fail_json(msg="Converting template to virtual machine"
" operation cannot be performed on the"
" target datastores: %s" % to_native(invalid_ds.msg))
except vim.fault.CannotAccessVmComponent as cannot_access:
self.module.fail_json(msg="Failed to convert template to virtual machine"
" as operation unable access virtual machine"
" component: %s" % to_native(cannot_access.msg))
except vmodl.fault.InvalidArgument as invalid_argument:
self.module.fail_json(msg="Failed to convert template to virtual machine"
" due to : %s" % to_native(invalid_argument.msg))
except Exception as generic_exc:
self.module.fail_json(msg="Failed to convert template to virtual machine"
" due to generic error : %s" % to_native(generic_exc))
# Automatically update VMware UUID when converting template to VM.
# This avoids an interactive prompt during VM startup.
uuid_action = [x for x in self.current_vm_obj.config.extraConfig if x.key == "uuid.action"]
if not uuid_action:
uuid_action_opt = vim.option.OptionValue()
uuid_action_opt.key = "uuid.action"
uuid_action_opt.value = "create"
self.configspec.extraConfig.append(uuid_action_opt)
self.change_detected = True
# add customize existing VM after VM re-configure
if 'existing_vm' in self.params['customization'] and self.params['customization']['existing_vm']:
if self.current_vm_obj.config.template:
self.module.fail_json(msg="VM is template, not support guest OS customization.")
if self.current_vm_obj.runtime.powerState != vim.VirtualMachinePowerState.poweredOff:
self.module.fail_json(msg="VM is not in poweroff state, can not do guest OS customization.")
cus_result = self.customize_exist_vm()
if cus_result['failed']:
return cus_result
vm_facts = self.gather_facts(self.current_vm_obj)
return {'changed': self.change_applied, 'failed': False, 'instance': vm_facts}
def customize_exist_vm(self):
task = None
# Find if we need network customizations (find keys in dictionary that requires customizations)
network_changes = False
for nw in self.params['networks']:
for key in nw:
# We don't need customizations for these keys
if key not in ('device_type', 'mac', 'name', 'vlan', 'type', 'start_connected'):
network_changes = True
break
if len(self.params['customization']) > 1 or network_changes or self.params.get('customization_spec'):
self.customize_vm(vm_obj=self.current_vm_obj)
try:
task = self.current_vm_obj.CustomizeVM_Task(self.customspec)
except vim.fault.CustomizationFault as e:
self.module.fail_json(msg="Failed to customize virtual machine due to CustomizationFault: %s" % to_native(e.msg))
except vim.fault.RuntimeFault as e:
self.module.fail_json(msg="Failed to customize virtual machine due to RuntimeFault: %s" % to_native(e.msg))
except Exception as e:
self.module.fail_json(msg="Failed to customize virtual machine due to fault: %s" % to_native(e.msg))
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'customize_exist'}
if self.params['wait_for_customization']:
set_vm_power_state(self.content, self.current_vm_obj, 'poweredon', force=False)
is_customization_ok = self.wait_for_customization(self.current_vm_obj)
if not is_customization_ok:
return {'changed': self.change_applied, 'failed': True, 'op': 'wait_for_customize_exist'}
return {'changed': self.change_applied, 'failed': False}
def wait_for_task(self, task, poll_interval=1):
"""
Wait for a VMware task to complete. Terminal states are 'error' and 'success'.
Inputs:
- task: the task to wait for
- poll_interval: polling interval to check the task, in seconds
Modifies:
- self.change_applied
"""
# https://www.vmware.com/support/developer/vc-sdk/visdk25pubs/ReferenceGuide/vim.Task.html
# https://www.vmware.com/support/developer/vc-sdk/visdk25pubs/ReferenceGuide/vim.TaskInfo.html
# https://github.com/virtdevninja/pyvmomi-community-samples/blob/master/samples/tools/tasks.py
while task.info.state not in ['error', 'success']:
time.sleep(poll_interval)
self.change_applied = self.change_applied or task.info.state == 'success'
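# Typical call pattern used throughout this module, e.g.:
#   task = vm.ReconfigVM_Task(spec=self.configspec)
#   self.wait_for_task(task)
#   if task.info.state == 'error':
#       ...report task.info.error.msg to the user...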
def wait_for_vm_ip(self, vm, poll=100, sleep=5):
ips = None
facts = {}
thispoll = 0
while not ips and thispoll <= poll:
newvm = self.get_vm()
facts = self.gather_facts(newvm)
if facts['ipv4'] or facts['ipv6']:
ips = True
else:
time.sleep(sleep)
thispoll += 1
return facts
def get_vm_events(self, vm, eventTypeIdList):
byEntity = vim.event.EventFilterSpec.ByEntity(entity=vm, recursion="self")
filterSpec = vim.event.EventFilterSpec(entity=byEntity, eventTypeId=eventTypeIdList)
eventManager = self.content.eventManager
return eventManager.QueryEvent(filterSpec)
def wait_for_customization(self, vm, poll=10000, sleep=10):
thispoll = 0
while thispoll <= poll:
eventStarted = self.get_vm_events(vm, ['CustomizationStartedEvent'])
if len(eventStarted):
thispoll = 0
while thispoll <= poll:
eventsFinishedResult = self.get_vm_events(vm, ['CustomizationSucceeded', 'CustomizationFailed'])
if len(eventsFinishedResult):
if not isinstance(eventsFinishedResult[0], vim.event.CustomizationSucceeded):
self.module.fail_json(msg='Customization failed with error {0}:\n{1}'.format(
eventsFinishedResult[0]._wsdlName, eventsFinishedResult[0].fullFormattedMessage))
return False
break
else:
time.sleep(sleep)
thispoll += 1
return True
else:
time.sleep(sleep)
thispoll += 1
self.module.fail_json(msg='Waiting for customizations timed out.')
return False
def main():
argument_spec = vmware_argument_spec()
argument_spec.update(
state=dict(type='str', default='present',
choices=['absent', 'poweredoff', 'poweredon', 'present', 'rebootguest', 'restarted', 'shutdownguest', 'suspended']),
template=dict(type='str', aliases=['template_src']),
is_template=dict(type='bool', default=False),
annotation=dict(type='str', aliases=['notes']),
customvalues=dict(type='list', default=[]),
name=dict(type='str'),
name_match=dict(type='str', choices=['first', 'last'], default='first'),
uuid=dict(type='str'),
use_instance_uuid=dict(type='bool', default=False),
folder=dict(type='str'),
guest_id=dict(type='str'),
disk=dict(type='list', default=[]),
cdrom=dict(type='dict', default={}),
hardware=dict(type='dict', default={}),
force=dict(type='bool', default=False),
datacenter=dict(type='str', default='ha-datacenter'),
esxi_hostname=dict(type='str'),
cluster=dict(type='str'),
wait_for_ip_address=dict(type='bool', default=False),
state_change_timeout=dict(type='int', default=0),
snapshot_src=dict(type='str'),
linked_clone=dict(type='bool', default=False),
networks=dict(type='list', default=[]),
resource_pool=dict(type='str'),
customization=dict(type='dict', default={}, no_log=True),
customization_spec=dict(type='str', default=None),
wait_for_customization=dict(type='bool', default=False),
vapp_properties=dict(type='list', default=[]),
datastore=dict(type='str'),
convert=dict(type='str', choices=['thin', 'thick', 'eagerzeroedthick']),
)
module = AnsibleModule(argument_spec=argument_spec,
supports_check_mode=True,
mutually_exclusive=[
['cluster', 'esxi_hostname'],
],
required_one_of=[
['name', 'uuid'],
],
)
result = {'failed': False, 'changed': False}
pyv = PyVmomiHelper(module)
# Check if the VM exists before continuing
vm = pyv.get_vm()
# VM already exists
if vm:
if module.params['state'] == 'absent':
# destroy it
if module.check_mode:
result.update(
vm_name=vm.name,
changed=True,
current_powerstate=vm.summary.runtime.powerState.lower(),
desired_operation='remove_vm',
)
module.exit_json(**result)
if module.params['force']:
# has to be poweredoff first
set_vm_power_state(pyv.content, vm, 'poweredoff', module.params['force'])
result = pyv.remove_vm(vm)
elif module.params['state'] == 'present':
if module.check_mode:
result.update(
vm_name=vm.name,
changed=True,
desired_operation='reconfigure_vm',
)
module.exit_json(**result)
result = pyv.reconfigure_vm()
elif module.params['state'] in ['poweredon', 'poweredoff', 'restarted', 'suspended', 'shutdownguest', 'rebootguest']:
if module.check_mode:
result.update(
vm_name=vm.name,
changed=True,
current_powerstate=vm.summary.runtime.powerState.lower(),
desired_operation='set_vm_power_state',
)
module.exit_json(**result)
# set powerstate
tmp_result = set_vm_power_state(pyv.content, vm, module.params['state'], module.params['force'], module.params['state_change_timeout'])
if tmp_result['changed']:
result["changed"] = True
if module.params['state'] in ['poweredon', 'restarted', 'rebootguest'] and module.params['wait_for_ip_address']:
wait_result = wait_for_vm_ip(pyv.content, vm)
if not wait_result:
module.fail_json(msg='Waiting for IP address timed out')
tmp_result['instance'] = wait_result
if not tmp_result["failed"]:
result["failed"] = False
result['instance'] = tmp_result['instance']
if tmp_result["failed"]:
result["failed"] = True
result["msg"] = tmp_result["msg"]
else:
# This should not happen
raise AssertionError()
# VM doesn't exist
else:
if module.params['state'] in ['poweredon', 'poweredoff', 'present', 'restarted', 'suspended']:
if module.check_mode:
result.update(
changed=True,
desired_operation='deploy_vm',
)
module.exit_json(**result)
result = pyv.deploy_vm()
if result['failed']:
module.fail_json(msg='Failed to create a virtual machine : %s' % result['msg'])
if result['failed']:
module.fail_json(**result)
else:
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,734 |
ansible-galaxy login -c CERT error
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
`ansible-galaxy login --ignore-certs` is not respected. Further, the traceback for the error when a cert fails isn't clear.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-galaxy login https://github.com/ansible/ansible/blob/devel/lib/ansible/galaxy/login.py#L91
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
ansible 2.9.0.dev0
config file = None
configured module search path = [u'/home/meyers/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/meyers/ansible/virtualenv/ansible-dev2/local/lib/python2.7/site-packages/ansible
executable location = /home/meyers/ansible/virtualenv/ansible-dev2/bin/ansible
python version = 2.7.16 (default, Apr 6 2019, 01:42:57) [GCC 8.3.0]
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
```
cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=19.04
DISTRIB_CODENAME=disco
DISTRIB_DESCRIPTION="Ubuntu 19.04"
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```
ansible-galaxy login -c
```
You can enter junk for the username/password and the error will trigger.
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
```
ERROR! Bad credentials
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```
kjfldjansible-galaxy 2.9.0.dev0
config file = None
configured module search path = [u'/home/meyers/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/meyers/ansible/virtualenv/ansible-dev2/local/lib/python2.7/site-packages/ansible
executable location = /home/meyers/ansible/virtualenv/ansible-dev2/bin/ansible-galaxy
python version = 2.7.16 (default, Apr 6 2019, 01:42:57) [GCC 8.3.0]
No config file found; using defaults
Opened /home/meyers/.ansible_galaxy
We need your GitHub login to identify you.
This information will not be sent to Galaxy, only to api.github.com.
The password will not be displayed.
Use --github-token if you do not want to enter your password.
GitHub Username: fldlfd
Password for dkjfldjfldlfd:
ca path None
Force False
ERROR! Unexpected Exception, this is probably a bug: Incorrect padding
the full traceback was:
Traceback (most recent call last):
File "/home/meyers/ansible/virtualenv/ansible-dev2/bin/ansible-galaxy", line 111, in <module>
exit_code = cli.run()
File "/home/meyers/ansible/virtualenv/ansible-dev2/local/lib/python2.7/site-packages/ansible/cli/galaxy.py", line 269, in run
context.CLIARGS['func']()
File "/home/meyers/ansible/virtualenv/ansible-dev2/local/lib/python2.7/site-packages/ansible/cli/galaxy.py", line 888, in execute_login
github_token = login.create_github_token()
File "/home/meyers/ansible/virtualenv/ansible-dev2/local/lib/python2.7/site-packages/ansible/galaxy/login.py", line 100, in create_github_token
self.remove_github_token()
File "/home/meyers/ansible/virtualenv/ansible-dev2/local/lib/python2.7/site-packages/ansible/galaxy/login.py", line 81, in remove_github_token
url_password=self.github_password, force_basic_auth=True,))
File "/home/meyers/ansible/virtualenv/ansible-dev2/local/lib/python2.7/site-packages/ansible/module_utils/urls.py", line 1382, in open_url
use_gssapi=use_gssapi, unix_socket=unix_socket, ca_path=ca_path)
File "/home/meyers/ansible/virtualenv/ansible-dev2/local/lib/python2.7/site-packages/ansible/module_utils/urls.py", line 1235, in open
tmp_ca_path, cadata, paths_checked = ssl_handler.get_ca_certs()
File "/home/meyers/ansible/virtualenv/ansible-dev2/local/lib/python2.7/site-packages/ansible/module_utils/urls.py", line 870, in get_ca_certs
to_native(b_cert, errors='surrogate_or_strict')
File "/usr/lib/python2.7/ssl.py", line 988, in PEM_cert_to_DER_cert
return base64.decodestring(d.encode('ASCII', 'strict'))
File "/usr/lib/python2.7/base64.py", line 328, in decodestring
return binascii.a2b_base64(s)
Error: Incorrect padding
```
|
https://github.com/ansible/ansible/issues/59734
|
https://github.com/ansible/ansible/pull/59959
|
acbffce0796ff8e28ac5646ed8b3fd4e19232223
|
94f5e2d9ed964d32271e193445f40557bb5892b6
| 2019-07-29T17:08:45Z |
python
| 2019-08-06T19:59:34Z |
lib/ansible/galaxy/login.py
|
########################################################################
#
# (C) 2015, Chris Houseknecht <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
########################################################################
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import getpass
import json
from ansible.errors import AnsibleError, AnsibleOptionsError
from ansible.module_utils.six.moves import input
from ansible.module_utils.six.moves.urllib.parse import quote as urlquote, urlparse
from ansible.module_utils.six.moves.urllib.error import HTTPError
from ansible.module_utils.urls import open_url
from ansible.utils.color import stringc
from ansible.utils.display import Display
display = Display()
class GalaxyLogin(object):
''' Class to handle authenticating user with Galaxy API prior to performing CUD operations '''
GITHUB_AUTH = 'https://api.github.com/authorizations'
def __init__(self, galaxy, github_token=None):
self.galaxy = galaxy
self.github_username = None
self.github_password = None
if github_token is None:
self.get_credentials()
def get_credentials(self):
display.display(u'\n\n' + "We need your " + stringc("GitHub login", 'bright cyan') +
" to identify you.", screen_only=True)
display.display("This information will " + stringc("not be sent to Galaxy", 'bright cyan') +
", only to " + stringc("api.github.com.", "yellow"), screen_only=True)
display.display("The password will not be displayed." + u'\n\n', screen_only=True)
display.display("Use " + stringc("--github-token", 'yellow') +
" if you do not want to enter your password." + u'\n\n', screen_only=True)
try:
self.github_username = input("GitHub Username: ")
except Exception:
pass
try:
self.github_password = getpass.getpass("Password for %s: " % self.github_username)
except Exception:
pass
if not self.github_username or not self.github_password:
raise AnsibleError("Invalid GitHub credentials. Username and password are required.")
def remove_github_token(self):
'''
If for some reason an ansible-galaxy token was left from a prior login, remove it. We cannot
retrieve the token after creation, so we are forced to create a new one.
'''
try:
tokens = json.load(open_url(self.GITHUB_AUTH, url_username=self.github_username,
url_password=self.github_password, force_basic_auth=True,))
except HTTPError as e:
res = json.load(e)
raise AnsibleError(res['message'])
for token in tokens:
if token['note'] == 'ansible-galaxy login':
display.vvvvv('removing token: %s' % token['token_last_eight'])
try:
open_url('https://api.github.com/authorizations/%d' % token['id'], url_username=self.github_username,
url_password=self.github_password, method='DELETE', force_basic_auth=True)
except HTTPError as e:
res = json.load(e)
raise AnsibleError(res['message'])
def create_github_token(self):
'''
Create a personal authorization token with a note of 'ansible-galaxy login'
'''
self.remove_github_token()
args = json.dumps({"scopes": ["public_repo"], "note": "ansible-galaxy login"})
try:
data = json.load(open_url(self.GITHUB_AUTH, url_username=self.github_username,
url_password=self.github_password, force_basic_auth=True, data=args))
except HTTPError as e:
res = json.load(e)
raise AnsibleError(res['message'])
return data['token']
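# Sketch of the exchange above (GitHub OAuth Authorizations API, basic auth):
#   request body:  {"scopes": ["public_repo"], "note": "ansible-galaxy login"}
#   response body: JSON including a "token" field, which is returned here.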
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,734 |
ansible-galaxy login -c CERT error
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
`ansible-galaxy login --ignore-certs` is not respected. Further, the traceback for the error when a cert fails isn't clear.
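For context, honoring an `--ignore-certs` style flag usually comes down to building an SSL context that skips verification. A minimal sketch under that assumption — `make_context` and its parameters are illustrative here, not Ansible's actual API:

```python
import ssl

def make_context(validate_certs=True, ca_path=None):
    # Hypothetical helper (not Ansible's API): build an SSLContext that
    # honors an --ignore-certs style flag.
    ctx = ssl.create_default_context(cafile=ca_path if validate_certs else None)
    if not validate_certs:
        # check_hostname must be disabled before dropping verify_mode,
        # otherwise CPython raises ValueError.
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
    return ctx

ctx = make_context(validate_certs=False)
print(ctx.verify_mode == ssl.CERT_NONE)  # True
```

The point of the report is that the flag never reaches this layer, so verification still runs (and then fails, as shown below).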
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-galaxy login https://github.com/ansible/ansible/blob/devel/lib/ansible/galaxy/login.py#L91
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
ansible 2.9.0.dev0
config file = None
configured module search path = [u'/home/meyers/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/meyers/ansible/virtualenv/ansible-dev2/local/lib/python2.7/site-packages/ansible
executable location = /home/meyers/ansible/virtualenv/ansible-dev2/bin/ansible
python version = 2.7.16 (default, Apr 6 2019, 01:42:57) [GCC 8.3.0]
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
```
cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=19.04
DISTRIB_CODENAME=disco
DISTRIB_DESCRIPTION="Ubuntu 19.04"
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```
ansible-galaxy login -c
```
You can enter junk for the username/password and the error will trigger.
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
```
ERROR! Bad credentials
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```
kjfldjansible-galaxy 2.9.0.dev0
config file = None
configured module search path = [u'/home/meyers/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/meyers/ansible/virtualenv/ansible-dev2/local/lib/python2.7/site-packages/ansible
executable location = /home/meyers/ansible/virtualenv/ansible-dev2/bin/ansible-galaxy
python version = 2.7.16 (default, Apr 6 2019, 01:42:57) [GCC 8.3.0]
No config file found; using defaults
Opened /home/meyers/.ansible_galaxy
We need your GitHub login to identify you.
This information will not be sent to Galaxy, only to api.github.com.
The password will not be displayed.
Use --github-token if you do not want to enter your password.
GitHub Username: fldlfd
Password for dkjfldjfldlfd:
ca path None
Force False
ERROR! Unexpected Exception, this is probably a bug: Incorrect padding
the full traceback was:
Traceback (most recent call last):
File "/home/meyers/ansible/virtualenv/ansible-dev2/bin/ansible-galaxy", line 111, in <module>
exit_code = cli.run()
File "/home/meyers/ansible/virtualenv/ansible-dev2/local/lib/python2.7/site-packages/ansible/cli/galaxy.py", line 269, in run
context.CLIARGS['func']()
File "/home/meyers/ansible/virtualenv/ansible-dev2/local/lib/python2.7/site-packages/ansible/cli/galaxy.py", line 888, in execute_login
github_token = login.create_github_token()
File "/home/meyers/ansible/virtualenv/ansible-dev2/local/lib/python2.7/site-packages/ansible/galaxy/login.py", line 100, in create_github_token
self.remove_github_token()
File "/home/meyers/ansible/virtualenv/ansible-dev2/local/lib/python2.7/site-packages/ansible/galaxy/login.py", line 81, in remove_github_token
url_password=self.github_password, force_basic_auth=True,))
File "/home/meyers/ansible/virtualenv/ansible-dev2/local/lib/python2.7/site-packages/ansible/module_utils/urls.py", line 1382, in open_url
use_gssapi=use_gssapi, unix_socket=unix_socket, ca_path=ca_path)
File "/home/meyers/ansible/virtualenv/ansible-dev2/local/lib/python2.7/site-packages/ansible/module_utils/urls.py", line 1235, in open
tmp_ca_path, cadata, paths_checked = ssl_handler.get_ca_certs()
File "/home/meyers/ansible/virtualenv/ansible-dev2/local/lib/python2.7/site-packages/ansible/module_utils/urls.py", line 870, in get_ca_certs
to_native(b_cert, errors='surrogate_or_strict')
File "/usr/lib/python2.7/ssl.py", line 988, in PEM_cert_to_DER_cert
return base64.decodestring(d.encode('ASCII', 'strict'))
File "/usr/lib/python2.7/base64.py", line 328, in decodestring
return binascii.a2b_base64(s)
Error: Incorrect padding
```
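The traceback dies inside `ssl.PEM_cert_to_DER_cert` with "Incorrect padding", which is consistent with feeding it a CA file containing more than one certificate: the function handles exactly one `BEGIN`/`END` block, so the inner markers of a bundle corrupt the base64 payload. A minimal sketch using dummy DER bytes (not a real certificate) to show the split-then-convert approach:

```python
import ssl

der = b"\x01\x02\x03\x04"  # stand-in payload, not a real certificate
bundle = ssl.DER_cert_to_PEM_cert(der) * 2  # a two-cert "CA bundle"

# ssl.PEM_cert_to_DER_cert() expects exactly one BEGIN/END block, so a
# bundle has to be split into individual blocks before converting:
header = "-----BEGIN CERTIFICATE-----"
blocks = [header + part for part in bundle.split(header)[1:]]
ders = [ssl.PEM_cert_to_DER_cert(b) for b in blocks]
print(len(ders), ders[0] == der)  # 2 True
```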
|
https://github.com/ansible/ansible/issues/59734
|
https://github.com/ansible/ansible/pull/59959
|
acbffce0796ff8e28ac5646ed8b3fd4e19232223
|
94f5e2d9ed964d32271e193445f40557bb5892b6
| 2019-07-29T17:08:45Z |
python
| 2019-08-06T19:59:34Z |
lib/ansible/module_utils/urls.py
|
# This code is part of Ansible, but is an independent component.
# This particular file snippet, and this file snippet only, is BSD licensed.
# Modules you write using this snippet, which is embedded dynamically by Ansible
# still belong to the author of the module, and may assign their own license
# to the complete work.
#
# Copyright (c), Michael DeHaan <[email protected]>, 2012-2013
# Copyright (c), Toshio Kuratomi <[email protected]>, 2015
#
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
#
# The match_hostname function and supporting code is under the terms and
# conditions of the Python Software Foundation License. They were taken from
# the Python3 standard library and adapted for use in Python2. See comments in the
# source for which code precisely is under this License.
#
# PSF License (see licenses/PSF-license.txt or https://opensource.org/licenses/Python-2.0)
'''
The **urls** utils module offers a replacement for the urllib2 python library.
urllib2 is the python stdlib way to retrieve files from the Internet but it
lacks some security features (around verifying SSL certificates) that users
should care about in most situations. Using the functions in this module corrects
deficiencies in the urllib2 module wherever possible.
There are also third-party libraries (for instance, requests) which can be used
to replace urllib2 with a more secure library. However, all third party libraries
require that the library be installed on the managed machine. That is an extra step
for users making use of a module. If possible, avoid third party libraries by using
this code instead.
'''
import atexit
import base64
import functools
import netrc
import os
import platform
import re
import socket
import sys
import tempfile
import traceback
from contextlib import contextmanager
try:
import httplib
except ImportError:
# Python 3
import http.client as httplib
import ansible.module_utils.six.moves.http_cookiejar as cookiejar
import ansible.module_utils.six.moves.urllib.request as urllib_request
import ansible.module_utils.six.moves.urllib.error as urllib_error
from ansible.module_utils.six import PY3
from ansible.module_utils.basic import get_distribution
from ansible.module_utils._text import to_bytes, to_native, to_text
try:
# python3
import urllib.request as urllib_request
from urllib.request import AbstractHTTPHandler
except ImportError:
# python2
import urllib2 as urllib_request
from urllib2 import AbstractHTTPHandler
urllib_request.HTTPRedirectHandler.http_error_308 = urllib_request.HTTPRedirectHandler.http_error_307
try:
from ansible.module_utils.six.moves.urllib.parse import urlparse, urlunparse
HAS_URLPARSE = True
except Exception:
HAS_URLPARSE = False
try:
import ssl
HAS_SSL = True
except Exception:
HAS_SSL = False
try:
# SNI Handling needs python2.7.9's SSLContext
from ssl import create_default_context, SSLContext
HAS_SSLCONTEXT = True
except ImportError:
HAS_SSLCONTEXT = False
# SNI Handling for python < 2.7.9 with urllib3 support
try:
# urllib3>=1.15
HAS_URLLIB3_SSL_WRAP_SOCKET = False
try:
from urllib3.contrib.pyopenssl import PyOpenSSLContext
except ImportError:
from requests.packages.urllib3.contrib.pyopenssl import PyOpenSSLContext
HAS_URLLIB3_PYOPENSSLCONTEXT = True
except ImportError:
# urllib3<1.15,>=1.6
HAS_URLLIB3_PYOPENSSLCONTEXT = False
try:
try:
from urllib3.contrib.pyopenssl import ssl_wrap_socket
except ImportError:
from requests.packages.urllib3.contrib.pyopenssl import ssl_wrap_socket
HAS_URLLIB3_SSL_WRAP_SOCKET = True
except ImportError:
pass
# Select a protocol that includes all secure tls protocols
# Exclude insecure ssl protocols if possible
if HAS_SSL:
# If we can't find extra tls methods, ssl.PROTOCOL_TLSv1 is sufficient
PROTOCOL = ssl.PROTOCOL_TLSv1
if not HAS_SSLCONTEXT and HAS_SSL:
try:
import ctypes
import ctypes.util
except ImportError:
# python 2.4 (likely rhel5 which doesn't have tls1.1 support in its openssl)
pass
else:
libssl_name = ctypes.util.find_library('ssl')
libssl = ctypes.CDLL(libssl_name)
for method in ('TLSv1_1_method', 'TLSv1_2_method'):
try:
libssl[method]
# Found something - we'll let openssl autonegotiate and hope
# the server has disabled sslv2 and 3. best we can do.
PROTOCOL = ssl.PROTOCOL_SSLv23
break
except AttributeError:
pass
del libssl
# The following makes it easier for us to script updates of the bundled backports.ssl_match_hostname
# The bundled backports.ssl_match_hostname should really be moved into its own file for processing
_BUNDLED_METADATA = {"pypi_name": "backports.ssl_match_hostname", "version": "3.7.0.1"}
LOADED_VERIFY_LOCATIONS = set()
HAS_MATCH_HOSTNAME = True
try:
from ssl import match_hostname, CertificateError
except ImportError:
try:
from backports.ssl_match_hostname import match_hostname, CertificateError
except ImportError:
HAS_MATCH_HOSTNAME = False
try:
import urllib_gssapi
HAS_GSSAPI = True
except ImportError:
HAS_GSSAPI = False
if not HAS_MATCH_HOSTNAME:
# The following block of code is under the terms and conditions of the
# Python Software Foundation License
"""The match_hostname() function from Python 3.4, essential when using SSL."""
try:
# Divergence: Python-3.7+'s _ssl has this exception type but older Pythons do not
from _ssl import SSLCertVerificationError
CertificateError = SSLCertVerificationError
except ImportError:
class CertificateError(ValueError):
pass
def _dnsname_match(dn, hostname):
"""Matching according to RFC 6125, section 6.4.3
- Hostnames are compared lower case.
- For IDNA, both dn and hostname must be encoded as IDN A-label (ACE).
- Partial wildcards like 'www*.example.org', multiple wildcards, sole
wildcard or wildcards in labels other than the left-most label are not
supported and a CertificateError is raised.
- A wildcard must match at least one character.
"""
if not dn:
return False
wildcards = dn.count('*')
# speed up common case w/o wildcards
if not wildcards:
return dn.lower() == hostname.lower()
if wildcards > 1:
# Divergence .format() to percent formatting for Python < 2.6
raise CertificateError(
"too many wildcards in certificate DNS name: %s" % repr(dn))
dn_leftmost, sep, dn_remainder = dn.partition('.')
if '*' in dn_remainder:
# Only match wildcard in leftmost segment.
# Divergence .format() to percent formatting for Python < 2.6
raise CertificateError(
"wildcard can only be present in the leftmost label: "
"%s." % repr(dn))
if not sep:
# no right side
# Divergence .format() to percent formatting for Python < 2.6
raise CertificateError(
"sole wildcard without additional labels are not support: "
"%s." % repr(dn))
if dn_leftmost != '*':
# no partial wildcard matching
# Divergence .format() to percent formatting for Python < 2.6
raise CertificateError(
"partial wildcards in leftmost label are not supported: "
"%s." % repr(dn))
hostname_leftmost, sep, hostname_remainder = hostname.partition('.')
if not hostname_leftmost or not sep:
# wildcard must match at least one char
return False
return dn_remainder.lower() == hostname_remainder.lower()
def _inet_paton(ipname):
"""Try to convert an IP address to packed binary form
Supports IPv4 addresses on all platforms and IPv6 on platforms with IPv6
support.
"""
# inet_aton() also accepts strings like '1'
# Divergence: We make sure we have native string type for all python versions
try:
b_ipname = to_bytes(ipname, errors='strict')
except UnicodeError:
raise ValueError("%s must be an all-ascii string." % repr(ipname))
# Set ipname in native string format
if sys.version_info < (3,):
n_ipname = b_ipname
else:
n_ipname = ipname
if n_ipname.count('.') == 3:
try:
return socket.inet_aton(n_ipname)
# Divergence: OSError on late python3. socket.error earlier.
# Null bytes generate ValueError on python3(we want to raise
# ValueError anyway), TypeError # earlier
except (OSError, socket.error, TypeError):
pass
try:
return socket.inet_pton(socket.AF_INET6, n_ipname)
# Divergence: OSError on late python3. socket.error earlier.
# Null bytes generate ValueError on python3(we want to raise
# ValueError anyway), TypeError # earlier
except (OSError, socket.error, TypeError):
# Divergence .format() to percent formatting for Python < 2.6
raise ValueError("%s is neither an IPv4 nor an IP6 "
"address." % repr(ipname))
except AttributeError:
# AF_INET6 not available
pass
# Divergence .format() to percent formatting for Python < 2.6
raise ValueError("%s is not an IPv4 address." % repr(ipname))
def _ipaddress_match(ipname, host_ip):
"""Exact matching of IP addresses.
RFC 6125 explicitly doesn't define an algorithm for this
(section 1.7.2 - "Out of Scope").
"""
# OpenSSL may add a trailing newline to a subjectAltName's IP address
ip = _inet_paton(ipname.rstrip())
return ip == host_ip
def match_hostname(cert, hostname):
"""Verify that *cert* (in decoded format as returned by
SSLSocket.getpeercert()) matches the *hostname*. RFC 2818 and RFC 6125
rules are followed.
The function matches IP addresses rather than dNSNames if hostname is a
valid ipaddress string. IPv4 addresses are supported on all platforms.
IPv6 addresses are supported on platforms with IPv6 support (AF_INET6
and inet_pton).
CertificateError is raised on failure. On success, the function
returns nothing.
"""
if not cert:
raise ValueError("empty or no certificate, match_hostname needs a "
"SSL socket or SSL context with either "
"CERT_OPTIONAL or CERT_REQUIRED")
try:
# Divergence: Deal with hostname as bytes
host_ip = _inet_paton(to_text(hostname, errors='strict'))
except UnicodeError:
# Divergence: Deal with hostname as byte strings.
# IP addresses should be all ascii, so we consider it not
# an IP address if this fails
host_ip = None
except ValueError:
# Not an IP address (common case)
host_ip = None
dnsnames = []
san = cert.get('subjectAltName', ())
for key, value in san:
if key == 'DNS':
if host_ip is None and _dnsname_match(value, hostname):
return
dnsnames.append(value)
elif key == 'IP Address':
if host_ip is not None and _ipaddress_match(value, host_ip):
return
dnsnames.append(value)
if not dnsnames:
# The subject is only checked when there is no dNSName entry
# in subjectAltName
for sub in cert.get('subject', ()):
for key, value in sub:
# XXX according to RFC 2818, the most specific Common Name
# must be used.
if key == 'commonName':
if _dnsname_match(value, hostname):
return
dnsnames.append(value)
if len(dnsnames) > 1:
raise CertificateError("hostname %r doesn't match either of %s" % (hostname, ', '.join(map(repr, dnsnames))))
elif len(dnsnames) == 1:
raise CertificateError("hostname %r doesn't match %r" % (hostname, dnsnames[0]))
else:
raise CertificateError("no appropriate commonName or subjectAltName fields were found")
# End of Python Software Foundation Licensed code
HAS_MATCH_HOSTNAME = True
# This is a dummy cacert provided for macOS since you need at least 1
# ca cert, regardless of validity, for Python on macOS to use the
# keychain functionality in OpenSSL for validating SSL certificates.
# See: http://mercurial.selenic.com/wiki/CACertificates#Mac_OS_X_10.6_and_higher
b_DUMMY_CA_CERT = b"""-----BEGIN CERTIFICATE-----
MIICvDCCAiWgAwIBAgIJAO8E12S7/qEpMA0GCSqGSIb3DQEBBQUAMEkxCzAJBgNV
BAYTAlVTMRcwFQYDVQQIEw5Ob3J0aCBDYXJvbGluYTEPMA0GA1UEBxMGRHVyaGFt
MRAwDgYDVQQKEwdBbnNpYmxlMB4XDTE0MDMxODIyMDAyMloXDTI0MDMxNTIyMDAy
MlowSTELMAkGA1UEBhMCVVMxFzAVBgNVBAgTDk5vcnRoIENhcm9saW5hMQ8wDQYD
VQQHEwZEdXJoYW0xEDAOBgNVBAoTB0Fuc2libGUwgZ8wDQYJKoZIhvcNAQEBBQAD
gY0AMIGJAoGBANtvpPq3IlNlRbCHhZAcP6WCzhc5RbsDqyh1zrkmLi0GwcQ3z/r9
gaWfQBYhHpobK2Tiq11TfraHeNB3/VfNImjZcGpN8Fl3MWwu7LfVkJy3gNNnxkA1
4Go0/LmIvRFHhbzgfuo9NFgjPmmab9eqXJceqZIlz2C8xA7EeG7ku0+vAgMBAAGj
gaswgagwHQYDVR0OBBYEFPnN1nPRqNDXGlCqCvdZchRNi/FaMHkGA1UdIwRyMHCA
FPnN1nPRqNDXGlCqCvdZchRNi/FaoU2kSzBJMQswCQYDVQQGEwJVUzEXMBUGA1UE
CBMOTm9ydGggQ2Fyb2xpbmExDzANBgNVBAcTBkR1cmhhbTEQMA4GA1UEChMHQW5z
aWJsZYIJAO8E12S7/qEpMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEFBQADgYEA
MUB80IR6knq9K/tY+hvPsZer6eFMzO3JGkRFBh2kn6JdMDnhYGX7AXVHGflrwNQH
qFy+aenWXsC0ZvrikFxbQnX8GVtDADtVznxOi7XzFw7JOxdsVrpXgSN0eh0aMzvV
zKPZsZ2miVGclicJHzm5q080b1p/sZtuKIEZk6vZqEg=
-----END CERTIFICATE-----
"""
#
# Exceptions
#
class ConnectionError(Exception):
"""Failed to connect to the server"""
pass
class ProxyError(ConnectionError):
"""Failure to connect because of a proxy"""
pass
class SSLValidationError(ConnectionError):
"""Failure to connect due to SSL validation failing"""
pass
class NoSSLError(SSLValidationError):
"""Needed to connect to an HTTPS url but no ssl library available to verify the certificate"""
pass
# Some environments (Google Compute Engine's CoreOS deploys) do not compile
# against openssl and thus do not have any HTTPS support.
CustomHTTPSConnection = None
CustomHTTPSHandler = None
HTTPSClientAuthHandler = None
UnixHTTPSConnection = None
if hasattr(httplib, 'HTTPSConnection') and hasattr(urllib_request, 'HTTPSHandler'):
class CustomHTTPSConnection(httplib.HTTPSConnection):
def __init__(self, *args, **kwargs):
httplib.HTTPSConnection.__init__(self, *args, **kwargs)
self.context = None
if HAS_SSLCONTEXT:
self.context = self._context
elif HAS_URLLIB3_PYOPENSSLCONTEXT:
self.context = self._context = PyOpenSSLContext(PROTOCOL)
if self.context and self.cert_file:
self.context.load_cert_chain(self.cert_file, self.key_file)
def connect(self):
"Connect to a host on a given (SSL) port."
if hasattr(self, 'source_address'):
sock = socket.create_connection((self.host, self.port), self.timeout, self.source_address)
else:
sock = socket.create_connection((self.host, self.port), self.timeout)
server_hostname = self.host
# Note: self._tunnel_host is not available on py < 2.6 but this code
# isn't used on py < 2.6 (lack of create_connection)
if self._tunnel_host:
self.sock = sock
self._tunnel()
server_hostname = self._tunnel_host
if HAS_SSLCONTEXT or HAS_URLLIB3_PYOPENSSLCONTEXT:
self.sock = self.context.wrap_socket(sock, server_hostname=server_hostname)
elif HAS_URLLIB3_SSL_WRAP_SOCKET:
self.sock = ssl_wrap_socket(sock, keyfile=self.key_file, cert_reqs=ssl.CERT_NONE, certfile=self.cert_file, ssl_version=PROTOCOL,
server_hostname=server_hostname)
else:
self.sock = ssl.wrap_socket(sock, keyfile=self.key_file, certfile=self.cert_file, ssl_version=PROTOCOL)
class CustomHTTPSHandler(urllib_request.HTTPSHandler):
def https_open(self, req):
kwargs = {}
if HAS_SSLCONTEXT:
kwargs['context'] = self._context
return self.do_open(
functools.partial(
CustomHTTPSConnection,
**kwargs
),
req
)
https_request = AbstractHTTPHandler.do_request_
class HTTPSClientAuthHandler(urllib_request.HTTPSHandler):
'''Handles client authentication via cert/key
This is a fairly lightweight extension on HTTPSHandler, and can be used
in place of HTTPSHandler
'''
def __init__(self, client_cert=None, client_key=None, unix_socket=None, **kwargs):
urllib_request.HTTPSHandler.__init__(self, **kwargs)
self.client_cert = client_cert
self.client_key = client_key
self._unix_socket = unix_socket
def https_open(self, req):
return self.do_open(self._build_https_connection, req)
def _build_https_connection(self, host, **kwargs):
kwargs.update({
'cert_file': self.client_cert,
'key_file': self.client_key,
})
try:
kwargs['context'] = self._context
except AttributeError:
pass
if self._unix_socket:
return UnixHTTPSConnection(self._unix_socket)(host, **kwargs)
return httplib.HTTPSConnection(host, **kwargs)
@contextmanager
def unix_socket_patch_httpconnection_connect():
'''Monkey patch ``httplib.HTTPConnection.connect`` to be ``UnixHTTPConnection.connect``
so that when calling ``super(UnixHTTPSConnection, self).connect()`` we get the
correct behavior of creating self.sock for the unix socket
'''
_connect = httplib.HTTPConnection.connect
httplib.HTTPConnection.connect = UnixHTTPConnection.connect
yield
httplib.HTTPConnection.connect = _connect
class UnixHTTPSConnection(httplib.HTTPSConnection):
def __init__(self, unix_socket):
self._unix_socket = unix_socket
def connect(self):
# This method exists simply to ensure we monkeypatch
# httplib.HTTPConnection.connect to call UnixHTTPConnection.connect
with unix_socket_patch_httpconnection_connect():
# Disable pylint check for the super() call. It complains about UnixHTTPSConnection
# being a NoneType because of the initial definition above, but it won't actually
# be a NoneType when this code runs
# pylint: disable=bad-super-call
super(UnixHTTPSConnection, self).connect()
def __call__(self, *args, **kwargs):
httplib.HTTPSConnection.__init__(self, *args, **kwargs)
return self
class UnixHTTPConnection(httplib.HTTPConnection):
'''Handles http requests to a unix socket file'''
def __init__(self, unix_socket):
self._unix_socket = unix_socket
def connect(self):
self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
self.sock.connect(self._unix_socket)
except OSError as e:
raise OSError('Invalid Socket File (%s): %s' % (self._unix_socket, e))
if self.timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
self.sock.settimeout(self.timeout)
def __call__(self, *args, **kwargs):
httplib.HTTPConnection.__init__(self, *args, **kwargs)
return self
class UnixHTTPHandler(urllib_request.HTTPHandler):
'''Handler for Unix urls'''
def __init__(self, unix_socket, **kwargs):
urllib_request.HTTPHandler.__init__(self, **kwargs)
self._unix_socket = unix_socket
def http_open(self, req):
return self.do_open(UnixHTTPConnection(self._unix_socket), req)
class ParseResultDottedDict(dict):
'''
A dict that acts similarly to the ParseResult named tuple from urllib
'''
def __init__(self, *args, **kwargs):
super(ParseResultDottedDict, self).__init__(*args, **kwargs)
self.__dict__ = self
def as_list(self):
'''
Generate a list from this dict, that looks like the ParseResult named tuple
'''
return [self.get(k, None) for k in ('scheme', 'netloc', 'path', 'params', 'query', 'fragment')]
def generic_urlparse(parts):
'''
Returns a dictionary of url parts as parsed by urlparse,
but accounts for the fact that older versions of that
library do not support named attributes (ie. .netloc)
'''
generic_parts = ParseResultDottedDict()
if hasattr(parts, 'netloc'):
# urlparse is newer, just read the fields straight
# from the parts object
generic_parts['scheme'] = parts.scheme
generic_parts['netloc'] = parts.netloc
generic_parts['path'] = parts.path
generic_parts['params'] = parts.params
generic_parts['query'] = parts.query
generic_parts['fragment'] = parts.fragment
generic_parts['username'] = parts.username
generic_parts['password'] = parts.password
hostname = parts.hostname
if hostname and hostname[0] == '[' and '[' in parts.netloc and ']' in parts.netloc:
# Py2.6 doesn't parse IPv6 addresses correctly
hostname = parts.netloc.split(']')[0][1:].lower()
generic_parts['hostname'] = hostname
try:
port = parts.port
except ValueError:
# Py2.6 doesn't parse IPv6 addresses correctly
netloc = parts.netloc.split('@')[-1].split(']')[-1]
if ':' in netloc:
port = netloc.split(':')[1]
if port:
port = int(port)
else:
port = None
generic_parts['port'] = port
else:
# we have to use indexes, and then parse out
# the other parts not supported by indexing
generic_parts['scheme'] = parts[0]
generic_parts['netloc'] = parts[1]
generic_parts['path'] = parts[2]
generic_parts['params'] = parts[3]
generic_parts['query'] = parts[4]
generic_parts['fragment'] = parts[5]
# get the username, password, etc.
try:
netloc_re = re.compile(r'^((?:\w)+(?::(?:\w)+)?@)?([A-Za-z0-9.-]+)(:\d+)?$')
match = netloc_re.match(parts[1])
auth = match.group(1)
hostname = match.group(2)
port = match.group(3)
if port:
# the capture group for the port will include the ':',
# so remove it and convert the port to an integer
port = int(port[1:])
if auth:
# the capture group above includes the @, so remove it
# and then split it up based on the first ':' found
auth = auth[:-1]
username, password = auth.split(':', 1)
else:
username = password = None
generic_parts['username'] = username
generic_parts['password'] = password
generic_parts['hostname'] = hostname
generic_parts['port'] = port
except Exception:
generic_parts['username'] = None
generic_parts['password'] = None
generic_parts['hostname'] = parts[1]
generic_parts['port'] = None
return generic_parts
class RequestWithMethod(urllib_request.Request):
'''
Workaround for using DELETE/PUT/etc with urllib2
Originally contained in library/net_infrastructure/dnsmadeeasy
'''
def __init__(self, url, method, data=None, headers=None, origin_req_host=None, unverifiable=True):
if headers is None:
headers = {}
self._method = method.upper()
urllib_request.Request.__init__(self, url, data, headers, origin_req_host, unverifiable)
def get_method(self):
if self._method:
return self._method
else:
return urllib_request.Request.get_method(self)
def RedirectHandlerFactory(follow_redirects=None, validate_certs=True, ca_path=None):
"""This is a class factory that closes over the value of
``follow_redirects`` so that the RedirectHandler class has access to
that value without having to use globals, and potentially cause problems
where ``open_url`` or ``fetch_url`` are used multiple times in a module.
"""
class RedirectHandler(urllib_request.HTTPRedirectHandler):
"""This is an implementation of a RedirectHandler to match the
functionality provided by httplib2. It will utilize the value of
``follow_redirects`` that is passed into ``RedirectHandlerFactory``
to determine how redirects should be handled in urllib2.
"""
def redirect_request(self, req, fp, code, msg, hdrs, newurl):
if not HAS_SSLCONTEXT:
handler = maybe_add_ssl_handler(newurl, validate_certs, ca_path=ca_path)
if handler:
urllib_request._opener.add_handler(handler)
# Preserve urllib2 compatibility
if follow_redirects == 'urllib2':
return urllib_request.HTTPRedirectHandler.redirect_request(self, req, fp, code, msg, hdrs, newurl)
# Handle disabled redirects
elif follow_redirects in ['no', 'none', False]:
raise urllib_error.HTTPError(newurl, code, msg, hdrs, fp)
method = req.get_method()
# Handle non-redirect HTTP status or invalid follow_redirects
if follow_redirects in ['all', 'yes', True]:
if code < 300 or code >= 400:
raise urllib_error.HTTPError(req.get_full_url(), code, msg, hdrs, fp)
elif follow_redirects == 'safe':
if code < 300 or code >= 400 or method not in ('GET', 'HEAD'):
raise urllib_error.HTTPError(req.get_full_url(), code, msg, hdrs, fp)
else:
raise urllib_error.HTTPError(req.get_full_url(), code, msg, hdrs, fp)
try:
# Python 2-3.3
data = req.get_data()
origin_req_host = req.get_origin_req_host()
except AttributeError:
# Python 3.4+
data = req.data
origin_req_host = req.origin_req_host
# Be lenient with URIs containing a space
newurl = newurl.replace(' ', '%20')
# Support redirects with payload and original headers
if code in (307, 308):
# Preserve payload and headers
headers = req.headers
else:
# Do not preserve payload and filter headers
data = None
headers = dict((k, v) for k, v in req.headers.items()
if k.lower() not in ("content-length", "content-type", "transfer-encoding"))
# http://tools.ietf.org/html/rfc7231#section-6.4.4
if code == 303 and method != 'HEAD':
method = 'GET'
# Do what the browsers do, despite standards...
# First, turn 302s into GETs.
if code == 302 and method != 'HEAD':
method = 'GET'
# Second, if a POST is responded to with a 301, turn it into a GET.
if code == 301 and method == 'POST':
method = 'GET'
return RequestWithMethod(newurl,
method=method,
headers=headers,
data=data,
origin_req_host=origin_req_host,
unverifiable=True,
)
return RedirectHandler
def build_ssl_validation_error(hostname, port, paths, exc=None):
'''Intelligently build out the SSLValidationError based on what support
you have installed
'''
msg = [
('Failed to validate the SSL certificate for %s:%s.'
' Make sure your managed systems have a valid CA'
' certificate installed.')
]
if not HAS_SSLCONTEXT:
msg.append('If the website serving the url uses SNI you need'
' python >= 2.7.9 on your managed machine')
msg.append(' (the python executable used (%s) is version: %s)' %
(sys.executable, ''.join(sys.version.splitlines())))
if not HAS_URLLIB3_PYOPENSSLCONTEXT and not HAS_URLLIB3_SSL_WRAP_SOCKET:
msg.append('or you can install the `urllib3`, `pyOpenSSL`,'
' `ndg-httpsclient`, and `pyasn1` python modules')
msg.append('to perform SNI verification in python >= 2.6.')
msg.append('You can use validate_certs=False if you do'
" not need to confirm the server's identity but this is"
' unsafe and not recommended.'
' Paths checked for this platform: %s.')
if exc:
msg.append('The exception msg was: %s.' % to_native(exc))
raise SSLValidationError(' '.join(msg) % (hostname, port, ", ".join(paths)))
def atexit_remove_file(filename):
if os.path.exists(filename):
try:
os.unlink(filename)
except Exception:
# just ignore if we cannot delete, things should be ok
pass
class SSLValidationHandler(urllib_request.BaseHandler):
'''
A custom handler class for SSL validation.
Based on:
http://stackoverflow.com/questions/1087227/validate-ssl-certificates-with-python
http://techknack.net/python-urllib2-handlers/
'''
CONNECT_COMMAND = "CONNECT %s:%s HTTP/1.0\r\n"
def __init__(self, hostname, port, ca_path=None):
self.hostname = hostname
self.port = port
self.ca_path = ca_path
def get_ca_certs(self):
# tries to find a valid CA cert in one of the
# standard locations for the current distribution
ca_certs = []
cadata = bytearray()
paths_checked = []
if self.ca_path:
paths_checked = [self.ca_path]
with open(to_bytes(self.ca_path, errors='surrogate_or_strict'), 'rb') as f:
if HAS_SSLCONTEXT:
cadata.extend(
ssl.PEM_cert_to_DER_cert(
to_native(f.read(), errors='surrogate_or_strict')
)
)
else:
ca_certs.append(f.read())
return ca_certs, cadata, paths_checked
if not HAS_SSLCONTEXT:
paths_checked.append('/etc/ssl/certs')
system = to_text(platform.system(), errors='surrogate_or_strict')
# build a list of paths to check for .crt/.pem files
# based on the platform type
if system == u'Linux':
paths_checked.append('/etc/pki/ca-trust/extracted/pem')
paths_checked.append('/etc/pki/tls/certs')
paths_checked.append('/usr/share/ca-certificates/cacert.org')
elif system == u'FreeBSD':
paths_checked.append('/usr/local/share/certs')
elif system == u'OpenBSD':
paths_checked.append('/etc/ssl')
elif system == u'NetBSD':
paths_checked.append('/etc/openssl/certs')
elif system == u'SunOS':
paths_checked.append('/opt/local/etc/openssl/certs')
# fall back to a user-deployed cert in a standard
# location if the OS platform one is not available
paths_checked.append('/etc/ansible')
tmp_path = None
if not HAS_SSLCONTEXT:
tmp_fd, tmp_path = tempfile.mkstemp()
atexit.register(atexit_remove_file, tmp_path)
# Write the dummy ca cert if we are running on macOS
if system == u'Darwin':
if HAS_SSLCONTEXT:
cadata.extend(
ssl.PEM_cert_to_DER_cert(
to_native(b_DUMMY_CA_CERT, errors='surrogate_or_strict')
)
)
else:
os.write(tmp_fd, b_DUMMY_CA_CERT)
# Default Homebrew path for OpenSSL certs
paths_checked.append('/usr/local/etc/openssl')
# for all of the paths, find any .crt or .pem files
# and compile them into single temp file for use
# in the ssl check to speed up the test
for path in paths_checked:
if os.path.exists(path) and os.path.isdir(path):
dir_contents = os.listdir(path)
for f in dir_contents:
full_path = os.path.join(path, f)
if os.path.isfile(full_path) and os.path.splitext(f)[1] in ('.crt', '.pem'):
try:
if full_path not in LOADED_VERIFY_LOCATIONS:
with open(full_path, 'rb') as cert_file:
b_cert = cert_file.read()
if HAS_SSLCONTEXT:
try:
cadata.extend(
ssl.PEM_cert_to_DER_cert(
to_native(b_cert, errors='surrogate_or_strict')
)
)
except ValueError:
continue
else:
os.write(tmp_fd, b_cert)
os.write(tmp_fd, b'\n')
except (OSError, IOError):
pass
if HAS_SSLCONTEXT:
default_verify_paths = ssl.get_default_verify_paths()
paths_checked[:0] = [default_verify_paths.capath]
return (tmp_path, cadata, paths_checked)
def validate_proxy_response(self, response, valid_codes=None):
'''
make sure we get back a valid code from the proxy
'''
valid_codes = [200] if valid_codes is None else valid_codes
try:
(http_version, resp_code, msg) = re.match(br'(HTTP/\d\.\d) (\d\d\d) (.*)', response).groups()
if int(resp_code) not in valid_codes:
raise Exception
except Exception:
raise ProxyError('Connection to proxy failed')
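The status-line check above can be exercised on its own. A minimal standalone sketch (it raises ``ValueError`` instead of the module's ``ProxyError``, which is defined elsewhere):

```python
import re

# Standalone sketch of the CONNECT status-line check performed by
# validate_proxy_response; raises ValueError instead of ProxyError.
def parse_proxy_status(response, valid_codes=(200,)):
    match = re.match(br'(HTTP/\d\.\d) (\d\d\d) (.*)', response)
    if match is None or int(match.group(2)) not in valid_codes:
        raise ValueError('Connection to proxy failed')
    return match.group(2)

print(parse_proxy_status(b'HTTP/1.1 200 Connection established\r\n\r\n'))  # b'200'
```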
def detect_no_proxy(self, url):
'''
Detect if the 'no_proxy' environment variable is set and honor those locations.
'''
env_no_proxy = os.environ.get('no_proxy')
if env_no_proxy:
env_no_proxy = env_no_proxy.split(',')
netloc = urlparse(url).netloc
for host in env_no_proxy:
if netloc.endswith(host) or netloc.split(':')[0].endswith(host):
# Our requested URL matches something in no_proxy, so don't
# use the proxy for this
return False
return True
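The suffix matching used here can be sketched in isolation. This replica uses ``urllib.parse`` directly instead of the module's compat imports, and takes the ``no_proxy`` value as a parameter rather than reading the environment:

```python
from urllib.parse import urlparse

# Replica of detect_no_proxy's suffix matching; the real method reads the
# no_proxy environment variable, here it is passed in for clarity.
def should_use_proxy(url, no_proxy):
    if no_proxy:
        netloc = urlparse(url).netloc
        for host in no_proxy.split(','):
            if netloc.endswith(host) or netloc.split(':')[0].endswith(host):
                return False  # URL matches no_proxy; bypass the proxy
    return True

print(should_use_proxy('https://internal.example.com:8443/x', 'example.com,localhost'))  # False
print(should_use_proxy('https://pypi.org/simple', 'example.com,localhost'))              # True
```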
def make_context(self, cafile, cadata):
cafile = self.ca_path or cafile
if self.ca_path:
cadata = None
else:
cadata = cadata or None
if HAS_SSLCONTEXT:
context = create_default_context(cafile=cafile)
elif HAS_URLLIB3_PYOPENSSLCONTEXT:
context = PyOpenSSLContext(PROTOCOL)
else:
raise NotImplementedError('Host libraries are too old to support creating an sslcontext')
if cafile or cadata:
context.load_verify_locations(cafile=cafile, cadata=cadata)
return context
def http_request(self, req):
tmp_ca_cert_path, cadata, paths_checked = self.get_ca_certs()
# Detect if 'no_proxy' environment variable is set and if our URL is included
use_proxy = self.detect_no_proxy(req.get_full_url())
https_proxy = os.environ.get('https_proxy')
context = None
try:
context = self.make_context(tmp_ca_cert_path, cadata)
except NotImplementedError:
# We'll make do with no context below
pass
try:
if use_proxy and https_proxy:
proxy_parts = generic_urlparse(urlparse(https_proxy))
port = proxy_parts.get('port') or 443
proxy_hostname = proxy_parts.get('hostname', None)
if proxy_hostname is None or proxy_parts.get('scheme') == '':
raise ProxyError("Failed to parse https_proxy environment variable."
" Please make sure you export https proxy as 'https_proxy=<SCHEME>://<IP_ADDRESS>:<PORT>'")
s = socket.create_connection((proxy_hostname, port))
if proxy_parts.get('scheme') == 'http':
s.sendall(to_bytes(self.CONNECT_COMMAND % (self.hostname, self.port), errors='surrogate_or_strict'))
if proxy_parts.get('username'):
credentials = "%s:%s" % (proxy_parts.get('username', ''), proxy_parts.get('password', ''))
s.sendall(b'Proxy-Authorization: Basic %s\r\n' % base64.b64encode(to_bytes(credentials, errors='surrogate_or_strict')).strip())
s.sendall(b'\r\n')
connect_result = b""
while connect_result.find(b"\r\n\r\n") <= 0:
connect_result += s.recv(4096)
# 128 kilobytes of headers should be enough for everyone.
if len(connect_result) > 131072:
raise ProxyError('Proxy sent too verbose headers. Only 128KiB allowed.')
self.validate_proxy_response(connect_result)
if context:
ssl_s = context.wrap_socket(s, server_hostname=self.hostname)
elif HAS_URLLIB3_SSL_WRAP_SOCKET:
ssl_s = ssl_wrap_socket(s, ca_certs=tmp_ca_cert_path, cert_reqs=ssl.CERT_REQUIRED, ssl_version=PROTOCOL, server_hostname=self.hostname)
else:
ssl_s = ssl.wrap_socket(s, ca_certs=tmp_ca_cert_path, cert_reqs=ssl.CERT_REQUIRED, ssl_version=PROTOCOL)
match_hostname(ssl_s.getpeercert(), self.hostname)
else:
raise ProxyError('Unsupported proxy scheme: %s. Currently ansible only supports HTTP proxies.' % proxy_parts.get('scheme'))
else:
s = socket.create_connection((self.hostname, self.port))
if context:
ssl_s = context.wrap_socket(s, server_hostname=self.hostname)
elif HAS_URLLIB3_SSL_WRAP_SOCKET:
ssl_s = ssl_wrap_socket(s, ca_certs=tmp_ca_cert_path, cert_reqs=ssl.CERT_REQUIRED, ssl_version=PROTOCOL, server_hostname=self.hostname)
else:
ssl_s = ssl.wrap_socket(s, ca_certs=tmp_ca_cert_path, cert_reqs=ssl.CERT_REQUIRED, ssl_version=PROTOCOL)
match_hostname(ssl_s.getpeercert(), self.hostname)
# close the ssl connection
# ssl_s.unwrap()
s.close()
except (ssl.SSLError, CertificateError) as e:
build_ssl_validation_error(self.hostname, self.port, paths_checked, e)
except socket.error as e:
raise ConnectionError('Failed to connect to %s at port %s: %s' % (self.hostname, self.port, to_native(e)))
return req
https_request = http_request
def maybe_add_ssl_handler(url, validate_certs, ca_path=None):
parsed = generic_urlparse(urlparse(url))
if parsed.scheme == 'https' and validate_certs:
if not HAS_SSL:
raise NoSSLError('SSL validation is not available in your version of python. You can use validate_certs=False,'
' however this is unsafe and not recommended')
# create the SSL validation handler and
# add it to the list of handlers
return SSLValidationHandler(parsed.hostname, parsed.port or 443, ca_path=ca_path)
def rfc2822_date_string(timetuple, zone='-0000'):
"""Accepts a timetuple and optional zone which defaults to ``-0000``
and returns a date string as specified by RFC 2822, e.g.:
Fri, 09 Nov 2001 01:08:47 -0000
Copied from email.utils.formatdate and modified for separate use
"""
return '%s, %02d %s %04d %02d:%02d:%02d %s' % (
['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun'][timetuple[6]],
timetuple[2],
['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'][timetuple[1] - 1],
timetuple[0], timetuple[3], timetuple[4], timetuple[5],
zone)
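For a fixed timetuple the formatting above is deterministic; a self-contained replica of the helper, fed the example date from its docstring:

```python
import time

# Hypothetical standalone replica of rfc2822_date_string for illustration;
# the real helper lives in ansible.module_utils.urls.
def rfc2822_date_string(timetuple, zone='-0000'):
    return '%s, %02d %s %04d %02d:%02d:%02d %s' % (
        ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun'][timetuple[6]],
        timetuple[2],
        ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
         'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'][timetuple[1] - 1],
        timetuple[0], timetuple[3], timetuple[4], timetuple[5],
        zone)

# 2001-11-09 01:08:47, weekday index 4 (Friday).
tt = time.struct_time((2001, 11, 9, 1, 8, 47, 4, 313, 0))
print(rfc2822_date_string(tt))  # Fri, 09 Nov 2001 01:08:47 -0000
```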
class Request:
def __init__(self, headers=None, use_proxy=True, force=False, timeout=10, validate_certs=True,
url_username=None, url_password=None, http_agent=None, force_basic_auth=False,
follow_redirects='urllib2', client_cert=None, client_key=None, cookies=None, unix_socket=None,
ca_path=None):
"""This class works somewhat similarly to the ``Session`` class of from requests
by defining a cookiejar that an be used across requests as well as cascaded defaults that
can apply to repeated requests
For documentation of params, see ``Request.open``
>>> from ansible.module_utils.urls import Request
>>> r = Request()
>>> r.open('GET', 'http://httpbin.org/cookies/set?k1=v1').read()
'{\n "cookies": {\n "k1": "v1"\n }\n}\n'
>>> r = Request(url_username='user', url_password='passwd')
>>> r.open('GET', 'http://httpbin.org/basic-auth/user/passwd').read()
'{\n "authenticated": true, \n "user": "user"\n}\n'
>>> r = Request(headers=dict(foo='bar'))
>>> r.open('GET', 'http://httpbin.org/get', headers=dict(baz='qux')).read()
"""
self.headers = headers or {}
if not isinstance(self.headers, dict):
raise ValueError("headers must be a dict: %r" % self.headers)
self.use_proxy = use_proxy
self.force = force
self.timeout = timeout
self.validate_certs = validate_certs
self.url_username = url_username
self.url_password = url_password
self.http_agent = http_agent
self.force_basic_auth = force_basic_auth
self.follow_redirects = follow_redirects
self.client_cert = client_cert
self.client_key = client_key
self.unix_socket = unix_socket
self.ca_path = ca_path
if isinstance(cookies, cookiejar.CookieJar):
self.cookies = cookies
else:
self.cookies = cookiejar.CookieJar()
def _fallback(self, value, fallback):
if value is None:
return fallback
return value
def open(self, method, url, data=None, headers=None, use_proxy=None,
force=None, last_mod_time=None, timeout=None, validate_certs=None,
url_username=None, url_password=None, http_agent=None,
force_basic_auth=None, follow_redirects=None,
client_cert=None, client_key=None, cookies=None, use_gssapi=False,
unix_socket=None, ca_path=None):
"""
Sends a request via HTTP(S) or FTP using urllib2 (Python2) or urllib (Python3)
Does not require the module environment
Returns :class:`HTTPResponse` object.
:arg method: method for the request
:arg url: URL to request
:kwarg data: (optional) bytes, or file-like object to send
in the body of the request
:kwarg headers: (optional) Dictionary of HTTP Headers to send with the
request
:kwarg use_proxy: (optional) Boolean of whether or not to use proxy
:kwarg force: (optional) Boolean of whether or not to set `cache-control: no-cache` header
:kwarg last_mod_time: (optional) Datetime object to use when setting If-Modified-Since header
:kwarg timeout: (optional) How long to wait for the server to send
data before giving up, as a float
:kwarg validate_certs: (optional) Boolean that controls whether we verify
the server's TLS certificate
:kwarg url_username: (optional) String of the user to use when authenticating
:kwarg url_password: (optional) String of the password to use when authenticating
:kwarg http_agent: (optional) String of the User-Agent to use in the request
:kwarg force_basic_auth: (optional) Boolean determining if auth header should be sent in the initial request
:kwarg follow_redirects: (optional) String of urllib2, all/yes, safe, none to determine how redirects are
followed, see RedirectHandlerFactory for more information
:kwarg client_cert: (optional) PEM formatted certificate chain file to be used for SSL client authentication.
This file can also include the key as well, and if the key is included, client_key is not required
:kwarg client_key: (optional) PEM formatted file that contains your private key to be used for SSL client
authentication. If client_cert contains both the certificate and key, this option is not required
:kwarg cookies: (optional) CookieJar object to send with the
request
:kwarg use_gssapi: (optional) Boolean, use a GSSAPI authentication handler for the request.
:kwarg unix_socket: (optional) String of file system path to unix socket file to use when establishing
connection to the provided url
:kwarg ca_path: (optional) String of file system path to CA cert bundle to use
:returns: HTTPResponse
"""
method = method.upper()
if headers is None:
headers = {}
elif not isinstance(headers, dict):
raise ValueError("headers must be a dict")
headers = dict(self.headers, **headers)
use_proxy = self._fallback(use_proxy, self.use_proxy)
force = self._fallback(force, self.force)
timeout = self._fallback(timeout, self.timeout)
validate_certs = self._fallback(validate_certs, self.validate_certs)
url_username = self._fallback(url_username, self.url_username)
url_password = self._fallback(url_password, self.url_password)
http_agent = self._fallback(http_agent, self.http_agent)
force_basic_auth = self._fallback(force_basic_auth, self.force_basic_auth)
follow_redirects = self._fallback(follow_redirects, self.follow_redirects)
client_cert = self._fallback(client_cert, self.client_cert)
client_key = self._fallback(client_key, self.client_key)
cookies = self._fallback(cookies, self.cookies)
unix_socket = self._fallback(unix_socket, self.unix_socket)
ca_path = self._fallback(ca_path, self.ca_path)
handlers = []
if unix_socket:
handlers.append(UnixHTTPHandler(unix_socket))
ssl_handler = maybe_add_ssl_handler(url, validate_certs, ca_path=ca_path)
if ssl_handler and not HAS_SSLCONTEXT:
handlers.append(ssl_handler)
if HAS_GSSAPI and use_gssapi:
handlers.append(urllib_gssapi.HTTPSPNEGOAuthHandler())
parsed = generic_urlparse(urlparse(url))
if parsed.scheme != 'ftp':
username = url_username
if username:
password = url_password
netloc = parsed.netloc
elif '@' in parsed.netloc:
credentials, netloc = parsed.netloc.split('@', 1)
if ':' in credentials:
username, password = credentials.split(':', 1)
else:
username = credentials
password = ''
parsed_list = parsed.as_list()
parsed_list[1] = netloc
# reconstruct url without credentials
url = urlunparse(parsed_list)
if username and not force_basic_auth:
passman = urllib_request.HTTPPasswordMgrWithDefaultRealm()
# this creates a password manager
passman.add_password(None, netloc, username, password)
# because we have put None at the start it will always
# use this username/password combination for urls
# for which `theurl` is a super-url
authhandler = urllib_request.HTTPBasicAuthHandler(passman)
digest_authhandler = urllib_request.HTTPDigestAuthHandler(passman)
# create the AuthHandler
handlers.append(authhandler)
handlers.append(digest_authhandler)
elif username and force_basic_auth:
headers["Authorization"] = basic_auth_header(username, password)
else:
try:
rc = netrc.netrc(os.environ.get('NETRC'))
login = rc.authenticators(parsed.hostname)
except IOError:
login = None
if login:
username, _, password = login
if username and password:
headers["Authorization"] = basic_auth_header(username, password)
if not use_proxy:
proxyhandler = urllib_request.ProxyHandler({})
handlers.append(proxyhandler)
context = None
if HAS_SSLCONTEXT and not validate_certs:
# In 2.7.9, the default context validates certificates
context = SSLContext(ssl.PROTOCOL_SSLv23)
if ssl.OP_NO_SSLv2:
context.options |= ssl.OP_NO_SSLv2
context.options |= ssl.OP_NO_SSLv3
context.verify_mode = ssl.CERT_NONE
context.check_hostname = False
handlers.append(HTTPSClientAuthHandler(client_cert=client_cert,
client_key=client_key,
context=context,
unix_socket=unix_socket))
elif client_cert or unix_socket:
handlers.append(HTTPSClientAuthHandler(client_cert=client_cert,
client_key=client_key,
unix_socket=unix_socket))
if ssl_handler and HAS_SSLCONTEXT and validate_certs:
tmp_ca_path, cadata, paths_checked = ssl_handler.get_ca_certs()
try:
context = ssl_handler.make_context(tmp_ca_path, cadata)
except NotImplementedError:
pass
# pre-2.6 versions of python cannot use the custom https
# handler, since the socket class is lacking create_connection.
# Some python builds lack HTTPS support.
if hasattr(socket, 'create_connection') and CustomHTTPSHandler:
kwargs = {}
if HAS_SSLCONTEXT:
kwargs['context'] = context
handlers.append(CustomHTTPSHandler(**kwargs))
handlers.append(RedirectHandlerFactory(follow_redirects, validate_certs, ca_path=ca_path))
# add some nicer cookie handling
if cookies is not None:
handlers.append(urllib_request.HTTPCookieProcessor(cookies))
opener = urllib_request.build_opener(*handlers)
urllib_request.install_opener(opener)
data = to_bytes(data, nonstring='passthru')
request = RequestWithMethod(url, method, data)
# add the custom agent header, to help prevent issues
# with sites that block the default urllib agent string
if http_agent:
request.add_header('User-agent', http_agent)
# Cache control
# Either we directly force a cache refresh
if force:
request.add_header('cache-control', 'no-cache')
# or we do it if the original is more recent than our copy
elif last_mod_time:
tstamp = rfc2822_date_string(last_mod_time.timetuple())
request.add_header('If-Modified-Since', tstamp)
# user defined headers now, which may override things we've set above
for header in headers:
request.add_header(header, headers[header])
urlopen_args = [request, None]
if sys.version_info >= (2, 6, 0):
# urlopen in python prior to 2.6.0 did not
# have a timeout parameter
urlopen_args.append(timeout)
r = urllib_request.urlopen(*urlopen_args)
return r
def get(self, url, **kwargs):
r"""Sends a GET request. Returns :class:`HTTPResponse` object.
:arg url: URL to request
:kwarg \*\*kwargs: Optional arguments that ``open`` takes.
:returns: HTTPResponse
"""
return self.open('GET', url, **kwargs)
def options(self, url, **kwargs):
r"""Sends a OPTIONS request. Returns :class:`HTTPResponse` object.
:arg url: URL to request
:kwarg \*\*kwargs: Optional arguments that ``open`` takes.
:returns: HTTPResponse
"""
return self.open('OPTIONS', url, **kwargs)
def head(self, url, **kwargs):
r"""Sends a HEAD request. Returns :class:`HTTPResponse` object.
:arg url: URL to request
:kwarg \*\*kwargs: Optional arguments that ``open`` takes.
:returns: HTTPResponse
"""
return self.open('HEAD', url, **kwargs)
def post(self, url, data=None, **kwargs):
r"""Sends a POST request. Returns :class:`HTTPResponse` object.
:arg url: URL to request.
:kwarg data: (optional) bytes, or file-like object to send in the body of the request.
:kwarg \*\*kwargs: Optional arguments that ``open`` takes.
:returns: HTTPResponse
"""
return self.open('POST', url, data=data, **kwargs)
def put(self, url, data=None, **kwargs):
r"""Sends a PUT request. Returns :class:`HTTPResponse` object.
:arg url: URL to request.
:kwarg data: (optional) bytes, or file-like object to send in the body of the request.
:kwarg \*\*kwargs: Optional arguments that ``open`` takes.
:returns: HTTPResponse
"""
return self.open('PUT', url, data=data, **kwargs)
def patch(self, url, data=None, **kwargs):
r"""Sends a PATCH request. Returns :class:`HTTPResponse` object.
:arg url: URL to request.
:kwarg data: (optional) bytes, or file-like object to send in the body of the request.
:kwarg \*\*kwargs: Optional arguments that ``open`` takes.
:returns: HTTPResponse
"""
return self.open('PATCH', url, data=data, **kwargs)
def delete(self, url, **kwargs):
r"""Sends a DELETE request. Returns :class:`HTTPResponse` object.
:arg url: URL to request
:kwarg \*\*kwargs: Optional arguments that ``open`` takes.
:returns: HTTPResponse
"""
return self.open('DELETE', url, **kwargs)
def open_url(url, data=None, headers=None, method=None, use_proxy=True,
force=False, last_mod_time=None, timeout=10, validate_certs=True,
url_username=None, url_password=None, http_agent=None,
force_basic_auth=False, follow_redirects='urllib2',
client_cert=None, client_key=None, cookies=None,
use_gssapi=False, unix_socket=None, ca_path=None):
'''
Sends a request via HTTP(S) or FTP using urllib2 (Python2) or urllib (Python3)
Does not require the module environment
'''
method = method or ('POST' if data else 'GET')
return Request().open(method, url, data=data, headers=headers, use_proxy=use_proxy,
force=force, last_mod_time=last_mod_time, timeout=timeout, validate_certs=validate_certs,
url_username=url_username, url_password=url_password, http_agent=http_agent,
force_basic_auth=force_basic_auth, follow_redirects=follow_redirects,
client_cert=client_cert, client_key=client_key, cookies=cookies,
use_gssapi=use_gssapi, unix_socket=unix_socket, ca_path=ca_path)
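Beyond forwarding its arguments to a fresh ``Request``, the only logic ``open_url`` adds is choosing the default method; that rule in isolation:

```python
# Same defaulting rule open_url applies: an explicit method wins,
# otherwise POST when a body is present and GET when it is not.
def default_method(method, data):
    return method or ('POST' if data else 'GET')

print(default_method(None, None))         # GET
print(default_method(None, b'payload'))   # POST
print(default_method('PUT', b'payload'))  # PUT
```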
#
# Module-related functions
#
def basic_auth_header(username, password):
"""Takes a username and password and returns a byte string suitable for
using as value of an Authorization header to do basic auth.
"""
return b"Basic %s" % base64.b64encode(to_bytes("%s:%s" % (username, password), errors='surrogate_or_strict'))
def url_argument_spec():
'''
Creates an argument spec that can be used with any module
that will be requesting content via urllib/urllib2
'''
return dict(
url=dict(type='str'),
force=dict(type='bool', default=False, aliases=['thirsty']),
http_agent=dict(type='str', default='ansible-httpget'),
use_proxy=dict(type='bool', default=True),
validate_certs=dict(type='bool', default=True),
url_username=dict(type='str'),
url_password=dict(type='str', no_log=True),
force_basic_auth=dict(type='bool', default=False),
client_cert=dict(type='path'),
client_key=dict(type='path'),
)
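A module typically extends this shared spec with its own options. A sketch using a trimmed copy of the spec above (``dest`` is a made-up module-specific option):

```python
# Trimmed copy of url_argument_spec() for illustration only.
def url_argument_spec():
    return dict(
        url=dict(type='str'),
        validate_certs=dict(type='bool', default=True),
        url_username=dict(type='str'),
    )

# Typical module pattern: start from the shared spec, then add
# module-specific options before constructing AnsibleModule.
argument_spec = url_argument_spec()
argument_spec.update(dest=dict(type='path', required=True))
print(sorted(argument_spec))  # ['dest', 'url', 'url_username', 'validate_certs']
```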
def fetch_url(module, url, data=None, headers=None, method=None,
use_proxy=True, force=False, last_mod_time=None, timeout=10,
use_gssapi=False, unix_socket=None, ca_path=None):
"""Sends a request via HTTP(S) or FTP (needs the module as parameter)
:arg module: The AnsibleModule (used to get username, password, etc.).
:arg url: The url to use.
:kwarg data: The data to be sent (in case of POST/PUT).
:kwarg headers: A dict with the request headers.
:kwarg method: "POST", "PUT", etc.
:kwarg boolean use_proxy: Default: True
:kwarg boolean force: If True: Do not get a cached copy (Default: False)
:kwarg last_mod_time: Default: None
:kwarg int timeout: Default: 10
:kwarg boolean use_gssapi: Default: False
:kwarg unix_socket: (optional) String of file system path to unix socket file to use when establishing
connection to the provided url
:kwarg ca_path: (optional) String of file system path to CA cert bundle to use
:returns: A tuple of (**response**, **info**). Use ``response.read()`` to read the data.
The **info** contains the 'status' and other meta data. When an HttpError (status >= 400)
occurs, ``info['body']`` contains the error response data::
Example::
data={...}
resp, info = fetch_url(module,
"http://example.com",
data=module.jsonify(data),
headers={'Content-type': 'application/json'},
method="POST")
status_code = info["status"]
body = resp.read()
if status_code >= 400 :
body = info['body']
"""
if not HAS_URLPARSE:
module.fail_json(msg='urlparse is not installed')
# ensure we use proper tempdir
old_tempdir = tempfile.tempdir
tempfile.tempdir = module.tmpdir
# Get validate_certs from the module params
validate_certs = module.params.get('validate_certs', True)
username = module.params.get('url_username', '')
password = module.params.get('url_password', '')
http_agent = module.params.get('http_agent', 'ansible-httpget')
force_basic_auth = module.params.get('force_basic_auth', '')
follow_redirects = module.params.get('follow_redirects', 'urllib2')
client_cert = module.params.get('client_cert')
client_key = module.params.get('client_key')
cookies = cookiejar.LWPCookieJar()
r = None
info = dict(url=url, status=-1)
try:
r = open_url(url, data=data, headers=headers, method=method,
use_proxy=use_proxy, force=force, last_mod_time=last_mod_time, timeout=timeout,
validate_certs=validate_certs, url_username=username,
url_password=password, http_agent=http_agent, force_basic_auth=force_basic_auth,
follow_redirects=follow_redirects, client_cert=client_cert,
client_key=client_key, cookies=cookies, use_gssapi=use_gssapi,
unix_socket=unix_socket, ca_path=ca_path)
# Lowercase keys, to conform to py2 behavior, so that py3 and py2 are predictable
info.update(dict((k.lower(), v) for k, v in r.info().items()))
# Don't be lossy, append header values for duplicate headers
# In Py2 there is nothing that needs to be done; py2 handles this for us
if PY3:
temp_headers = {}
for name, value in r.headers.items():
# The same as above, lower case keys to match py2 behavior, and create more consistent results
name = name.lower()
if name in temp_headers:
temp_headers[name] = ', '.join((temp_headers[name], value))
else:
temp_headers[name] = value
info.update(temp_headers)
# parse the cookies into a nice dictionary
cookie_list = []
cookie_dict = dict()
# Python sorts cookies in order of most specific (ie. longest) path first. See ``CookieJar._cookie_attrs``
# Cookies with the same path are reversed from response order.
# This code makes no assumptions about that, and accepts the order given by python
for cookie in cookies:
cookie_dict[cookie.name] = cookie.value
cookie_list.append((cookie.name, cookie.value))
info['cookies_string'] = '; '.join('%s=%s' % c for c in cookie_list)
info['cookies'] = cookie_dict
# finally update the result with a message about the fetch
info.update(dict(msg="OK (%s bytes)" % r.headers.get('Content-Length', 'unknown'), url=r.geturl(), status=r.code))
except NoSSLError as e:
distribution = get_distribution()
if distribution is not None and distribution.lower() == 'redhat':
module.fail_json(msg='%s. You can also install python-ssl from EPEL' % to_native(e), **info)
else:
module.fail_json(msg='%s' % to_native(e), **info)
except (ConnectionError, ValueError) as e:
module.fail_json(msg=to_native(e), **info)
except urllib_error.HTTPError as e:
try:
body = e.read()
except AttributeError:
body = ''
# Try to add exception info to the output but don't fail if we can't
try:
# Lowercase keys, to conform to py2 behavior, so that py3 and py2 are predictable
info.update(dict((k.lower(), v) for k, v in e.info().items()))
except Exception:
pass
info.update({'msg': to_native(e), 'body': body, 'status': e.code})
except urllib_error.URLError as e:
code = int(getattr(e, 'code', -1))
info.update(dict(msg="Request failed: %s" % to_native(e), status=code))
except socket.error as e:
info.update(dict(msg="Connection failure: %s" % to_native(e), status=-1))
except httplib.BadStatusLine as e:
info.update(dict(msg="Connection failure: connection was closed before a valid response was received: %s" % to_native(e.line), status=-1))
except Exception as e:
info.update(dict(msg="An unknown error occurred: %s" % to_native(e), status=-1),
exception=traceback.format_exc())
finally:
tempfile.tempdir = old_tempdir
return r, info
def fetch_file(module, url, data=None, headers=None, method=None,
use_proxy=True, force=False, last_mod_time=None, timeout=10):
'''Download and save a file via HTTP(S) or FTP (needs the module as parameter).
This is basically a wrapper around fetch_url().
:arg module: The AnsibleModule (used to get username, password, etc.).
:arg url: The url to use.
:kwarg data: The data to be sent (in case of POST/PUT).
:kwarg headers: A dict with the request headers.
:kwarg method: "POST", "PUT", etc.
:kwarg boolean use_proxy: Default: True
:kwarg boolean force: If True: Do not get a cached copy (Default: False)
:kwarg last_mod_time: Default: None
:kwarg int timeout: Default: 10
:returns: A string, the path to the downloaded file.
'''
# download file
bufsize = 65536
file_name, file_ext = os.path.splitext(str(url.rsplit('/', 1)[1]))
fetch_temp_file = tempfile.NamedTemporaryFile(dir=module.tmpdir, prefix=file_name, suffix=file_ext, delete=False)
module.add_cleanup_file(fetch_temp_file.name)
try:
rsp, info = fetch_url(module, url, data, headers, method, use_proxy, force, last_mod_time, timeout)
if not rsp:
module.fail_json(msg="Failure downloading %s, %s" % (url, info['msg']))
data = rsp.read(bufsize)
while data:
fetch_temp_file.write(data)
data = rsp.read(bufsize)
fetch_temp_file.close()
except Exception as e:
module.fail_json(msg="Failure downloading %s, %s" % (url, to_native(e)))
return fetch_temp_file.name
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 60,166 |
ansible-test sanity AttributeError: 'NoneType' object has no attribute 'endswith' after test/sanity/ignore-2.9.txt
|
##### SUMMARY
When testing ansible-test sanity with a collection, I see the following traceback. At first I thought it was caused by a missing file, but adding one doesn't fix the issue.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-test
##### ANSIBLE VERSION
923e21836b1a4fb91aa6e93463efe0aea4022144 Move plugin loader playbook dir additions back to Playbook instead of PlaybookCLI (#59557)
##### CONFIGURATION
https://logs.zuul.ansible.com/34/34/89748bcccaba051183f1e8819502c7398b4d2206/check/ansible-test-sanity/c8e84e5/zuul-info/inventory.yaml
##### OS / ENVIRONMENT
fedora 29 from zuul.ansible.com
##### STEPS TO REPRODUCE
Run ansible-test with the following:
source ~/venv/bin/activate; /home/zuul/src/github.com/ansible/ansible/bin/ansible-test sanity -vv --requirements --python 3.6 --lint
from
/home/zuul/src/github.com/ansible-network/sandbox/ansible_collections/ansible_network/sandbox
example logs: https://logs.zuul.ansible.com/34/34/89748bcccaba051183f1e8819502c7398b4d2206/check/ansible-test-sanity/c8e84e5/ara-report/result/7b2fad99-9bc4-41bc-a264-b71819b0d659/
##### EXPECTED RESULTS
ansible-test to pass
##### ACTUAL RESULTS
https://logs.zuul.ansible.com/34/34/89748bcccaba051183f1e8819502c7398b4d2206/check/ansible-test-sanity/c8e84e5/job-output.html#l1031
```
2019-08-06 21:13:50.017154 | fedora-29 | Traceback (most recent call last):
2019-08-06 21:13:50.017311 | fedora-29 | File "/home/zuul/src/github.com/ansible/ansible/bin/ansible-test", line 15, in <module>
2019-08-06 21:13:50.017361 | fedora-29 | lib.cli.main()
2019-08-06 21:13:50.017454 | fedora-29 | File "/home/zuul/src/github.com/ansible/ansible/test/runner/lib/cli.py", line 125, in main
2019-08-06 21:13:50.017540 | fedora-29 | args.func(config)
2019-08-06 21:13:50.017686 | fedora-29 | File "/home/zuul/src/github.com/ansible/ansible/test/runner/lib/sanity/__init__.py", line 156, in command_sanity
2019-08-06 21:13:50.017748 | fedora-29 | settings = test.load_processor(args)
2019-08-06 21:13:50.017853 | fedora-29 | File "/home/zuul/src/github.com/ansible/ansible/test/runner/lib/sanity/__init__.py", line 803, in load_processor
2019-08-06 21:13:50.017911 | fedora-29 | return SanityIgnoreProcessor(args, self, None)
2019-08-06 21:13:50.018011 | fedora-29 | File "/home/zuul/src/github.com/ansible/ansible/test/runner/lib/sanity/__init__.py", line 448, in __init__
2019-08-06 21:13:50.018089 | fedora-29 | self.parser = SanityIgnoreParser.load(args)
2019-08-06 21:13:50.018193 | fedora-29 | File "/home/zuul/src/github.com/ansible/ansible/test/runner/lib/sanity/__init__.py", line 426, in load
2019-08-06 21:13:50.018290 | fedora-29 | SanityIgnoreParser.instance = SanityIgnoreParser(args)
2019-08-06 21:13:50.018450 | fedora-29 | File "/home/zuul/src/github.com/ansible/ansible/test/runner/lib/sanity/__init__.py", line 278, in __init__
2019-08-06 21:13:50.018574 | fedora-29 | paths_by_test[test.name] = set(target.path for target in test.filter_targets(test_targets))
2019-08-06 21:13:50.018711 | fedora-29 | File "/home/zuul/src/github.com/ansible/ansible/test/runner/lib/sanity/import.py", line 62, in filter_targets
2019-08-06 21:13:50.018806 | fedora-29 | return [target for target in targets if os.path.splitext(target.path)[1] == '.py' and
2019-08-06 21:13:50.018908 | fedora-29 | File "/home/zuul/src/github.com/ansible/ansible/test/runner/lib/sanity/import.py", line 63, in <listcomp>
2019-08-06 21:13:50.019077 | fedora-29 | (is_subdir(target.path, data_context().content.module_path) or is_subdir(target.path, data_context().content.module_utils_path))]
2019-08-06 21:13:50.019178 | fedora-29 | File "/home/zuul/src/github.com/ansible/ansible/test/runner/lib/util.py", line 782, in is_subdir
2019-08-06 21:13:50.019225 | fedora-29 | if not path.endswith(os.sep):
2019-08-06 21:13:50.019291 | fedora-29 | AttributeError: 'NoneType' object has no attribute 'endswith'
```
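The crash is consistent with ``ContentLayout.module_path`` (which is just ``plugin_paths.get('modules')``) returning ``None`` for a collection without a modules directory, and that ``None`` reaching ``is_subdir``, which assumes a string. A simplified reproduction; the ``is_subdir`` body here is paraphrased from the traceback, not copied from the source:

```python
import os

# Simplified version of the failing helper: it assumes ``path`` is a
# string, so a None module_path crashes on ``endswith``.
def is_subdir(candidate, path):
    if not path.endswith(os.sep):
        path += os.sep
    return candidate.startswith(path)

plugin_paths = {}  # collection layout with no plugins/modules directory
module_path = plugin_paths.get('modules')  # -> None, like ContentLayout.module_path

try:
    is_subdir('plugins/modules/foo.py', module_path)
except AttributeError as e:
    print(e)  # 'NoneType' object has no attribute 'endswith'
```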
|
https://github.com/ansible/ansible/issues/60166
|
https://github.com/ansible/ansible/pull/60169
|
233efe08862094d9015320570fa419b7dcb0cd49
|
9da5908afba1453040b32747a6707bd373ad0ff2
| 2019-08-06T21:22:54Z |
python
| 2019-08-06T23:07:53Z |
test/lib/ansible_test/_internal/provider/layout/__init__.py
|
"""Code for finding content."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import abc
import collections
import os
from ... import types as t
from ...util import (
ANSIBLE_ROOT,
)
from .. import (
PathProvider,
)
class Layout:
"""Description of content locations and helper methods to access content."""
def __init__(self,
root, # type: str
paths, # type: t.List[str]
): # type: (...) -> None
self.root = root
self.__paths = paths
self.__tree = paths_to_tree(paths)
def all_files(self): # type: () -> t.List[str]
"""Return a list of all file paths."""
return self.__paths
def walk_files(self, directory): # type: (str) -> t.List[str]
"""Return a list of file paths found recursively under the given directory."""
parts = directory.rstrip(os.sep).split(os.sep)
item = get_tree_item(self.__tree, parts)
if not item:
return []
directories = collections.deque(item[0].values())
files = list(item[1])
while directories:
item = directories.pop()
directories.extend(item[0].values())
files.extend(item[1])
return files
def get_dirs(self, directory): # type: (str) -> t.List[str]
"""Return a list directory paths found directly under the given directory."""
parts = directory.rstrip(os.sep).split(os.sep)
item = get_tree_item(self.__tree, parts)
return [os.path.join(directory, key) for key in item[0].keys()] if item else []
def get_files(self, directory): # type: (str) -> t.List[str]
"""Return a list of file paths found directly under the given directory."""
parts = directory.rstrip(os.sep).split(os.sep)
item = get_tree_item(self.__tree, parts)
return item[1] if item else []
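``paths_to_tree`` and ``get_tree_item`` are defined further down in this module and are not shown in this excerpt; the sketch below is a guess at the ``({name: subtree}, [files])`` node shape these ``Layout`` methods rely on, not the actual implementation:

```python
import os

# Hypothetical sketch: each tree node is a tuple of
# ({directory name: subtree}, [full file paths directly in this directory]).
def paths_to_tree(paths):
    tree = ({}, [])
    for path in paths:
        parts = path.split(os.sep)
        node = tree
        for part in parts[:-1]:
            node = node[0].setdefault(part, ({}, []))
        node[1].append(path)  # store the full path, as get_files returns item[1]
    return tree

def get_tree_item(tree, parts):
    node = tree
    for part in parts:
        node = node[0].get(part)
        if node is None:
            return None
    return node

tree = paths_to_tree(['plugins/modules/foo.py', 'plugins/inventory/bar.py'])
item = get_tree_item(tree, ['plugins', 'modules'])
print(item[1])  # ['plugins/modules/foo.py']
```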
class InstallLayout(Layout):
"""Information about the current Ansible install."""
class ContentLayout(Layout):
"""Information about the current Ansible content being tested."""
def __init__(self,
root, # type: str
paths, # type: t.List[str]
plugin_paths, # type: t.Dict[str, str]
provider_paths, # type: t.Dict[str, str]
code_path=None, # type: t.Optional[str]
collection=None, # type: t.Optional[CollectionDetail]
util_path=None, # type: t.Optional[str]
unit_path=None, # type: t.Optional[str]
unit_module_path=None, # type: t.Optional[str]
unit_module_utils_path=None, # type: t.Optional[str]
integration_path=None, # type: t.Optional[str]
): # type: (...) -> None
super(ContentLayout, self).__init__(root, paths)
self.plugin_paths = plugin_paths
self.provider_paths = provider_paths
self.code_path = code_path
self.collection = collection
self.util_path = util_path
self.unit_path = unit_path
self.unit_module_path = unit_module_path
self.unit_module_utils_path = unit_module_utils_path
self.integration_path = integration_path
self.is_ansible = root == ANSIBLE_ROOT
@property
def prefix(self): # type: () -> str
"""Return the collection prefix or an empty string if not a collection."""
if self.collection:
return self.collection.prefix
return ''
@property
def module_path(self): # type: () -> t.Optional[str]
"""Return the path where modules are found, if any."""
return self.plugin_paths.get('modules')
@property
def module_utils_path(self): # type: () -> t.Optional[str]
"""Return the path where module_utils are found, if any."""
return self.plugin_paths.get('module_utils')
@property
def module_utils_powershell_path(self): # type: () -> t.Optional[str]
"""Return the path where powershell module_utils are found, if any."""
if self.is_ansible:
return os.path.join(self.plugin_paths['module_utils'], 'powershell')
return self.plugin_paths.get('module_utils')
@property
def module_utils_csharp_path(self): # type: () -> t.Optional[str]
"""Return the path where csharp module_utils are found, if any."""
if self.is_ansible:
return os.path.join(self.plugin_paths['module_utils'], 'csharp')
return self.plugin_paths.get('module_utils')
class CollectionDetail:
"""Details about the layout of the current collection."""
def __init__(self,
name, # type: str
namespace, # type: str
root, # type: str
prefix, # type: str
): # type: (...) -> None
self.name = name
self.namespace = namespace
self.root = root
self.prefix = prefix
self.directory = os.path.join('ansible_collections', namespace, name)
class LayoutProvider(PathProvider):
"""Base class for layout providers."""
@abc.abstractmethod
def create(self, root, paths): # type: (str, t.List[str]) -> ContentLayout
"""Create a layout using the given root and paths."""
def paths_to_tree(paths): # type: (t.List[str]) -> t.Tuple[t.Dict[str, t.Any], t.List[str]]
"""Return a filesystem tree from the given list of paths."""
tree = {}, []
for path in paths:
parts = path.split(os.sep)
root = tree
for part in parts[:-1]:
if part not in root[0]:
root[0][part] = {}, []
root = root[0][part]
root[1].append(path)
return tree
def get_tree_item(tree, parts): # type: (t.Tuple[t.Dict[str, t.Any], t.List[str]], t.List[str]) -> t.Optional[t.Tuple[t.Dict[str, t.Any], t.List[str]]]
"""Return the portion of the tree found under the path given by parts, or None if it does not exist."""
root = tree
for part in parts:
root = root[0].get(part)
if not root:
return None
return root
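The two tree helpers above can be exercised with a small standalone sketch. The functions are copied here so the snippet runs on its own, and the sample file paths are made up for illustration:

```python
import os

def paths_to_tree(paths):
    """Mirror of the tree builder above: each node is a (subdirs, files) pair."""
    tree = {}, []
    for path in paths:
        parts = path.split(os.sep)
        root = tree
        for part in parts[:-1]:
            if part not in root[0]:
                root[0][part] = {}, []
            root = root[0][part]
        root[1].append(path)
    return tree

def get_tree_item(tree, parts):
    """Mirror of the lookup above: descend one dict level per path part."""
    root = tree
    for part in parts:
        root = root[0].get(part)
        if not root:
            return None
    return root

paths = ['lib/ansible/cli.py', 'lib/ansible/utils/path.py', 'test/units/test_cli.py']
tree = paths_to_tree(paths)
item = get_tree_item(tree, ['lib', 'ansible'])
print(sorted(item[0]))  # subdirectories under lib/ansible: ['utils']
print(item[1])          # files directly under lib/ansible: ['lib/ansible/cli.py']
```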
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 60,166 |
ansible-test sanity AttributeError: 'NoneType' object has no attribute 'endswith' after test/sanity/ignore-2.9.txt
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
When testing ansible-test sanity with a collection, I see the following traceback. At first I thought it was caused by a missing file, but adding the file does not fix the issue.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-test
##### ANSIBLE VERSION
923e21836b1a4fb91aa6e93463efe0aea4022144 Move plugin loader playbook dir additions back to Playbook instead of PlaybookCLI (#59557)
##### CONFIGURATION
https://logs.zuul.ansible.com/34/34/89748bcccaba051183f1e8819502c7398b4d2206/check/ansible-test-sanity/c8e84e5/zuul-info/inventory.yaml
##### OS / ENVIRONMENT
fedora 29 from zuul.ansible.com
##### STEPS TO REPRODUCE
Run ansible-test with the following:
source ~/venv/bin/activate; /home/zuul/src/github.com/ansible/ansible/bin/ansible-test sanity -vv --requirements --python 3.6 --lint
from
/home/zuul/src/github.com/ansible-network/sandbox/ansible_collections/ansible_network/sandbox
example logs: https://logs.zuul.ansible.com/34/34/89748bcccaba051183f1e8819502c7398b4d2206/check/ansible-test-sanity/c8e84e5/ara-report/result/7b2fad99-9bc4-41bc-a264-b71819b0d659/
##### EXPECTED RESULTS
ansible-test to pass
##### ACTUAL RESULTS
https://logs.zuul.ansible.com/34/34/89748bcccaba051183f1e8819502c7398b4d2206/check/ansible-test-sanity/c8e84e5/job-output.html#l1031
```
2019-08-06 21:13:50.017154 | fedora-29 | Traceback (most recent call last):
2019-08-06 21:13:50.017311 | fedora-29 | File "/home/zuul/src/github.com/ansible/ansible/bin/ansible-test", line 15, in <module>
2019-08-06 21:13:50.017361 | fedora-29 | lib.cli.main()
2019-08-06 21:13:50.017454 | fedora-29 | File "/home/zuul/src/github.com/ansible/ansible/test/runner/lib/cli.py", line 125, in main
2019-08-06 21:13:50.017540 | fedora-29 | args.func(config)
2019-08-06 21:13:50.017686 | fedora-29 | File "/home/zuul/src/github.com/ansible/ansible/test/runner/lib/sanity/__init__.py", line 156, in command_sanity
2019-08-06 21:13:50.017748 | fedora-29 | settings = test.load_processor(args)
2019-08-06 21:13:50.017853 | fedora-29 | File "/home/zuul/src/github.com/ansible/ansible/test/runner/lib/sanity/__init__.py", line 803, in load_processor
2019-08-06 21:13:50.017911 | fedora-29 | return SanityIgnoreProcessor(args, self, None)
2019-08-06 21:13:50.018011 | fedora-29 | File "/home/zuul/src/github.com/ansible/ansible/test/runner/lib/sanity/__init__.py", line 448, in __init__
2019-08-06 21:13:50.018089 | fedora-29 | self.parser = SanityIgnoreParser.load(args)
2019-08-06 21:13:50.018193 | fedora-29 | File "/home/zuul/src/github.com/ansible/ansible/test/runner/lib/sanity/__init__.py", line 426, in load
2019-08-06 21:13:50.018290 | fedora-29 | SanityIgnoreParser.instance = SanityIgnoreParser(args)
2019-08-06 21:13:50.018450 | fedora-29 | File "/home/zuul/src/github.com/ansible/ansible/test/runner/lib/sanity/__init__.py", line 278, in __init__
2019-08-06 21:13:50.018574 | fedora-29 | paths_by_test[test.name] = set(target.path for target in test.filter_targets(test_targets))
2019-08-06 21:13:50.018711 | fedora-29 | File "/home/zuul/src/github.com/ansible/ansible/test/runner/lib/sanity/import.py", line 62, in filter_targets
2019-08-06 21:13:50.018806 | fedora-29 | return [target for target in targets if os.path.splitext(target.path)[1] == '.py' and
2019-08-06 21:13:50.018908 | fedora-29 | File "/home/zuul/src/github.com/ansible/ansible/test/runner/lib/sanity/import.py", line 63, in <listcomp>
2019-08-06 21:13:50.019077 | fedora-29 | (is_subdir(target.path, data_context().content.module_path) or is_subdir(target.path, data_context().content.module_utils_path))]
2019-08-06 21:13:50.019178 | fedora-29 | File "/home/zuul/src/github.com/ansible/ansible/test/runner/lib/util.py", line 782, in is_subdir
2019-08-06 21:13:50.019225 | fedora-29 | if not path.endswith(os.sep):
2019-08-06 21:13:50.019291 | fedora-29 | AttributeError: 'NoneType' object has no attribute 'endswith'
```
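The failure above is `data_context().content.module_path` returning `None` for a collection layout without a modules directory, which then hits `path.endswith`. A minimal None-tolerant sketch of `is_subdir` follows; this is illustrative only, and the actual fix in the linked PR may differ:

```python
import os

def is_subdir(candidate, path):
    """None-tolerant subdirectory check: a missing (None) base path can
    never contain anything, so return False instead of raising."""
    if candidate is None or path is None:
        return False
    if not path.endswith(os.sep):
        path += os.sep
    if not candidate.endswith(os.sep):
        candidate += os.sep
    return candidate.startswith(path)

print(is_subdir('plugins/modules/foo.py', None))  # False: no module path configured
print(is_subdir('lib/ansible/modules/ping.py', 'lib/ansible/modules'))  # True
```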
|
https://github.com/ansible/ansible/issues/60166
|
https://github.com/ansible/ansible/pull/60169
|
233efe08862094d9015320570fa419b7dcb0cd49
|
9da5908afba1453040b32747a6707bd373ad0ff2
| 2019-08-06T21:22:54Z |
python
| 2019-08-06T23:07:53Z |
test/lib/ansible_test/_internal/provider/layout/ansible.py
|
"""Layout provider for Ansible source."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import re
from ... import types as t
from ...util import (
ANSIBLE_TEST_ROOT,
)
from . import (
ContentLayout,
LayoutProvider,
)
class AnsibleLayout(LayoutProvider):
"""Layout provider for Ansible source."""
@staticmethod
def is_content_root(path): # type: (str) -> bool
"""Return True if the given path is a content root for this provider."""
return os.path.exists(os.path.join(path, 'setup.py')) and os.path.exists(os.path.join(path, 'bin/ansible-test'))
def create(self, root, paths): # type: (str, t.List[str]) -> ContentLayout
"""Create a Layout using the given root and paths."""
plugin_types = sorted(set(p.split('/')[3] for p in paths if re.search(r'^lib/ansible/plugins/[^/]+/', p)))
provider_types = sorted(set(p.split('/')[5] for p in paths if re.search(r'^test/lib/ansible_test/_internal/provider/[^/]+/', p)))
plugin_paths = dict((p, os.path.join('lib/ansible/plugins', p)) for p in plugin_types)
provider_paths = dict((p, os.path.join(ANSIBLE_TEST_ROOT, '_internal/provider', p)) for p in provider_types)
plugin_paths.update(dict(
modules='lib/ansible/modules',
module_utils='lib/ansible/module_utils',
))
return ContentLayout(root,
paths,
plugin_paths=plugin_paths,
provider_paths=provider_paths,
code_path='lib/ansible',
util_path='test/utils',
unit_path='test/units',
unit_module_path='test/units/modules',
unit_module_utils_path='test/units/module_utils',
integration_path='test/integration',
)
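The plugin-type discovery in `create` above can be exercised standalone; the file list below is made up for illustration:

```python
import os
import re

# Hypothetical repository file list; mirrors how AnsibleLayout.create
# derives plugin types from paths under lib/ansible/plugins/.
paths = [
    'lib/ansible/plugins/action/copy.py',
    'lib/ansible/plugins/lookup/env.py',
    'lib/ansible/plugins/lookup/file.py',
    'lib/ansible/modules/ping.py',  # not under plugins/, so ignored here
]
plugin_types = sorted(set(p.split('/')[3] for p in paths if re.search(r'^lib/ansible/plugins/[^/]+/', p)))
plugin_paths = dict((p, os.path.join('lib/ansible/plugins', p)) for p in plugin_types)
print(plugin_types)            # ['action', 'lookup']
print(plugin_paths['lookup'])  # lib/ansible/plugins/lookup
```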
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 60,166 |
ansible-test sanity AttributeError: 'NoneType' object has no attribute 'endswith' after test/sanity/ignore-2.9.txt
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
When testing ansible-test sanity with a collection, I see the following traceback. At first I thought it was caused by a missing file, but adding the file does not fix the issue.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-test
##### ANSIBLE VERSION
923e21836b1a4fb91aa6e93463efe0aea4022144 Move plugin loader playbook dir additions back to Playbook instead of PlaybookCLI (#59557)
##### CONFIGURATION
https://logs.zuul.ansible.com/34/34/89748bcccaba051183f1e8819502c7398b4d2206/check/ansible-test-sanity/c8e84e5/zuul-info/inventory.yaml
##### OS / ENVIRONMENT
fedora 29 from zuul.ansible.com
##### STEPS TO REPRODUCE
Run ansible-test with the following:
source ~/venv/bin/activate; /home/zuul/src/github.com/ansible/ansible/bin/ansible-test sanity -vv --requirements --python 3.6 --lint
from
/home/zuul/src/github.com/ansible-network/sandbox/ansible_collections/ansible_network/sandbox
example logs: https://logs.zuul.ansible.com/34/34/89748bcccaba051183f1e8819502c7398b4d2206/check/ansible-test-sanity/c8e84e5/ara-report/result/7b2fad99-9bc4-41bc-a264-b71819b0d659/
##### EXPECTED RESULTS
ansible-test to pass
##### ACTUAL RESULTS
https://logs.zuul.ansible.com/34/34/89748bcccaba051183f1e8819502c7398b4d2206/check/ansible-test-sanity/c8e84e5/job-output.html#l1031
```
2019-08-06 21:13:50.017154 | fedora-29 | Traceback (most recent call last):
2019-08-06 21:13:50.017311 | fedora-29 | File "/home/zuul/src/github.com/ansible/ansible/bin/ansible-test", line 15, in <module>
2019-08-06 21:13:50.017361 | fedora-29 | lib.cli.main()
2019-08-06 21:13:50.017454 | fedora-29 | File "/home/zuul/src/github.com/ansible/ansible/test/runner/lib/cli.py", line 125, in main
2019-08-06 21:13:50.017540 | fedora-29 | args.func(config)
2019-08-06 21:13:50.017686 | fedora-29 | File "/home/zuul/src/github.com/ansible/ansible/test/runner/lib/sanity/__init__.py", line 156, in command_sanity
2019-08-06 21:13:50.017748 | fedora-29 | settings = test.load_processor(args)
2019-08-06 21:13:50.017853 | fedora-29 | File "/home/zuul/src/github.com/ansible/ansible/test/runner/lib/sanity/__init__.py", line 803, in load_processor
2019-08-06 21:13:50.017911 | fedora-29 | return SanityIgnoreProcessor(args, self, None)
2019-08-06 21:13:50.018011 | fedora-29 | File "/home/zuul/src/github.com/ansible/ansible/test/runner/lib/sanity/__init__.py", line 448, in __init__
2019-08-06 21:13:50.018089 | fedora-29 | self.parser = SanityIgnoreParser.load(args)
2019-08-06 21:13:50.018193 | fedora-29 | File "/home/zuul/src/github.com/ansible/ansible/test/runner/lib/sanity/__init__.py", line 426, in load
2019-08-06 21:13:50.018290 | fedora-29 | SanityIgnoreParser.instance = SanityIgnoreParser(args)
2019-08-06 21:13:50.018450 | fedora-29 | File "/home/zuul/src/github.com/ansible/ansible/test/runner/lib/sanity/__init__.py", line 278, in __init__
2019-08-06 21:13:50.018574 | fedora-29 | paths_by_test[test.name] = set(target.path for target in test.filter_targets(test_targets))
2019-08-06 21:13:50.018711 | fedora-29 | File "/home/zuul/src/github.com/ansible/ansible/test/runner/lib/sanity/import.py", line 62, in filter_targets
2019-08-06 21:13:50.018806 | fedora-29 | return [target for target in targets if os.path.splitext(target.path)[1] == '.py' and
2019-08-06 21:13:50.018908 | fedora-29 | File "/home/zuul/src/github.com/ansible/ansible/test/runner/lib/sanity/import.py", line 63, in <listcomp>
2019-08-06 21:13:50.019077 | fedora-29 | (is_subdir(target.path, data_context().content.module_path) or is_subdir(target.path, data_context().content.module_utils_path))]
2019-08-06 21:13:50.019178 | fedora-29 | File "/home/zuul/src/github.com/ansible/ansible/test/runner/lib/util.py", line 782, in is_subdir
2019-08-06 21:13:50.019225 | fedora-29 | if not path.endswith(os.sep):
2019-08-06 21:13:50.019291 | fedora-29 | AttributeError: 'NoneType' object has no attribute 'endswith'
```
|
https://github.com/ansible/ansible/issues/60166
|
https://github.com/ansible/ansible/pull/60169
|
233efe08862094d9015320570fa419b7dcb0cd49
|
9da5908afba1453040b32747a6707bd373ad0ff2
| 2019-08-06T21:22:54Z |
python
| 2019-08-06T23:07:53Z |
test/lib/ansible_test/_internal/provider/layout/collection.py
|
"""Layout provider for Ansible collections."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import re
from ... import types as t
from . import (
ContentLayout,
LayoutProvider,
CollectionDetail,
)
class CollectionLayout(LayoutProvider):
"""Layout provider for Ansible collections."""
__module_path = 'plugins/modules'
__unit_path = 'test/unit'
@staticmethod
def is_content_root(path): # type: (str) -> bool
"""Return True if the given path is a content root for this provider."""
if os.path.basename(os.path.dirname(os.path.dirname(path))) == 'ansible_collections':
return True
return False
def create(self, root, paths): # type: (str, t.List[str]) -> ContentLayout
"""Create a Layout using the given root and paths."""
plugin_types = sorted(set(p.split('/')[1] for p in paths if re.search(r'^plugins/[^/]+/', p)))
provider_types = sorted(set(p.split('/')[2] for p in paths if re.search(r'^test/provider/[^/]+/', p)))
plugin_paths = dict((p, os.path.join('plugins', p)) for p in plugin_types)
provider_paths = dict((p, os.path.join('test/provider', p)) for p in provider_types)
collection_root = os.path.dirname(os.path.dirname(root))
collection_dir = os.path.relpath(root, collection_root)
collection_namespace, collection_name = collection_dir.split(os.sep)
collection_prefix = '%s.%s.' % (collection_namespace, collection_name)
collection_root = os.path.dirname(collection_root)
return ContentLayout(root,
paths,
plugin_paths=plugin_paths,
provider_paths=provider_paths,
code_path='',
collection=CollectionDetail(
name=collection_name,
namespace=collection_namespace,
root=collection_root,
prefix=collection_prefix,
),
util_path='test/util',
unit_path='test/unit',
unit_module_path='test/unit/plugins/modules',
unit_module_utils_path='test/unit/plugins/module_utils',
integration_path='test/integration',
)
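The path arithmetic in `create` above can be traced with a hypothetical collection checkout path (the path itself is an assumption for illustration):

```python
import os

# Mirrors how CollectionLayout.create splits a collection root into
# namespace, name, dotted prefix and the directory above ansible_collections.
root = '/home/user/src/ansible_collections/my_ns/my_coll'
collection_root = os.path.dirname(os.path.dirname(root))  # .../ansible_collections
collection_dir = os.path.relpath(root, collection_root)   # my_ns/my_coll
namespace, name = collection_dir.split(os.sep)
prefix = '%s.%s.' % (namespace, name)
collection_root = os.path.dirname(collection_root)        # parent of ansible_collections
print(namespace, name)    # my_ns my_coll
print(prefix)             # my_ns.my_coll.
print(collection_root)    # /home/user/src
```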
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,361 |
VMware: vmware_export_ovf timeout when exporting VM with large disk
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
If the target VM has a large disk file, exporting to OVF times out after 10 minutes.
A "timeout" module parameter would let the user set the timeout value for different scenarios.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_export_ovf
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.1
config file = /root/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15rc1 (default, Nov 12 2018, 14:31:15) [GCC 7.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
VM can be exported.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Timeout occurred.
<!--- Paste verbatim command output between quotes -->
```paste below
```
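The requested change could be sketched as an extra entry in the module's `argument_spec`. The parameter name `download_timeout` and its 600-second default here are assumptions for illustration, not the merged implementation:

```python
# Subset of the module's argument_spec, extended with a user-settable
# timeout; the name and default below are assumptions, not the real fix.
argument_spec = dict(
    name=dict(type='str'),
    uuid=dict(type='str'),
    export_dir=dict(type='str'),
    export_with_images=dict(type='bool', default=False),
    download_timeout=dict(type='int', default=600),  # seconds instead of a fixed 10-minute limit
)
print(argument_spec['download_timeout']['default'])  # 600
```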
|
https://github.com/ansible/ansible/issues/59361
|
https://github.com/ansible/ansible/pull/60062
|
279617a94ebf2fbca2c548d1f2fa776b7b261d5a
|
d5bff7a87f0d257cc71310e3b3441f1b7628d3ce
| 2019-07-22T02:00:14Z |
python
| 2019-08-07T09:05:42Z |
lib/ansible/modules/cloud/vmware/vmware_export_ovf.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2018, Ansible Project
# Copyright: (c) 2018, Diane Wang <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: vmware_export_ovf
short_description: Exports a VMware virtual machine to an OVF file, device files and a manifest file
description: >
This module can be used to export a VMware virtual machine to OVF template from vCenter server or ESXi host.
version_added: '2.8'
author:
- Diane Wang (@Tomorrow9) <[email protected]>
requirements:
- python >= 2.6
- PyVmomi
notes: []
options:
name:
description:
- Name of the virtual machine to export.
- This is a required parameter, if parameter C(uuid) or C(moid) is not supplied.
type: str
uuid:
description:
- Uuid of the virtual machine to export.
- This is a required parameter, if parameter C(name) or C(moid) is not supplied.
type: str
moid:
description:
- Managed Object ID of the instance to manage if known, this is a unique identifier only within a single vCenter instance.
- This is required if C(name) or C(uuid) is not supplied.
version_added: '2.9'
type: str
datacenter:
default: ha-datacenter
description:
- Datacenter name of the virtual machine to export.
- This parameter is case sensitive.
type: str
folder:
description:
- Destination folder, absolute path to find the specified guest.
- The folder should include the datacenter. ESX's datacenter is ha-datacenter.
- This parameter is case sensitive.
- 'If multiple machines are found with the same name, this parameter is used to identify
uniqueness of the virtual machine. version_added 2.5'
- 'Examples:'
- ' folder: /ha-datacenter/vm'
- ' folder: ha-datacenter/vm'
- ' folder: /datacenter1/vm'
- ' folder: datacenter1/vm'
- ' folder: /datacenter1/vm/folder1'
- ' folder: datacenter1/vm/folder1'
- ' folder: /folder1/datacenter1/vm'
- ' folder: folder1/datacenter1/vm'
- ' folder: /folder1/datacenter1/vm/folder2'
type: str
export_dir:
description:
- Absolute path to place the exported files on the server running this task, must have write permission.
- If the folder does not exist, it will be created; a subfolder named after the VM will also be created under this path.
required: yes
type: str
export_with_images:
default: false
description:
- Export an ISO image of the media mounted on the CD/DVD Drive within the virtual machine.
type: bool
extends_documentation_fragment: vmware.documentation
'''
EXAMPLES = r'''
- vmware_export_ovf:
validate_certs: false
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
name: '{{ vm_name }}'
export_with_images: true
export_dir: /path/to/ovf_template/
delegate_to: localhost
'''
RETURN = r'''
instance:
description: list of the exported files; when exported from a vCenter server, device files are not named with the VM name
returned: always
type: dict
sample: None
'''
import os
import hashlib
from time import sleep
from threading import Thread
from ansible.module_utils.urls import open_url
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_text
from ansible.module_utils.vmware import vmware_argument_spec, PyVmomi
try:
from pyVmomi import vim
from pyVim import connect
except ImportError:
pass
class LeaseProgressUpdater(Thread):
def __init__(self, http_nfc_lease, update_interval):
Thread.__init__(self)
self._running = True
self.httpNfcLease = http_nfc_lease
self.updateInterval = update_interval
self.progressPercent = 0
def set_progress_percent(self, progress_percent):
self.progressPercent = progress_percent
def stop(self):
self._running = False
def run(self):
while self._running:
try:
if self.httpNfcLease.state == vim.HttpNfcLease.State.done:
return
self.httpNfcLease.HttpNfcLeaseProgress(self.progressPercent)
sleep_sec = 0
while True:
if self.httpNfcLease.state == vim.HttpNfcLease.State.done or self.httpNfcLease.state == vim.HttpNfcLease.State.error:
return
sleep_sec += 1
sleep(1)
if sleep_sec == self.updateInterval:
break
except Exception:
return
class VMwareExportVmOvf(PyVmomi):
def __init__(self, module):
super(VMwareExportVmOvf, self).__init__(module)
self.mf_file = ''
self.ovf_dir = ''
# set read device content chunk size to 2 MB
self.chunk_size = 2 * 2 ** 20
# set lease progress update interval to 15 seconds
self.lease_interval = 15
self.facts = {'device_files': []}
def create_export_dir(self, vm_obj):
self.ovf_dir = os.path.join(self.params['export_dir'], vm_obj.name)
if not os.path.exists(self.ovf_dir):
try:
os.makedirs(self.ovf_dir)
except OSError as err:
self.module.fail_json(msg='Exception caught when create folder %s, with error %s'
% (self.ovf_dir, to_text(err)))
self.mf_file = os.path.join(self.ovf_dir, vm_obj.name + '.mf')
def download_device_files(self, headers, temp_target_disk, device_url, lease_updater, total_bytes_written,
total_bytes_to_write):
mf_content = 'SHA256(' + os.path.basename(temp_target_disk) + ')= '
sha256_hash = hashlib.sha256()
with open(self.mf_file, 'a') as mf_handle:
with open(temp_target_disk, 'wb') as handle:
try:
response = open_url(device_url, headers=headers, validate_certs=False)
except Exception as err:
lease_updater.httpNfcLease.HttpNfcLeaseAbort()
lease_updater.stop()
self.module.fail_json(msg='Exception caught when getting %s, %s' % (device_url, to_text(err)))
if not response:
lease_updater.httpNfcLease.HttpNfcLeaseAbort()
lease_updater.stop()
self.module.fail_json(msg='Getting %s failed' % device_url)
if response.getcode() >= 400:
lease_updater.httpNfcLease.HttpNfcLeaseAbort()
lease_updater.stop()
self.module.fail_json(msg='Getting %s return code %d' % (device_url, response.getcode()))
current_bytes_written = 0
block = response.read(self.chunk_size)
while block:
handle.write(block)
sha256_hash.update(block)
handle.flush()
os.fsync(handle.fileno())
current_bytes_written += len(block)
block = response.read(self.chunk_size)
written_percent = ((current_bytes_written + total_bytes_written) * 100) / total_bytes_to_write
lease_updater.progressPercent = int(written_percent)
mf_handle.write(mf_content + sha256_hash.hexdigest() + '\n')
self.facts['device_files'].append(temp_target_disk)
return current_bytes_written
def export_to_ovf_files(self, vm_obj):
self.create_export_dir(vm_obj=vm_obj)
export_with_iso = False
if 'export_with_images' in self.params and self.params['export_with_images']:
export_with_iso = True
ovf_files = []
# get http nfc lease firstly
http_nfc_lease = vm_obj.ExportVm()
# create a thread to track file download progress
lease_updater = LeaseProgressUpdater(http_nfc_lease, self.lease_interval)
total_bytes_written = 0
# total storage space occupied by the virtual machine across all datastores
total_bytes_to_write = vm_obj.summary.storage.unshared
# new deployed VM with no OS installed
if total_bytes_to_write == 0:
total_bytes_to_write = vm_obj.summary.storage.committed
if total_bytes_to_write == 0:
http_nfc_lease.HttpNfcLeaseAbort()
self.module.fail_json(msg='Total storage space occupied by the VM is 0.')
headers = {'Accept': 'application/x-vnd.vmware-streamVmdk'}
cookies = connect.GetStub().cookie
if cookies:
headers['Cookie'] = cookies
lease_updater.start()
try:
while True:
if http_nfc_lease.state == vim.HttpNfcLease.State.ready:
for deviceUrl in http_nfc_lease.info.deviceUrl:
file_download = False
if deviceUrl.targetId and deviceUrl.disk:
file_download = True
elif deviceUrl.url.split('/')[-1].split('.')[-1] == 'iso':
if export_with_iso:
file_download = True
elif deviceUrl.url.split('/')[-1].split('.')[-1] == 'nvram':
if self.host_version_at_least(version=(6, 7, 0), vm_obj=vm_obj):
file_download = True
else:
continue
device_file_name = deviceUrl.url.split('/')[-1]
# device file named disk-0.iso, disk-1.vmdk, disk-2.vmdk, replace 'disk' with vm name
if device_file_name.split('.')[0][0:5] == "disk-":
device_file_name = device_file_name.replace('disk', vm_obj.name)
temp_target_disk = os.path.join(self.ovf_dir, device_file_name)
device_url = deviceUrl.url
# if export from ESXi host, replace * with hostname in url
# e.g., https://*/ha-nfc/5289bf27-da99-7c0e-3978-8853555deb8c/disk-1.vmdk
if '*' in device_url:
device_url = device_url.replace('*', self.params['hostname'])
if file_download:
current_bytes_written = self.download_device_files(headers=headers,
temp_target_disk=temp_target_disk,
device_url=device_url,
lease_updater=lease_updater,
total_bytes_written=total_bytes_written,
total_bytes_to_write=total_bytes_to_write)
total_bytes_written += current_bytes_written
ovf_file = vim.OvfManager.OvfFile()
ovf_file.deviceId = deviceUrl.key
ovf_file.path = device_file_name
ovf_file.size = current_bytes_written
ovf_files.append(ovf_file)
break
elif http_nfc_lease.state == vim.HttpNfcLease.State.initializing:
sleep(2)
continue
elif http_nfc_lease.state == vim.HttpNfcLease.State.error:
lease_updater.stop()
self.module.fail_json(msg='Get HTTP NFC lease error %s.' % http_nfc_lease.state.error[0].fault)
# generate ovf file
ovf_manager = self.content.ovfManager
ovf_descriptor_name = vm_obj.name
ovf_parameters = vim.OvfManager.CreateDescriptorParams()
ovf_parameters.name = ovf_descriptor_name
ovf_parameters.ovfFiles = ovf_files
vm_descriptor_result = ovf_manager.CreateDescriptor(obj=vm_obj, cdp=ovf_parameters)
if vm_descriptor_result.error:
http_nfc_lease.HttpNfcLeaseAbort()
lease_updater.stop()
self.module.fail_json(msg='Create VM descriptor file error %s.' % vm_descriptor_result.error)
else:
vm_descriptor = vm_descriptor_result.ovfDescriptor
ovf_descriptor_path = os.path.join(self.ovf_dir, ovf_descriptor_name + '.ovf')
sha256_hash = hashlib.sha256()
with open(self.mf_file, 'a') as mf_handle:
with open(ovf_descriptor_path, 'wb') as handle:
handle.write(vm_descriptor)
sha256_hash.update(vm_descriptor)
mf_handle.write('SHA256(' + os.path.basename(ovf_descriptor_path) + ')= ' + sha256_hash.hexdigest() + '\n')
http_nfc_lease.HttpNfcLeaseProgress(100)
# self.facts = http_nfc_lease.HttpNfcLeaseGetManifest()
http_nfc_lease.HttpNfcLeaseComplete()
lease_updater.stop()
self.facts.update({'manifest': self.mf_file, 'ovf_file': ovf_descriptor_path})
except Exception as err:
kwargs = {
'changed': False,
'failed': True,
'msg': to_text(err),
}
http_nfc_lease.HttpNfcLeaseAbort()
lease_updater.stop()
return kwargs
return {'changed': True, 'failed': False, 'instance': self.facts}
def main():
argument_spec = vmware_argument_spec()
argument_spec.update(
name=dict(type='str'),
uuid=dict(type='str'),
moid=dict(type='str'),
folder=dict(type='str'),
datacenter=dict(type='str', default='ha-datacenter'),
export_dir=dict(type='str'),
export_with_images=dict(type='bool', default=False),
)
module = AnsibleModule(argument_spec=argument_spec,
supports_check_mode=True,
required_one_of=[
['name', 'uuid', 'moid'],
],
)
pyv = VMwareExportVmOvf(module)
vm = pyv.get_vm()
if vm:
vm_facts = pyv.gather_facts(vm)
vm_power_state = vm_facts['hw_power_status'].lower()
if vm_power_state != 'poweredoff':
module.fail_json(msg='VM state should be poweredoff to export')
results = pyv.export_to_ovf_files(vm_obj=vm)
else:
module.fail_json(msg='The specified virtual machine was not found')
module.exit_json(**results)
if __name__ == '__main__':
main()
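The chunked read plus running SHA-256 used by `download_device_files` above can be illustrated with a generic, filesystem-free sketch:

```python
import hashlib
import io

CHUNK = 2 * 2 ** 20  # 2 MiB, matching self.chunk_size in the module above

def copy_with_sha256(src, dst, chunk_size=CHUNK):
    """Copy src to dst chunk by chunk, hashing each chunk as it is written,
    so a huge disk image never needs to fit in memory at once."""
    digest = hashlib.sha256()
    block = src.read(chunk_size)
    while block:
        dst.write(block)
        digest.update(block)
        block = src.read(chunk_size)
    return digest.hexdigest()

data = b'x' * (5 * 2 ** 20)  # 5 MiB of dummy payload
out = io.BytesIO()
checksum = copy_with_sha256(io.BytesIO(data), out)
print(checksum == hashlib.sha256(data).hexdigest())  # True
```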
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,606 |
Remove UnsafeProxy
|
##### SUMMARY
There is no real reason that `UnsafeProxy` needs to exist.
We need to add unsafe bytes support, and doing so within `UnsafeProxy` overcomplicates matters.
`wrap_var` should effectively take over the work that `UnsafeProxy` is doing, and then just rely on `wrap_var` directly using the proper `AnsibleUnsafe*` classes. Or callers can directly use `AnsibleUnsafe*` themselves when `wrap_var` isn't necessary.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
lib/ansible/utils/unsafe_proxy.py
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/59606
|
https://github.com/ansible/ansible/pull/59711
|
e80f8048ee027ab0c7c8b5912fb6c69c44fb877a
|
164881d871964aa64e0f911d03ae270acbad253c
| 2019-07-25T20:15:19Z |
python
| 2019-08-07T15:39:01Z |
lib/ansible/executor/task_executor.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import re
import pty
import time
import json
import subprocess
import sys
import termios
import traceback
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleParserError, AnsibleUndefinedVariable, AnsibleConnectionFailure, AnsibleActionFail, AnsibleActionSkip
from ansible.executor.task_result import TaskResult
from ansible.executor.module_common import get_action_args_with_defaults
from ansible.module_utils.six import iteritems, string_types, binary_type
from ansible.module_utils.six.moves import xrange
from ansible.module_utils._text import to_text, to_native
from ansible.module_utils.connection import write_to_file_descriptor
from ansible.playbook.conditional import Conditional
from ansible.playbook.task import Task
from ansible.plugins.loader import become_loader, cliconf_loader, connection_loader, httpapi_loader, netconf_loader, terminal_loader
from ansible.template import Templar
from ansible.utils.collection_loader import AnsibleCollectionLoader
from ansible.utils.listify import listify_lookup_plugin_terms
from ansible.utils.unsafe_proxy import UnsafeProxy, wrap_var, AnsibleUnsafe
from ansible.vars.clean import namespace_facts, clean_facts
from ansible.utils.display import Display
from ansible.utils.vars import combine_vars, isidentifier
display = Display()
__all__ = ['TaskExecutor']
def remove_omit(task_args, omit_token):
'''
Recursively remove any arg whose value equals the ``omit_token``;
this is done recursively now that argument specs can contain suboptions
'''
if not isinstance(task_args, dict):
return task_args
new_args = {}
for i in iteritems(task_args):
if i[1] == omit_token:
continue
elif isinstance(i[1], dict):
new_args[i[0]] = remove_omit(i[1], omit_token)
elif isinstance(i[1], list):
new_args[i[0]] = [remove_omit(v, omit_token) for v in i[1]]
else:
new_args[i[0]] = i[1]
return new_args
class TaskExecutor:
'''
This is the main worker class for the executor pipeline, which
handles loading an action plugin to actually dispatch the task to
a given host. This class roughly corresponds to the old Runner()
class.
'''
# Modules that we optimize by squashing loop items into a single call to
# the module
SQUASH_ACTIONS = frozenset(C.DEFAULT_SQUASH_ACTIONS)
def __init__(self, host, task, job_vars, play_context, new_stdin, loader, shared_loader_obj, final_q):
self._host = host
self._task = task
self._job_vars = job_vars
self._play_context = play_context
self._new_stdin = new_stdin
self._loader = loader
self._shared_loader_obj = shared_loader_obj
self._connection = None
self._final_q = final_q
self._loop_eval_error = None
self._task.squash()
def run(self):
'''
The main executor entrypoint, where we determine if the specified
task requires looping and either runs the task with self._run_loop()
or self._execute(). After that, the returned results are parsed and
returned as a dict.
'''
display.debug("in run() - task %s" % self._task._uuid)
try:
try:
items = self._get_loop_items()
except AnsibleUndefinedVariable as e:
# save the error raised here for use later
items = None
self._loop_eval_error = e
if items is not None:
if len(items) > 0:
item_results = self._run_loop(items)
# create the overall result item
res = dict(results=item_results)
# loop through the item results, and set the global changed/failed result flags based on any item.
for item in item_results:
if 'changed' in item and item['changed'] and not res.get('changed'):
res['changed'] = True
if 'failed' in item and item['failed']:
item_ignore = item.pop('_ansible_ignore_errors')
if not res.get('failed'):
res['failed'] = True
res['msg'] = 'One or more items failed'
self._task.ignore_errors = item_ignore
elif self._task.ignore_errors and not item_ignore:
self._task.ignore_errors = item_ignore
# ensure to accumulate these
for array in ['warnings', 'deprecations']:
if array in item and item[array]:
if array not in res:
res[array] = []
if not isinstance(item[array], list):
item[array] = [item[array]]
res[array] = res[array] + item[array]
del item[array]
if not res.get('failed', False):
res['msg'] = 'All items completed'
else:
res = dict(changed=False, skipped=True, skipped_reason='No items in the list', results=[])
else:
display.debug("calling self._execute()")
res = self._execute()
display.debug("_execute() done")
# make sure changed is set in the result, if it's not present
if 'changed' not in res:
res['changed'] = False
def _clean_res(res, errors='surrogate_or_strict'):
if isinstance(res, binary_type):
return to_text(res, errors=errors)
elif isinstance(res, dict):
for k in res:
try:
res[k] = _clean_res(res[k], errors=errors)
except UnicodeError:
if k == 'diff':
# If this is a diff, substitute a replacement character if the value
# is undecodable as utf8. (Fix #21804)
display.warning("We were unable to decode all characters in the module return data."
" Replaced some in an effort to return as much as possible")
res[k] = _clean_res(res[k], errors='surrogate_then_replace')
else:
raise
elif isinstance(res, list):
for idx, item in enumerate(res):
res[idx] = _clean_res(item, errors=errors)
return res
display.debug("dumping result to json")
res = _clean_res(res)
display.debug("done dumping result, returning")
return res
except AnsibleError as e:
return dict(failed=True, msg=wrap_var(to_text(e, nonstring='simplerepr')), _ansible_no_log=self._play_context.no_log)
except Exception as e:
return dict(failed=True, msg='Unexpected failure during module execution.', exception=to_text(traceback.format_exc()),
stdout='', _ansible_no_log=self._play_context.no_log)
finally:
try:
self._connection.close()
except AttributeError:
pass
except Exception as e:
display.debug(u"error closing connection: %s" % to_text(e))
def _get_loop_items(self):
'''
Loads a lookup plugin to handle the with_* portion of a task (if specified),
and returns the items result.
'''
# save the play context variables to a temporary dictionary,
# so that we can modify the job vars without doing a full copy
# and later restore them to avoid modifying things too early
play_context_vars = dict()
self._play_context.update_vars(play_context_vars)
old_vars = dict()
for k in play_context_vars:
if k in self._job_vars:
old_vars[k] = self._job_vars[k]
self._job_vars[k] = play_context_vars[k]
# get search path for this task to pass to lookup plugins
self._job_vars['ansible_search_path'] = self._task.get_search_path()
# ensure basedir is always in (dwim already searches here but we need to display it)
if self._loader.get_basedir() not in self._job_vars['ansible_search_path']:
self._job_vars['ansible_search_path'].append(self._loader.get_basedir())
templar = Templar(loader=self._loader, shared_loader_obj=self._shared_loader_obj, variables=self._job_vars)
items = None
loop_cache = self._job_vars.get('_ansible_loop_cache')
if loop_cache is not None:
# _ansible_loop_cache may be set in `get_vars` when calculating `delegate_to`
# to avoid reprocessing the loop
items = loop_cache
elif self._task.loop_with:
if self._task.loop_with in self._shared_loader_obj.lookup_loader:
fail = True
if self._task.loop_with == 'first_found':
# first_found loops are special. If the item is undefined then we want to fall through to the next value rather than failing.
fail = False
loop_terms = listify_lookup_plugin_terms(terms=self._task.loop, templar=templar, loader=self._loader, fail_on_undefined=fail,
convert_bare=False)
if not fail:
loop_terms = [t for t in loop_terms if not templar.is_template(t)]
# get lookup
mylookup = self._shared_loader_obj.lookup_loader.get(self._task.loop_with, loader=self._loader, templar=templar)
# give lookup task 'context' for subdir (mostly needed for first_found)
for subdir in ['template', 'var', 'file']: # TODO: move this to constants?
if subdir in self._task.action:
break
setattr(mylookup, '_subdir', subdir + 's')
# run lookup
items = mylookup.run(terms=loop_terms, variables=self._job_vars, wantlist=True)
else:
raise AnsibleError("Unexpected failure in finding the lookup named '%s' in the available lookup plugins" % self._task.loop_with)
elif self._task.loop is not None:
items = templar.template(self._task.loop)
if not isinstance(items, list):
raise AnsibleError(
"Invalid data passed to 'loop', it requires a list, got this instead: %s."
" Hint: If you passed a list/dict of just one element,"
" try adding wantlist=True to your lookup invocation or use q/query instead of lookup." % items
)
# now we restore any old job variables that may have been modified,
# and delete them if they were in the play context vars but not in
# the old variables dictionary
for k in play_context_vars:
if k in old_vars:
self._job_vars[k] = old_vars[k]
else:
del self._job_vars[k]
if items:
for idx, item in enumerate(items):
if item is not None and not isinstance(item, AnsibleUnsafe):
items[idx] = UnsafeProxy(item)
return items
def _run_loop(self, items):
'''
Runs the task with the loop items specified and collates the result
into an array named 'results' which is inserted into the final result
along with the item for which the loop ran.
'''
results = []
# make copies of the job vars and task so we can add the item to
# the variables and re-validate the task with the item variable
# task_vars = self._job_vars.copy()
task_vars = self._job_vars
loop_var = 'item'
index_var = None
label = None
loop_pause = 0
extended = False
templar = Templar(loader=self._loader, shared_loader_obj=self._shared_loader_obj, variables=self._job_vars)
# FIXME: move this to the object itself to allow post_validate to take care of templating (loop_control.post_validate)
if self._task.loop_control:
loop_var = templar.template(self._task.loop_control.loop_var)
index_var = templar.template(self._task.loop_control.index_var)
loop_pause = templar.template(self._task.loop_control.pause)
extended = templar.template(self._task.loop_control.extended)
# This may be 'None', so it is templated below after we ensure a value and an item is assigned
label = self._task.loop_control.label
# ensure we always have a label
if label is None:
label = '{{' + loop_var + '}}'
if loop_var in task_vars:
display.warning(u"The loop variable '%s' is already in use. "
u"You should set the `loop_var` value in the `loop_control` option for the task"
u" to something else to avoid variable collisions and unexpected behavior." % loop_var)
ran_once = False
if self._task.loop_with:
# Only squash with 'with_:' not with the 'loop:', 'magic' squashing can be removed once with_ loops are deprecated
items = self._squash_items(items, loop_var, task_vars)
no_log = False
items_len = len(items)
for item_index, item in enumerate(items):
task_vars['ansible_loop_var'] = loop_var
task_vars[loop_var] = item
if index_var:
task_vars['ansible_index_var'] = index_var
task_vars[index_var] = item_index
if extended:
task_vars['ansible_loop'] = {
'allitems': items,
'index': item_index + 1,
'index0': item_index,
'first': item_index == 0,
'last': item_index + 1 == items_len,
'length': items_len,
'revindex': items_len - item_index,
'revindex0': items_len - item_index - 1,
}
try:
task_vars['ansible_loop']['nextitem'] = items[item_index + 1]
except IndexError:
pass
if item_index - 1 >= 0:
task_vars['ansible_loop']['previtem'] = items[item_index - 1]
# Update template vars to reflect current loop iteration
templar.available_variables = task_vars
# pause between loop iterations
if loop_pause and ran_once:
try:
time.sleep(float(loop_pause))
except ValueError as e:
raise AnsibleError('Invalid pause value: %s, produced error: %s' % (loop_pause, to_native(e)))
else:
ran_once = True
try:
tmp_task = self._task.copy(exclude_parent=True, exclude_tasks=True)
tmp_task._parent = self._task._parent
tmp_play_context = self._play_context.copy()
except AnsibleParserError as e:
results.append(dict(failed=True, msg=to_text(e)))
continue
# now we swap the internal task and play context with their copies,
# execute, and swap them back so we can do the next iteration cleanly
(self._task, tmp_task) = (tmp_task, self._task)
(self._play_context, tmp_play_context) = (tmp_play_context, self._play_context)
res = self._execute(variables=task_vars)
task_fields = self._task.dump_attrs()
(self._task, tmp_task) = (tmp_task, self._task)
(self._play_context, tmp_play_context) = (tmp_play_context, self._play_context)
# update 'general no_log' based on specific no_log
no_log = no_log or tmp_task.no_log
# now update the result with the item info, and append the result
# to the list of results
res[loop_var] = item
res['ansible_loop_var'] = loop_var
if index_var:
res[index_var] = item_index
res['ansible_index_var'] = index_var
if extended:
res['ansible_loop'] = task_vars['ansible_loop']
res['_ansible_item_result'] = True
res['_ansible_ignore_errors'] = task_fields.get('ignore_errors')
# gets templated here unlike rest of loop_control fields, depends on loop_var above
try:
res['_ansible_item_label'] = templar.template(label, cache=False)
except AnsibleUndefinedVariable as e:
res.update({
'failed': True,
'msg': 'Failed to template loop_control.label: %s' % to_text(e)
})
self._final_q.put(
TaskResult(
self._host.name,
self._task._uuid,
res,
task_fields=task_fields,
),
block=False,
)
results.append(res)
del task_vars[loop_var]
# clear 'connection related' plugin variables for next iteration
if self._connection:
clear_plugins = {
'connection': self._connection._load_name,
'shell': self._connection._shell._load_name
}
if self._connection.become:
clear_plugins['become'] = self._connection.become._load_name
for plugin_type, plugin_name in iteritems(clear_plugins):
for var in C.config.get_plugin_vars(plugin_type, plugin_name):
if var in task_vars and var not in self._job_vars:
del task_vars[var]
self._task.no_log = no_log
return results
def _squash_items(self, items, loop_var, variables):
'''
Squash items down to a comma-separated list for certain modules which support it
(typically package management modules).
'''
name = None
try:
# _task.action could contain templatable strings (via action: and
# local_action:) Template it before comparing. If we don't end up
# optimizing it here, the templatable string might use template vars
# that aren't available until later (it could even use vars from the
# with_items loop) so don't make the templated string permanent yet.
templar = Templar(loader=self._loader, shared_loader_obj=self._shared_loader_obj, variables=variables)
task_action = self._task.action
if templar.is_template(task_action):
task_action = templar.template(task_action, fail_on_undefined=False)
if len(items) > 0 and task_action in self.SQUASH_ACTIONS:
if all(isinstance(o, string_types) for o in items):
final_items = []
found = None
for allowed in ['name', 'pkg', 'package']:
name = self._task.args.pop(allowed, None)
if name is not None:
found = allowed
break
# This gets the information to check whether the name field
# contains a template that we can squash for
template_no_item = template_with_item = None
if name:
if templar.is_template(name):
variables[loop_var] = '\0$'
template_no_item = templar.template(name, variables, cache=False)
variables[loop_var] = '\0@'
template_with_item = templar.template(name, variables, cache=False)
del variables[loop_var]
# Check if the user is doing some operation that doesn't take
# name/pkg or the name/pkg field doesn't have any variables
# and thus the items can't be squashed
if template_no_item != template_with_item:
if self._task.loop_with and self._task.loop_with not in ('items', 'list'):
value_text = "\"{{ query('%s', %r) }}\"" % (self._task.loop_with, self._task.loop)
else:
value_text = '%r' % self._task.loop
# Without knowing the data structure well, it's easiest to strip python2 unicode
# literals after stringifying
value_text = re.sub(r"\bu'", "'", value_text)
display.deprecated(
'Invoking "%s" only once while using a loop via squash_actions is deprecated. '
'Instead of using a loop to supply multiple items and specifying `%s: "%s"`, '
'please use `%s: %s` and remove the loop' % (self._task.action, found, name, found, value_text),
version='2.11'
)
for item in items:
variables[loop_var] = item
if self._task.evaluate_conditional(templar, variables):
new_item = templar.template(name, cache=False)
final_items.append(new_item)
self._task.args['name'] = final_items
# Wrap this in a list so that the calling function loop
# executes exactly once
return [final_items]
else:
# Restore the name parameter
self._task.args['name'] = name
# elif:
# Right now we only optimize single entries. In the future we
# could optimize more types:
# * lists can be squashed together
# * dicts could squash entries that match in all cases except the
# name or pkg field.
except Exception:
# Squashing is an optimization. If it fails for any reason,
# simply use the unoptimized list of items.
# Restore the name parameter
if name is not None:
self._task.args['name'] = name
return items
def _execute(self, variables=None):
'''
The primary workhorse of the executor system, this runs the task
on the specified host (which may be the delegated_to host) and handles
the retry/until and block rescue/always execution
'''
if variables is None:
variables = self._job_vars
templar = Templar(loader=self._loader, shared_loader_obj=self._shared_loader_obj, variables=variables)
context_validation_error = None
try:
# apply the given task's information to the connection info,
# which may override some fields already set by the play or
# the options specified on the command line
self._play_context = self._play_context.set_task_and_variable_override(task=self._task, variables=variables, templar=templar)
# fields set from the play/task may be based on variables, so we have to
# do the same kind of post validation step on it here before we use it.
self._play_context.post_validate(templar=templar)
# now that the play context is finalized, if the remote_addr is not set
# default to using the host's address field as the remote address
if not self._play_context.remote_addr:
self._play_context.remote_addr = self._host.address
# We also add "magic" variables back into the variables dict to make sure
# a certain subset of variables exist.
self._play_context.update_vars(variables)
# FIXME: update connection/shell plugin options
except AnsibleError as e:
# save the error, which we'll raise later if we don't end up
# skipping this task during the conditional evaluation step
context_validation_error = e
# Evaluate the conditional (if any) for this task, which we do before running
# the final task post-validation. We do this before the post validation due to
# the fact that the conditional may specify that the task be skipped due to a
# variable not being present which would otherwise cause validation to fail
try:
if not self._task.evaluate_conditional(templar, variables):
display.debug("when evaluation is False, skipping this task")
return dict(changed=False, skipped=True, skip_reason='Conditional result was False', _ansible_no_log=self._play_context.no_log)
except AnsibleError:
# loop error takes precedence
if self._loop_eval_error is not None:
raise self._loop_eval_error # pylint: disable=raising-bad-type
raise
# Not skipping, if we had loop error raised earlier we need to raise it now to halt the execution of this task
if self._loop_eval_error is not None:
raise self._loop_eval_error # pylint: disable=raising-bad-type
# if we ran into an error while setting up the PlayContext, raise it now
if context_validation_error is not None:
raise context_validation_error # pylint: disable=raising-bad-type
# if this task is a TaskInclude, we just return now with a success code so the
# main thread can expand the task list for the given host
if self._task.action in ('include', 'include_tasks'):
include_args = self._task.args.copy()
include_file = include_args.pop('_raw_params', None)
if not include_file:
return dict(failed=True, msg="No include file was specified to the include")
include_file = templar.template(include_file)
return dict(include=include_file, include_args=include_args)
# if this task is an IncludeRole, we just return now with a success code so the main thread can expand the task list for the given host
elif self._task.action == 'include_role':
include_args = self._task.args.copy()
return dict(include_args=include_args)
# Now we do final validation on the task, which sets all fields to their final values.
self._task.post_validate(templar=templar)
if '_variable_params' in self._task.args:
variable_params = self._task.args.pop('_variable_params')
if isinstance(variable_params, dict):
if C.INJECT_FACTS_AS_VARS:
display.warning("Using a variable for a task's 'args' is unsafe in some situations "
"(see https://docs.ansible.com/ansible/devel/reference_appendices/faq.html#argsplat-unsafe)")
variable_params.update(self._task.args)
self._task.args = variable_params
# get the connection and the handler for this execution
if (not self._connection or
not getattr(self._connection, 'connected', False) or
self._play_context.remote_addr != self._connection._play_context.remote_addr):
self._connection = self._get_connection(variables=variables, templar=templar)
else:
# if connection is reused, its _play_context is no longer valid and needs
# to be replaced with the one templated above, in case other data changed
self._connection._play_context = self._play_context
self._set_connection_options(variables, templar)
# get handler
self._handler = self._get_action_handler(connection=self._connection, templar=templar)
# Apply default params for action/module, if present
self._task.args = get_action_args_with_defaults(self._task.action, self._task.args, self._task.module_defaults, templar)
# And filter out any fields which were set to default(omit), and got the omit token value
omit_token = variables.get('omit')
if omit_token is not None:
self._task.args = remove_omit(self._task.args, omit_token)
# Read some values from the task, so that we can modify them if need be
if self._task.until:
retries = self._task.retries
if retries is None:
retries = 3
elif retries <= 0:
retries = 1
else:
retries += 1
else:
retries = 1
delay = self._task.delay
if delay < 0:
delay = 1
# make a copy of the job vars here, in case we need to update them
# with the registered variable value later on when testing conditions
vars_copy = variables.copy()
display.debug("starting attempt loop")
result = None
for attempt in xrange(1, retries + 1):
display.debug("running the handler")
try:
result = self._handler.run(task_vars=variables)
except AnsibleActionSkip as e:
return dict(skipped=True, msg=to_text(e))
except AnsibleActionFail as e:
return dict(failed=True, msg=to_text(e))
except AnsibleConnectionFailure as e:
return dict(unreachable=True, msg=to_text(e))
display.debug("handler run complete")
# preserve no log
result["_ansible_no_log"] = self._play_context.no_log
# update the local copy of vars with the registered value, if specified,
# or any facts which may have been generated by the module execution
if self._task.register:
if not isidentifier(self._task.register):
raise AnsibleError("Invalid variable name in 'register' specified: '%s'" % self._task.register)
vars_copy[self._task.register] = wrap_var(result)
if self._task.async_val > 0:
if self._task.poll > 0 and not result.get('skipped') and not result.get('failed'):
result = self._poll_async_result(result=result, templar=templar, task_vars=vars_copy)
# FIXME callback 'v2_runner_on_async_poll' here
# ensure no log is preserved
result["_ansible_no_log"] = self._play_context.no_log
# helper methods for use below in evaluating changed/failed_when
def _evaluate_changed_when_result(result):
if self._task.changed_when is not None and self._task.changed_when:
cond = Conditional(loader=self._loader)
cond.when = self._task.changed_when
result['changed'] = cond.evaluate_conditional(templar, vars_copy)
def _evaluate_failed_when_result(result):
if self._task.failed_when:
cond = Conditional(loader=self._loader)
cond.when = self._task.failed_when
failed_when_result = cond.evaluate_conditional(templar, vars_copy)
result['failed_when_result'] = result['failed'] = failed_when_result
else:
failed_when_result = False
return failed_when_result
if 'ansible_facts' in result:
if self._task.action in ('set_fact', 'include_vars'):
vars_copy.update(result['ansible_facts'])
else:
# TODO: cleaning of facts should eventually become part of taskresults instead of vars
af = wrap_var(result['ansible_facts'])
vars_copy.update(namespace_facts(af))
if C.INJECT_FACTS_AS_VARS:
vars_copy.update(clean_facts(af))
# set the failed property if it was missing.
if 'failed' not in result:
# rc is here for backwards compatibility and modules that use it instead of 'failed'
if 'rc' in result and result['rc'] not in [0, "0"]:
result['failed'] = True
else:
result['failed'] = False
# Make attempts and retries available early to allow their use in changed/failed_when
if self._task.until:
result['attempts'] = attempt
# set the changed property if it was missing.
if 'changed' not in result:
result['changed'] = False
# re-update the local copy of vars with the registered value, if specified,
# or any facts which may have been generated by the module execution
# This gives changed/failed_when access to additional recently modified
# attributes of result
if self._task.register:
vars_copy[self._task.register] = wrap_var(result)
# if we didn't skip this task, use the helpers to evaluate the changed/
# failed_when properties
if 'skipped' not in result:
_evaluate_changed_when_result(result)
_evaluate_failed_when_result(result)
if retries > 1:
cond = Conditional(loader=self._loader)
cond.when = self._task.until
if cond.evaluate_conditional(templar, vars_copy):
break
else:
# no conditional check, or it failed, so sleep for the specified time
if attempt < retries:
result['_ansible_retry'] = True
result['retries'] = retries
display.debug('Retrying task, attempt %d of %d' % (attempt, retries))
self._final_q.put(TaskResult(self._host.name, self._task._uuid, result, task_fields=self._task.dump_attrs()), block=False)
time.sleep(delay)
self._handler = self._get_action_handler(connection=self._connection, templar=templar)
else:
if retries > 1:
# we ran out of attempts, so mark the result as failed
result['attempts'] = retries - 1
result['failed'] = True
# do the final update of the local variables here, for both registered
# values and any facts which may have been created
if self._task.register:
variables[self._task.register] = wrap_var(result)
if 'ansible_facts' in result:
if self._task.action in ('set_fact', 'include_vars'):
variables.update(result['ansible_facts'])
else:
# TODO: cleaning of facts should eventually become part of taskresults instead of vars
af = wrap_var(result['ansible_facts'])
variables.update(namespace_facts(af))
if C.INJECT_FACTS_AS_VARS:
variables.update(clean_facts(af))
# save the notification target in the result, if it was specified, as
# this task may be running in a loop in which case the notification
# may be item-specific, ie. "notify: service {{item}}"
if self._task.notify is not None:
result['_ansible_notify'] = self._task.notify
# add the delegated vars to the result, so we can reference them
# on the results side without having to do any further templating
# FIXME: we only want a limited set of variables here, so this is currently
# hardcoded but should be possibly fixed if we want more or if
# there is another source of truth we can use
delegated_vars = variables.get('ansible_delegated_vars', dict()).get(self._task.delegate_to, dict()).copy()
if len(delegated_vars) > 0:
result["_ansible_delegated_vars"] = {'ansible_delegated_host': self._task.delegate_to}
for k in ('ansible_host', ):
result["_ansible_delegated_vars"][k] = delegated_vars.get(k)
# and return
display.debug("attempt loop complete, returning result")
return result
def _poll_async_result(self, result, templar, task_vars=None):
'''
Polls for the specified JID to be complete
'''
if task_vars is None:
task_vars = self._job_vars
async_jid = result.get('ansible_job_id')
if async_jid is None:
return dict(failed=True, msg="No job id was returned by the async task")
# Create a new pseudo-task to run the async_status module, and run
# that (with a sleep for "poll" seconds between each retry) until the
# async time limit is exceeded.
async_task = Task().load(dict(action='async_status jid=%s' % async_jid, environment=self._task.environment))
# FIXME: this is no longer the case, normal takes care of all, see if this can just be generalized
# Because this is an async task, the action handler is async. However,
# we need the 'normal' action handler for the status check, so get it
# now via the action_loader
async_handler = self._shared_loader_obj.action_loader.get(
'async_status',
task=async_task,
connection=self._connection,
play_context=self._play_context,
loader=self._loader,
templar=templar,
shared_loader_obj=self._shared_loader_obj,
)
time_left = self._task.async_val
while time_left > 0:
time.sleep(self._task.poll)
try:
async_result = async_handler.run(task_vars=task_vars)
# We do not bail out of the loop in cases where the failure
# is associated with a parsing error. The async_runner can
# have issues which result in a half-written/unparseable result
# file on disk, which manifests to the user as a timeout happening
# before it's time to timeout.
if (int(async_result.get('finished', 0)) == 1 or
('failed' in async_result and async_result.get('_ansible_parsed', False)) or
'skipped' in async_result):
break
except Exception as e:
# Connections can raise exceptions during polling (eg, network bounce, reboot); these should be non-fatal.
# On an exception, call the connection's reset method if it has one
# (eg, drop/recreate WinRM connection; some reused connections are in a broken state)
display.vvvv("Exception during async poll, retrying... (%s)" % to_text(e))
display.debug("Async poll exception was:\n%s" % to_text(traceback.format_exc()))
try:
async_handler._connection.reset()
except AttributeError:
pass
# Little hack to raise the exception if we've exhausted the timeout period
time_left -= self._task.poll
if time_left <= 0:
raise
else:
time_left -= self._task.poll
if int(async_result.get('finished', 0)) != 1:
if async_result.get('_ansible_parsed'):
return dict(failed=True, msg="async task did not complete within the requested time - %ss" % self._task.async_val)
else:
return dict(failed=True, msg="async task produced unparseable results", async_result=async_result)
else:
return async_result
def _get_become(self, name):
become = become_loader.get(name)
if not become:
raise AnsibleError("Invalid become method specified, could not find matching plugin: '%s'. "
"Use `ansible-doc -t become -l` to list available plugins." % name)
return become
def _get_connection(self, variables, templar):
'''
Reads the connection property for the host, and returns the
correct connection object from the list of connection plugins
'''
if self._task.delegate_to is not None:
# since we're delegating, we don't want to use interpreter values
# which would have been set for the original target host
for i in list(variables.keys()):
if isinstance(i, string_types) and i.startswith('ansible_') and i.endswith('_interpreter'):
del variables[i]
# now replace the interpreter values with those that may have come
# from the delegated-to host
delegated_vars = variables.get('ansible_delegated_vars', dict()).get(self._task.delegate_to, dict())
if isinstance(delegated_vars, dict):
for i in delegated_vars:
if isinstance(i, string_types) and i.startswith("ansible_") and i.endswith("_interpreter"):
variables[i] = delegated_vars[i]
# load connection
conn_type = self._play_context.connection
connection = self._shared_loader_obj.connection_loader.get(
conn_type,
self._play_context,
self._new_stdin,
task_uuid=self._task._uuid,
ansible_playbook_pid=to_text(os.getppid())
)
if not connection:
raise AnsibleError("the connection plugin '%s' was not found" % conn_type)
# load become plugin if needed
become_plugin = None
if self._play_context.become:
become_plugin = self._get_become(self._play_context.become_method)
if getattr(become_plugin, 'require_tty', False) and not getattr(connection, 'has_tty', False):
raise AnsibleError(
"The '%s' connection does not provide a tty which is required for the selected "
"become plugin: %s." % (conn_type, become_plugin.name)
)
try:
connection.set_become_plugin(become_plugin)
except AttributeError:
# Connection plugin does not support set_become_plugin
pass
# Backwards compat for connection plugins that don't support become plugins
# Just do this unconditionally for now, we could move it inside of the
# AttributeError above later
self._play_context.set_become_plugin(become_plugin)
# FIXME: remove once all plugins pull all data from self._options
self._play_context.set_attributes_from_plugin(connection)
if any(((connection.supports_persistence and C.USE_PERSISTENT_CONNECTIONS), connection.force_persistence)):
self._play_context.timeout = connection.get_option('persistent_command_timeout')
display.vvvv('attempting to start connection', host=self._play_context.remote_addr)
display.vvvv('using connection plugin %s' % connection.transport, host=self._play_context.remote_addr)
options = self._get_persistent_connection_options(connection, variables, templar)
socket_path = start_connection(self._play_context, options)
display.vvvv('local domain socket path is %s' % socket_path, host=self._play_context.remote_addr)
setattr(connection, '_socket_path', socket_path)
return connection
def _get_persistent_connection_options(self, connection, variables, templar):
final_vars = combine_vars(variables, variables.get('ansible_delegated_vars', dict()).get(self._task.delegate_to, dict()))
option_vars = C.config.get_plugin_vars('connection', connection._load_name)
plugin = connection._sub_plugin
if plugin.get('type'):
option_vars.extend(C.config.get_plugin_vars(plugin['type'], plugin['name']))
options = {}
for k in option_vars:
if k in final_vars:
options[k] = templar.template(final_vars[k])
return options
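The pattern above — ask the config system which variable names a plugin declares, then pull only those keys out of the task vars and template each value — can be sketched in isolation. The helper name and sample variables below are illustrative, not Ansible's actual config API:

```python
def filter_plugin_options(declared_vars, task_vars, template=lambda v: v):
    """Keep only the task vars a plugin declares, templating each value.

    declared_vars: variable names the plugin's configuration declares
    task_vars: the full variable dict available to the task
    template: callable used to render each value (identity by default)
    """
    return {k: template(task_vars[k]) for k in declared_vars if k in task_vars}

task_vars = {'ansible_host': 'web01', 'ansible_port': 2222, 'unrelated': True}
options = filter_plugin_options(['ansible_host', 'ansible_port'], task_vars)
```

Variables the plugin never declared (`unrelated` here) simply never reach the plugin's option set.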
def _set_plugin_options(self, plugin_type, variables, templar, task_keys):
try:
plugin = getattr(self._connection, '_%s' % plugin_type)
except AttributeError:
# Some plugins are assigned to private attrs, ``become`` is not
plugin = getattr(self._connection, plugin_type)
option_vars = C.config.get_plugin_vars(plugin_type, plugin._load_name)
options = {}
for k in option_vars:
if k in variables:
options[k] = templar.template(variables[k])
# TODO move to task method?
plugin.set_options(task_keys=task_keys, var_options=options)
def _set_connection_options(self, variables, templar):
# Keep the pre-delegate values for these keys
PRESERVE_ORIG = ('inventory_hostname',)
# create copy with delegation built in
final_vars = combine_vars(
variables,
variables.get('ansible_delegated_vars', {}).get(self._task.delegate_to, {})
)
# grab list of usable vars for this plugin
option_vars = C.config.get_plugin_vars('connection', self._connection._load_name)
# create dict of 'templated vars'
options = {'_extras': {}}
for k in option_vars:
if k in PRESERVE_ORIG:
options[k] = templar.template(variables[k])
elif k in final_vars:
options[k] = templar.template(final_vars[k])
# add extras if plugin supports them
if getattr(self._connection, 'allow_extras', False):
for k in final_vars:
if k.startswith('ansible_%s_' % self._connection._load_name) and k not in options:
options['_extras'][k] = templar.template(final_vars[k])
task_keys = self._task.dump_attrs()
# set options with 'templated vars' specific to this plugin and dependent ones
self._connection.set_options(task_keys=task_keys, var_options=options)
self._set_plugin_options('shell', final_vars, templar, task_keys)
if self._connection.become is not None:
# FIXME: find alternate route to provide passwords,
# keep out of play objects to avoid accidental disclosure
task_keys['become_pass'] = self._play_context.become_pass
self._set_plugin_options('become', final_vars, templar, task_keys)
# FOR BACKWARDS COMPAT:
for option in ('become_user', 'become_flags', 'become_exe'):
try:
setattr(self._play_context, option, self._connection.become.get_option(option))
except KeyError:
pass # some plugins don't support all base flags
self._play_context.prompt = self._connection.become.prompt
def _get_action_handler(self, connection, templar):
'''
Returns the correct action plugin to handle the requested task action
'''
module_prefix = self._task.action.split('_')[0]
collections = self._task.collections
# let action plugin override module, fallback to 'normal' action plugin otherwise
if self._shared_loader_obj.action_loader.has_plugin(self._task.action, collection_list=collections):
handler_name = self._task.action
# FIXME: is this code path even live anymore? check w/ networking folks; it trips sometimes when it shouldn't
elif all((module_prefix in C.NETWORK_GROUP_MODULES, module_prefix in self._shared_loader_obj.action_loader)):
handler_name = module_prefix
else:
# FUTURE: once we're comfortable with collections impl, preface this action with ansible.builtin so it can't be hijacked
handler_name = 'normal'
collections = None # until then, we don't want the task's collection list to be consulted; use the builtin
handler = self._shared_loader_obj.action_loader.get(
handler_name,
task=self._task,
connection=connection,
play_context=self._play_context,
loader=self._loader,
templar=templar,
shared_loader_obj=self._shared_loader_obj,
collection_list=collections
)
if not handler:
raise AnsibleError("the handler '%s' was not found" % handler_name)
return handler
def start_connection(play_context, variables):
'''
Starts the persistent connection
'''
candidate_paths = [C.ANSIBLE_CONNECTION_PATH or os.path.dirname(sys.argv[0])]
candidate_paths.extend(os.environ['PATH'].split(os.pathsep))
for dirname in candidate_paths:
ansible_connection = os.path.join(dirname, 'ansible-connection')
if os.path.isfile(ansible_connection):
break
else:
raise AnsibleError("Unable to find location of 'ansible-connection'. "
"Please set or check the value of ANSIBLE_CONNECTION_PATH")
env = os.environ.copy()
env.update({
# HACK; most of these paths may change during the controller's lifetime
# (eg, due to late dynamic role includes, multi-playbook execution), without a way
# to invalidate/update, ansible-connection won't always see the same plugins the controller
# can.
'ANSIBLE_BECOME_PLUGINS': become_loader.print_paths(),
'ANSIBLE_CLICONF_PLUGINS': cliconf_loader.print_paths(),
'ANSIBLE_COLLECTIONS_PATHS': os.pathsep.join(AnsibleCollectionLoader().n_collection_paths),
'ANSIBLE_CONNECTION_PLUGINS': connection_loader.print_paths(),
'ANSIBLE_HTTPAPI_PLUGINS': httpapi_loader.print_paths(),
'ANSIBLE_NETCONF_PLUGINS': netconf_loader.print_paths(),
'ANSIBLE_TERMINAL_PLUGINS': terminal_loader.print_paths(),
})
python = sys.executable
master, slave = pty.openpty()
p = subprocess.Popen(
[python, ansible_connection, to_text(os.getppid())],
stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env
)
os.close(slave)
# We need to set the pty into noncanonical mode. This ensures that we
# can receive lines longer than 4095 characters (plus newline) without
# truncating.
old = termios.tcgetattr(master)
new = termios.tcgetattr(master)
new[3] = new[3] & ~termios.ICANON
try:
termios.tcsetattr(master, termios.TCSANOW, new)
write_to_file_descriptor(master, variables)
write_to_file_descriptor(master, play_context.serialize())
(stdout, stderr) = p.communicate()
finally:
termios.tcsetattr(master, termios.TCSANOW, old)
os.close(master)
if p.returncode == 0:
result = json.loads(to_text(stdout, errors='surrogate_then_replace'))
else:
try:
result = json.loads(to_text(stderr, errors='surrogate_then_replace'))
except getattr(json.decoder, 'JSONDecodeError', ValueError):
# JSONDecodeError only available on Python 3.5+
result = {'error': to_text(stderr, errors='surrogate_then_replace')}
if 'messages' in result:
for level, message in result['messages']:
if level == 'log':
display.display(message, log_only=True)
elif level in ('debug', 'v', 'vv', 'vvv', 'vvvv', 'vvvvv', 'vvvvvv'):
getattr(display, level)(message, host=play_context.remote_addr)
else:
if hasattr(display, level):
getattr(display, level)(message)
else:
display.vvvv(message, host=play_context.remote_addr)
if 'error' in result:
if play_context.verbosity > 2:
if result.get('exception'):
msg = "The full traceback is:\n" + result['exception']
display.display(msg, color=C.COLOR_ERROR)
raise AnsibleError(result['error'])
return result['socket_path']
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,606 |
Remove UnsafeProxy
|
##### SUMMARY
There is no real reason that `UnsafeProxy` needs to exist.
We need to add support for unsafe bytes, and doing that inside `UnsafeProxy` overcomplicates matters.
`wrap_var` should effectively take over the work that `UnsafeProxy` is doing, relying directly on the proper `AnsibleUnsafe*` classes. Callers can also use the `AnsibleUnsafe*` classes themselves when `wrap_var` isn't necessary.
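A minimal sketch of what a recursive `wrap_var` could look like (simplified: the real implementation also handles sets and bytes, and uses `AnsibleUnsafeText` rather than the stand-in class below):

```python
class Unsafe(str):
    """Stand-in for AnsibleUnsafeText; carries the marker templating checks for."""
    __UNSAFE__ = True

def wrap_var(v):
    # Recursively mark container contents, or the scalar itself, as unsafe.
    if isinstance(v, dict):
        return {wrap_var(k): wrap_var(x) for k, x in v.items()}
    if isinstance(v, (list, tuple)):
        return type(v)(wrap_var(x) for x in v)
    if isinstance(v, str):
        return Unsafe(v)
    return v

wrapped = wrap_var({'cmd': '{{ evil }}', 'count': 3})
```

Non-string leaves pass through untouched, so only values that could carry template syntax get the unsafe marker.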
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
lib/ansible/utils/unsafe_proxy.py
##### ADDITIONAL INFORMATION
|
https://github.com/ansible/ansible/issues/59606
|
https://github.com/ansible/ansible/pull/59711
|
e80f8048ee027ab0c7c8b5912fb6c69c44fb877a
|
164881d871964aa64e0f911d03ae270acbad253c
| 2019-07-25T20:15:19Z |
python
| 2019-08-07T15:39:01Z |
lib/ansible/template/__init__.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import ast
import datetime
import os
import pkgutil
import pwd
import re
import time
from numbers import Number
try:
from hashlib import sha1
except ImportError:
from sha import sha as sha1
from jinja2.exceptions import TemplateSyntaxError, UndefinedError
from jinja2.loaders import FileSystemLoader
from jinja2.runtime import Context, StrictUndefined
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleFilterError, AnsibleUndefinedVariable, AnsibleAssertionError
from ansible.module_utils.six import iteritems, string_types, text_type
from ansible.module_utils._text import to_native, to_text, to_bytes
from ansible.module_utils.common._collections_compat import Sequence, Mapping, MutableMapping
from ansible.plugins.loader import filter_loader, lookup_loader, test_loader
from ansible.template.safe_eval import safe_eval
from ansible.template.template import AnsibleJ2Template
from ansible.template.vars import AnsibleJ2Vars
from ansible.utils.display import Display
from ansible.utils.unsafe_proxy import UnsafeProxy, wrap_var
# HACK: keep Python 2.6 controller tests happy in CI until they're properly split
try:
from importlib import import_module
except ImportError:
import_module = __import__
display = Display()
__all__ = ['Templar', 'generate_ansible_template_vars']
# A regex for checking to see if a variable we're trying to
# expand is just a single variable name.
# Primitive Types which we don't want Jinja to convert to strings.
NON_TEMPLATED_TYPES = (bool, Number)
JINJA2_OVERRIDE = '#jinja2:'
USE_JINJA2_NATIVE = False
if C.DEFAULT_JINJA2_NATIVE:
try:
from jinja2.nativetypes import NativeEnvironment as Environment
from ansible.template.native_helpers import ansible_native_concat as j2_concat
USE_JINJA2_NATIVE = True
except ImportError:
from jinja2 import Environment
from jinja2.utils import concat as j2_concat
from jinja2 import __version__ as j2_version
display.warning(
'jinja2_native requires Jinja 2.10 and above. '
'Version detected: %s. Falling back to default.' % j2_version
)
else:
from jinja2 import Environment
from jinja2.utils import concat as j2_concat
JINJA2_BEGIN_TOKENS = frozenset(('variable_begin', 'block_begin', 'comment_begin', 'raw_begin'))
JINJA2_END_TOKENS = frozenset(('variable_end', 'block_end', 'comment_end', 'raw_end'))
def generate_ansible_template_vars(path, dest_path=None):
b_path = to_bytes(path)
try:
template_uid = pwd.getpwuid(os.stat(b_path).st_uid).pw_name
except (KeyError, TypeError):
template_uid = os.stat(b_path).st_uid
temp_vars = {
'template_host': to_text(os.uname()[1]),
'template_path': path,
'template_mtime': datetime.datetime.fromtimestamp(os.path.getmtime(b_path)),
'template_uid': to_text(template_uid),
'template_fullpath': os.path.abspath(path),
'template_run_date': datetime.datetime.now(),
'template_destpath': to_native(dest_path) if dest_path else None,
}
managed_default = C.DEFAULT_MANAGED_STR
managed_str = managed_default.format(
host=temp_vars['template_host'],
uid=temp_vars['template_uid'],
file=temp_vars['template_path'],
)
temp_vars['ansible_managed'] = to_text(time.strftime(to_native(managed_str), time.localtime(os.path.getmtime(b_path))))
return temp_vars
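The `ansible_managed` construction above is just `str.format` on the configured template, followed by a pass through `strftime` so `%`-style date directives expand. A standalone sketch, using a stand-in value for `C.DEFAULT_MANAGED_STR`:

```python
import time

managed_template = 'Ansible managed: {file} on {host}'  # stand-in for C.DEFAULT_MANAGED_STR
temp_vars = {'template_host': 'ctl01', 'template_path': 'motd.j2'}
managed_str = managed_template.format(
    host=temp_vars['template_host'],
    file=temp_vars['template_path'],
)
# strftime passes text without % directives through unchanged
ansible_managed = time.strftime(managed_str, time.localtime(0))
```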
def _escape_backslashes(data, jinja_env):
"""Double backslashes within jinja2 expressions
A user may enter something like this in a playbook::
debug:
msg: "Test Case 1\\3; {{ test1_name | regex_replace('^(.*)_name$', '\\1')}}"
The string inside of the {{ gets interpreted multiple times: first by yaml,
then by python, and finally by jinja2 as part of its variable. Because
it is processed by both python and jinja2, the backslash escaped
characters get unescaped twice. This means that we'd normally have to use
four backslashes to escape that. This is painful for playbook authors as
they have to remember different rules for inside vs outside of a jinja2
expression (The backslashes outside of the "{{ }}" only get processed by
yaml and python. So they only need to be escaped once). The following
code fixes this by automatically performing the extra quoting of
backslashes inside of a jinja2 expression.
"""
if '\\' in data and '{{' in data:
new_data = []
d2 = jinja_env.preprocess(data)
in_var = False
for token in jinja_env.lex(d2):
if token[1] == 'variable_begin':
in_var = True
new_data.append(token[2])
elif token[1] == 'variable_end':
in_var = False
new_data.append(token[2])
elif in_var and token[1] == 'string':
# Double backslashes only if we're inside of a jinja2 variable
new_data.append(token[2].replace('\\', '\\\\'))
else:
new_data.append(token[2])
data = ''.join(new_data)
return data
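A simplified illustration of the effect. The real function walks the Jinja2 lexer's token stream so it only touches string tokens inside expressions; this regex version just doubles every backslash found inside `{{ }}` spans, which is enough to show the behavior:

```python
import re

def escape_backslashes_simple(data):
    # Double every backslash that appears inside a {{ ... }} expression.
    # A callable replacement avoids backreference-escaping issues in re.sub.
    return re.sub(
        r'\{\{.*?\}\}',
        lambda m: m.group(0).replace('\\', '\\\\'),
        data,
    )

before = r"Test Case 1\3; {{ name | regex_replace('^(.*)$', '\1') }}"
after = escape_backslashes_simple(before)
```

The `\3` outside the expression is untouched; only the `\1` inside `{{ }}` becomes `\\1`.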
def is_template(data, jinja_env):
"""This function attempts to quickly detect whether a value is a jinja2
template. To do so, we look for the first 2 matching jinja2 tokens for
start and end delimiters.
"""
found = None
start = True
comment = False
d2 = jinja_env.preprocess(data)
# This wraps a lot of code, but this is due to lex returning a generator
# so we may get an exception at any part of the loop
try:
for token in jinja_env.lex(d2):
if token[1] in JINJA2_BEGIN_TOKENS:
if start and token[1] == 'comment_begin':
# Comments can wrap other token types
comment = True
start = False
# Example: variable_begin -> variable
found = token[1].split('_')[0]
elif token[1] in JINJA2_END_TOKENS:
if token[1].split('_')[0] == found:
return True
elif comment:
continue
return False
except TemplateSyntaxError:
return False
return False
def _count_newlines_from_end(in_str):
'''
Counts the number of newlines at the end of a string. This is used during
the jinja2 templating to ensure the count matches the input, since some newlines
may be thrown away during the templating.
'''
try:
i = len(in_str)
j = i - 1
while in_str[j] == '\n':
j -= 1
return i - 1 - j
except IndexError:
# Uncommon cases: zero length string and string containing only newlines
return i
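The backwards walk and the `IndexError` fallback can be exercised directly; the copy below keeps the same logic so the edge cases (empty string, string of only newlines) are easy to check:

```python
def count_newlines_from_end(in_str):
    # Same logic as above: walk backwards until a non-newline is found.
    try:
        i = len(in_str)
        j = i - 1
        while in_str[j] == '\n':
            j -= 1
        return i - 1 - j
    except IndexError:
        # zero-length string, or a string containing only newlines
        return i
```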
def recursive_check_defined(item):
from jinja2.runtime import Undefined
if isinstance(item, MutableMapping):
for key in item:
recursive_check_defined(item[key])
elif isinstance(item, list):
for i in item:
recursive_check_defined(i)
else:
if isinstance(item, Undefined):
raise AnsibleFilterError("{0} is undefined".format(item))
class AnsibleUndefined(StrictUndefined):
'''
A custom Undefined class, which returns further Undefined objects on access,
rather than throwing an exception.
'''
def __getattr__(self, name):
# Return original Undefined object to preserve the first failure context
return self
def __getitem__(self, key):
# Return original Undefined object to preserve the first failure context
return self
def __repr__(self):
return 'AnsibleUndefined'
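The chaining behavior — attribute and item access return the same undefined object instead of raising — can be demonstrated with a minimal stand-in class:

```python
class ChainedUndefined:
    """Minimal stand-in for AnsibleUndefined's access behavior."""
    def __getattr__(self, name):
        # Keep returning ourselves instead of raising, so deep access
        # preserves the context of the first undefined lookup.
        return self

    def __getitem__(self, key):
        return self

    def __repr__(self):
        return 'AnsibleUndefined'

u = ChainedUndefined()
# Arbitrarily deep access never raises:
result = u.foo['bar'].baz
```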
class AnsibleContext(Context):
'''
A custom context, which intercepts resolve() calls and sets a flag
internally if any variable lookup returns an AnsibleUnsafe value. This
flag is checked post-templating, and (when set) will result in the
final templated result being wrapped via UnsafeProxy.
'''
def __init__(self, *args, **kwargs):
super(AnsibleContext, self).__init__(*args, **kwargs)
self.unsafe = False
def _is_unsafe(self, val):
'''
Our helper function, which will also recursively check dict and
list entries due to the fact that they may be repr'd and contain
a key or value which contains jinja2 syntax and would otherwise
lose the AnsibleUnsafe value.
'''
if isinstance(val, dict):
for key in val.keys():
if self._is_unsafe(val[key]):
return True
elif isinstance(val, list):
for item in val:
if self._is_unsafe(item):
return True
elif isinstance(val, string_types) and hasattr(val, '__UNSAFE__'):
return True
return False
def _update_unsafe(self, val):
if val is not None and not self.unsafe and self._is_unsafe(val):
self.unsafe = True
def resolve(self, key):
'''
The intercepted resolve(), which uses the helper above to set the
internal flag whenever an unsafe variable value is returned.
'''
val = super(AnsibleContext, self).resolve(key)
self._update_unsafe(val)
return val
def resolve_or_missing(self, key):
val = super(AnsibleContext, self).resolve_or_missing(key)
self._update_unsafe(val)
return val
class JinjaPluginIntercept(MutableMapping):
def __init__(self, delegatee, pluginloader, *args, **kwargs):
super(JinjaPluginIntercept, self).__init__(*args, **kwargs)
self._delegatee = delegatee
self._pluginloader = pluginloader
if self._pluginloader.class_name == 'FilterModule':
self._method_map_name = 'filters'
self._dirname = 'filter'
elif self._pluginloader.class_name == 'TestModule':
self._method_map_name = 'tests'
self._dirname = 'test'
self._collection_jinja_func_cache = {}
# FUTURE: we can cache FQ filter/test calls for the entire duration of a run, since a given collection's impl's
# aren't supposed to change during a run
def __getitem__(self, key):
if not isinstance(key, string_types):
raise ValueError('key must be a string')
key = to_native(key)
if '.' not in key: # might be a built-in value, delegate to base dict
return self._delegatee.__getitem__(key)
func = self._collection_jinja_func_cache.get(key)
if func:
return func
components = key.split('.')
if len(components) != 3:
raise KeyError('invalid plugin name: {0}'.format(key))
collection_name = '.'.join(components[0:2])
collection_pkg = 'ansible_collections.{0}.plugins.{1}'.format(collection_name, self._dirname)
# FIXME: error handling for bogus plugin name, bogus impl, bogus filter/test
# FIXME: move this capability into the Jinja plugin loader
pkg = import_module(collection_pkg)
for dummy, module_name, ispkg in pkgutil.iter_modules(pkg.__path__, prefix=collection_name + '.'):
if ispkg:
continue
plugin_impl = self._pluginloader.get(module_name)
method_map = getattr(plugin_impl, self._method_map_name)
for f in iteritems(method_map()):
fq_name = '.'.join((collection_name, f[0]))
self._collection_jinja_func_cache[fq_name] = f[1]
function_impl = self._collection_jinja_func_cache[key]
# FIXME: detect/warn on intra-collection function name collisions
return function_impl
def __setitem__(self, key, value):
return self._delegatee.__setitem__(key, value)
def __delitem__(self, key):
raise NotImplementedError()
def __iter__(self):
# not strictly accurate since we're not counting dynamically-loaded values
return iter(self._delegatee)
def __len__(self):
# not strictly accurate since we're not counting dynamically-loaded values
return len(self._delegatee)
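`__getitem__` above treats any dotted key as a fully-qualified collection reference of exactly three components. The parsing step on its own looks like this (the sample plugin name is hypothetical):

```python
def parse_fq_plugin_name(key):
    """Split 'namespace.collection.plugin' into (collection, plugin)."""
    components = key.split('.')
    if len(components) != 3:
        raise KeyError('invalid plugin name: {0}'.format(key))
    return '.'.join(components[:2]), components[2]

collection, plugin = parse_fq_plugin_name('community.general.json_query')
```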
class AnsibleEnvironment(Environment):
'''
Our custom environment, which simply allows us to override the class-level
values for the Template and Context classes used by jinja2 internally.
'''
context_class = AnsibleContext
template_class = AnsibleJ2Template
def __init__(self, *args, **kwargs):
super(AnsibleEnvironment, self).__init__(*args, **kwargs)
self.filters = JinjaPluginIntercept(self.filters, filter_loader)
self.tests = JinjaPluginIntercept(self.tests, test_loader)
class Templar:
'''
The main class for templating, with the main entry-point of template().
'''
def __init__(self, loader, shared_loader_obj=None, variables=None):
variables = {} if variables is None else variables
self._loader = loader
self._filters = None
self._tests = None
self._available_variables = variables
self._cached_result = {}
if loader:
self._basedir = loader.get_basedir()
else:
self._basedir = './'
if shared_loader_obj:
self._filter_loader = getattr(shared_loader_obj, 'filter_loader')
self._test_loader = getattr(shared_loader_obj, 'test_loader')
self._lookup_loader = getattr(shared_loader_obj, 'lookup_loader')
else:
self._filter_loader = filter_loader
self._test_loader = test_loader
self._lookup_loader = lookup_loader
# flags to determine whether certain failures during templating
# should result in fatal errors being raised
self._fail_on_lookup_errors = True
self._fail_on_filter_errors = True
self._fail_on_undefined_errors = C.DEFAULT_UNDEFINED_VAR_BEHAVIOR
self.environment = AnsibleEnvironment(
trim_blocks=True,
undefined=AnsibleUndefined,
extensions=self._get_extensions(),
finalize=self._finalize,
loader=FileSystemLoader(self._basedir),
)
# the current rendering context under which the templar class is working
self.cur_context = None
self.SINGLE_VAR = re.compile(r"^%s\s*(\w*)\s*%s$" % (self.environment.variable_start_string, self.environment.variable_end_string))
self._clean_regex = re.compile(r'(?:%s|%s|%s|%s)' % (
self.environment.variable_start_string,
self.environment.block_start_string,
self.environment.block_end_string,
self.environment.variable_end_string
))
self._no_type_regex = re.compile(r'.*?\|\s*(?:%s)(?:\([^\|]*\))?\s*\)?\s*(?:%s)' %
('|'.join(C.STRING_TYPE_FILTERS), self.environment.variable_end_string))
def _get_filters(self):
'''
Returns filter plugins, after loading and caching them if need be
'''
if self._filters is not None:
return self._filters.copy()
self._filters = dict()
for fp in self._filter_loader.all():
self._filters.update(fp.filters())
return self._filters.copy()
def _get_tests(self):
'''
Returns tests plugins, after loading and caching them if need be
'''
if self._tests is not None:
return self._tests.copy()
self._tests = dict()
for fp in self._test_loader.all():
self._tests.update(fp.tests())
return self._tests.copy()
def _get_extensions(self):
'''
Return jinja2 extensions to load.
If some extensions are set via jinja_extensions in ansible.cfg, we try
to load them with the jinja environment.
'''
jinja_exts = []
if C.DEFAULT_JINJA2_EXTENSIONS:
# make sure the configuration directive doesn't contain spaces
# and split extensions in an array
jinja_exts = C.DEFAULT_JINJA2_EXTENSIONS.replace(" ", "").split(',')
return jinja_exts
@property
def available_variables(self):
return self._available_variables
@available_variables.setter
def available_variables(self, variables):
'''
Sets the list of template variables this Templar instance will use
to template things, so we don't have to pass them around between
internal methods. We also clear the template cache here, as the variables
are being changed.
'''
if not isinstance(variables, dict):
raise AnsibleAssertionError("the type of 'variables' should be a dict but was a %s" % (type(variables)))
self._available_variables = variables
self._cached_result = {}
def set_available_variables(self, variables):
display.deprecated(
'set_available_variables is being deprecated. Use "@available_variables.setter" instead.',
version='2.13'
)
self.available_variables = variables
def template(self, variable, convert_bare=False, preserve_trailing_newlines=True, escape_backslashes=True, fail_on_undefined=None, overrides=None,
convert_data=True, static_vars=None, cache=True, disable_lookups=False):
'''
Templates (possibly recursively) any given data as input. If convert_bare is
set to True, the given data will be wrapped as a jinja2 variable ('{{foo}}')
before being sent through the template engine.
'''
static_vars = [''] if static_vars is None else static_vars
# Don't template unsafe variables, just return them.
if hasattr(variable, '__UNSAFE__'):
return variable
if fail_on_undefined is None:
fail_on_undefined = self._fail_on_undefined_errors
try:
if convert_bare:
variable = self._convert_bare_variable(variable)
if isinstance(variable, string_types):
result = variable
if self.is_possibly_template(variable):
# Check to see if the string we are trying to render is just referencing a single
# var. In this case we don't want to accidentally change the type of the variable
# to a string by using the jinja template renderer. We just want to pass it.
only_one = self.SINGLE_VAR.match(variable)
if only_one:
var_name = only_one.group(1)
if var_name in self._available_variables:
resolved_val = self._available_variables[var_name]
if isinstance(resolved_val, NON_TEMPLATED_TYPES):
return resolved_val
elif resolved_val is None:
return C.DEFAULT_NULL_REPRESENTATION
# Using a cache in order to prevent template calls with already templated variables
sha1_hash = None
if cache:
variable_hash = sha1(text_type(variable).encode('utf-8'))
options_hash = sha1(
(
text_type(preserve_trailing_newlines) +
text_type(escape_backslashes) +
text_type(fail_on_undefined) +
text_type(overrides)
).encode('utf-8')
)
sha1_hash = variable_hash.hexdigest() + options_hash.hexdigest()
if cache and sha1_hash in self._cached_result:
result = self._cached_result[sha1_hash]
else:
result = self.do_template(
variable,
preserve_trailing_newlines=preserve_trailing_newlines,
escape_backslashes=escape_backslashes,
fail_on_undefined=fail_on_undefined,
overrides=overrides,
disable_lookups=disable_lookups,
)
if not USE_JINJA2_NATIVE:
unsafe = hasattr(result, '__UNSAFE__')
if convert_data and not self._no_type_regex.match(variable):
# if this looks like a dictionary or list, convert it to such using the safe_eval method
if (result.startswith("{") and not result.startswith(self.environment.variable_start_string)) or \
result.startswith("[") or result in ("True", "False"):
eval_results = safe_eval(result, include_exceptions=True)
if eval_results[1] is None:
result = eval_results[0]
if unsafe:
result = wrap_var(result)
else:
# FIXME: if the safe_eval raised an error, should we do something with it?
pass
# we only cache in the case where we have a single variable
# name, to make sure we're not putting things which may otherwise
# be dynamic in the cache (filters, lookups, etc.)
if cache:
self._cached_result[sha1_hash] = result
return result
elif isinstance(variable, (list, tuple)):
return [self.template(
v,
preserve_trailing_newlines=preserve_trailing_newlines,
fail_on_undefined=fail_on_undefined,
overrides=overrides,
disable_lookups=disable_lookups,
) for v in variable]
elif isinstance(variable, (dict, Mapping)):
d = {}
# we don't use iteritems() here to avoid problems if the underlying dict
# changes sizes due to the templating, which can happen with hostvars
for k in variable.keys():
if k not in static_vars:
d[k] = self.template(
variable[k],
preserve_trailing_newlines=preserve_trailing_newlines,
fail_on_undefined=fail_on_undefined,
overrides=overrides,
disable_lookups=disable_lookups,
)
else:
d[k] = variable[k]
return d
else:
return variable
except AnsibleFilterError:
if self._fail_on_filter_errors:
raise
else:
return variable
def is_template(self, data):
'''lets us know if data has a template'''
if isinstance(data, string_types):
return is_template(data, self.environment)
elif isinstance(data, (list, tuple)):
for v in data:
if self.is_template(v):
return True
elif isinstance(data, dict):
for k in data:
if self.is_template(k) or self.is_template(data[k]):
return True
return False
templatable = is_template
def is_possibly_template(self, data):
'''Determines if a string looks like a template, by seeing if it
contains a jinja2 start delimiter. Does not guarantee that the string
is actually a template.
This is different than ``is_template`` which is more strict.
This method may return ``True`` on a string that is not templatable.
Useful when guarding passing a string for templating, but when
you want to allow the templating engine to make the final
assessment which may result in ``TemplateSyntaxError``.
'''
env = self.environment
if isinstance(data, string_types):
for marker in (env.block_start_string, env.variable_start_string, env.comment_start_string):
if marker in data:
return True
return False
def _convert_bare_variable(self, variable):
'''
Wraps a bare string, which may have an attribute portion (ie. foo.bar)
in jinja2 variable braces so that it is evaluated properly.
'''
if isinstance(variable, string_types):
contains_filters = "|" in variable
first_part = variable.split("|")[0].split(".")[0].split("[")[0]
if (contains_filters or first_part in self._available_variables) and self.environment.variable_start_string not in variable:
return "%s%s%s" % (self.environment.variable_start_string, variable, self.environment.variable_end_string)
# the variable didn't meet the conditions to be converted,
# so just return it as-is
return variable
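A standalone sketch of the bare-variable conversion above, with the delimiters passed in instead of read from the Jinja2 environment:

```python
def convert_bare_variable(variable, known_vars, start='{{', end='}}'):
    # Wrap a bare name like "foo.bar" in delimiters so Jinja2 evaluates it,
    # but only when it references a known variable or contains a filter,
    # and isn't already templated.
    if not isinstance(variable, str):
        return variable
    contains_filters = '|' in variable
    first_part = variable.split('|')[0].split('.')[0].split('[')[0]
    if (contains_filters or first_part in known_vars) and start not in variable:
        return '%s%s%s' % (start, variable, end)
    return variable

wrapped = convert_bare_variable('foo.bar', {'foo'})
untouched = convert_bare_variable('not_defined', {'foo'})
```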
def _finalize(self, thing):
'''
A custom finalize method for jinja2, which prevents None from being returned. This
avoids a string of ``"None"`` as ``None`` has no importance in YAML.
If using ANSIBLE_JINJA2_NATIVE we bypass this and return the actual value always
'''
if USE_JINJA2_NATIVE:
return thing
return thing if thing is not None else ''
def _fail_lookup(self, name, *args, **kwargs):
raise AnsibleError("The lookup `%s` was found, however lookups were disabled from templating" % name)
def _now_datetime(self, utc=False, fmt=None):
'''jinja2 global function to return current datetime, potentially formatted via strftime'''
if utc:
now = datetime.datetime.utcnow()
else:
now = datetime.datetime.now()
if fmt:
return now.strftime(fmt)
return now
def _query_lookup(self, name, *args, **kwargs):
''' wrapper for lookup, force wantlist true'''
kwargs['wantlist'] = True
return self._lookup(name, *args, **kwargs)
def _lookup(self, name, *args, **kwargs):
instance = self._lookup_loader.get(name.lower(), loader=self._loader, templar=self)
if instance is not None:
wantlist = kwargs.pop('wantlist', False)
allow_unsafe = kwargs.pop('allow_unsafe', C.DEFAULT_ALLOW_UNSAFE_LOOKUPS)
errors = kwargs.pop('errors', 'strict')
from ansible.utils.listify import listify_lookup_plugin_terms
loop_terms = listify_lookup_plugin_terms(terms=args, templar=self, loader=self._loader, fail_on_undefined=True, convert_bare=False)
# safely catch run failures per #5059
try:
ran = instance.run(loop_terms, variables=self._available_variables, **kwargs)
except (AnsibleUndefinedVariable, UndefinedError) as e:
raise AnsibleUndefinedVariable(e)
except Exception as e:
if self._fail_on_lookup_errors:
msg = u"An unhandled exception occurred while running the lookup plugin '%s'. Error was a %s, original message: %s" % \
(name, type(e), to_text(e))
if errors == 'warn':
display.warning(msg)
elif errors == 'ignore':
display.display(msg, log_only=True)
else:
raise AnsibleError(to_native(msg))
ran = [] if wantlist else None
if ran and not allow_unsafe:
if wantlist:
ran = wrap_var(ran)
else:
try:
ran = UnsafeProxy(",".join(ran))
except TypeError:
# Lookup Plugins should always return lists. Throw an error if that's not
# the case:
if not isinstance(ran, Sequence):
raise AnsibleError("The lookup plugin '%s' did not return a list."
% name)
# The TypeError we can recover from is when the value *inside* of the list
# is not a string
if len(ran) == 1:
ran = wrap_var(ran[0])
else:
ran = wrap_var(ran)
if self.cur_context:
self.cur_context.unsafe = True
return ran
else:
raise AnsibleError("lookup plugin (%s) not found" % name)
def do_template(self, data, preserve_trailing_newlines=True, escape_backslashes=True, fail_on_undefined=None, overrides=None, disable_lookups=False):
if USE_JINJA2_NATIVE and not isinstance(data, string_types):
return data
# For preserving the number of input newlines in the output (used
# later in this method)
data_newlines = _count_newlines_from_end(data)
if fail_on_undefined is None:
fail_on_undefined = self._fail_on_undefined_errors
try:
# allows template header overrides to change jinja2 options.
if overrides is None:
myenv = self.environment.overlay()
else:
myenv = self.environment.overlay(overrides)
# Get jinja env overrides from template
if hasattr(data, 'startswith') and data.startswith(JINJA2_OVERRIDE):
eol = data.find('\n')
line = data[len(JINJA2_OVERRIDE):eol]
data = data[eol + 1:]
for pair in line.split(','):
(key, val) = pair.split(':')
key = key.strip()
setattr(myenv, key, ast.literal_eval(val.strip()))
# Adds Ansible custom filters and tests
myenv.filters.update(self._get_filters())
myenv.tests.update(self._get_tests())
if escape_backslashes:
# Allow users to specify backslashes in playbooks as "\\" instead of as "\\\\".
data = _escape_backslashes(data, myenv)
try:
t = myenv.from_string(data)
except TemplateSyntaxError as e:
raise AnsibleError("template error while templating string: %s. String: %s" % (to_native(e), to_native(data)))
except Exception as e:
if 'recursion' in to_native(e):
raise AnsibleError("recursive loop detected in template string: %s" % to_native(data))
else:
return data
# jinja2 global is inconsistent across versions, this normalizes them
t.globals['dict'] = dict
if disable_lookups:
t.globals['query'] = t.globals['q'] = t.globals['lookup'] = self._fail_lookup
else:
t.globals['lookup'] = self._lookup
t.globals['query'] = t.globals['q'] = self._query_lookup
t.globals['now'] = self._now_datetime
t.globals['finalize'] = self._finalize
jvars = AnsibleJ2Vars(self, t.globals)
self.cur_context = new_context = t.new_context(jvars, shared=True)
rf = t.root_render_func(new_context)
try:
res = j2_concat(rf)
if getattr(new_context, 'unsafe', False):
res = wrap_var(res)
except TypeError as te:
if 'AnsibleUndefined' in to_native(te):
errmsg = "Unable to look up a name or access an attribute in template string (%s).\n" % to_native(data)
errmsg += "Make sure your variable name does not contain invalid characters like '-': %s" % to_native(te)
raise AnsibleUndefinedVariable(errmsg)
else:
display.debug("failing because of a type error, template data is: %s" % to_text(data))
raise AnsibleError("Unexpected templating type error occurred on (%s): %s" % (to_native(data), to_native(te)))
if USE_JINJA2_NATIVE and not isinstance(res, string_types):
return res
if preserve_trailing_newlines:
# The low level calls above do not preserve the newline
# characters at the end of the input data, so we calculate
# the difference in newlines and append them
# to the resulting output for parity
#
# jinja2 added a keep_trailing_newline option in 2.7 when
# creating an Environment. That would let us make this code
# better (remove a single newline if
# preserve_trailing_newlines is False). Once we can depend on
# that version being present, modify our code to set that when
# initializing self.environment and remove a single trailing
# newline here if preserve_newlines is False.
res_newlines = _count_newlines_from_end(res)
if data_newlines > res_newlines:
res += self.environment.newline_sequence * (data_newlines - res_newlines)
return res
except (UndefinedError, AnsibleUndefinedVariable) as e:
if fail_on_undefined:
raise AnsibleUndefinedVariable(e)
else:
display.debug("Ignoring undefined failure: %s" % to_text(e))
return data
# for backwards compatibility in case anyone is using old private method directly
_do_template = do_template
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,606 |
Remove UnsafeProxy
|
##### SUMMARY
There is no real reason that `UnsafeProxy` needs to exist.
We need to add unsafe bytes support, and doing so within `UnsafeProxy` overcomplicates matters.
`wrap_var` should effectively take over the work that `UnsafeProxy` is doing, relying directly on the proper `AnsibleUnsafe*` classes. Or callers can use `AnsibleUnsafe*` themselves when `wrap_var` isn't necessary.
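A minimal sketch of that direction, for illustration only (not the eventual patch): `wrap_var` dispatches straight to the `AnsibleUnsafe*` classes with no `UnsafeProxy` indirection, and gains the missing unsafe-bytes support. Names mirror `lib/ansible/utils/unsafe_proxy.py`; Python 3 only for brevity.

```python
class AnsibleUnsafe(object):
    __UNSAFE__ = True


class AnsibleUnsafeText(str, AnsibleUnsafe):
    pass


class AnsibleUnsafeBytes(bytes, AnsibleUnsafe):
    pass


def wrap_var(v):
    # Recurse into containers, wrap scalars with the AnsibleUnsafe* classes
    # directly instead of routing strings through UnsafeProxy.
    if isinstance(v, dict):
        return dict((wrap_var(k), wrap_var(item)) for k, item in v.items())
    if isinstance(v, list):
        return [wrap_var(item) for item in v]
    if isinstance(v, set):
        return set(wrap_var(item) for item in v)
    if isinstance(v, bytes):
        return AnsibleUnsafeBytes(v)  # the unsafe bytes support being added
    if isinstance(v, str):
        return AnsibleUnsafeText(v)
    return v
```

Dict keys are wrapped as well as values; `None` and other non-string scalars pass through untouched.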
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
lib/ansible/utils/unsafe_proxy.py
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/59606
|
https://github.com/ansible/ansible/pull/59711
|
e80f8048ee027ab0c7c8b5912fb6c69c44fb877a
|
164881d871964aa64e0f911d03ae270acbad253c
| 2019-07-25T20:15:19Z |
python
| 2019-08-07T15:39:01Z |
lib/ansible/utils/unsafe_proxy.py
|
# PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2
# --------------------------------------------
#
# 1. This LICENSE AGREEMENT is between the Python Software Foundation
# ("PSF"), and the Individual or Organization ("Licensee") accessing and
# otherwise using this software ("Python") in source or binary form and
# its associated documentation.
#
# 2. Subject to the terms and conditions of this License Agreement, PSF hereby
# grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce,
# analyze, test, perform and/or display publicly, prepare derivative works,
# distribute, and otherwise use Python alone or in any derivative version,
# provided, however, that PSF's License Agreement and PSF's notice of copyright,
# i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010,
# 2011, 2012, 2013, 2014 Python Software Foundation; All Rights Reserved" are
# retained in Python alone or in any derivative version prepared by Licensee.
#
# 3. In the event Licensee prepares a derivative work that is based on
# or incorporates Python or any part thereof, and wants to make
# the derivative work available to others as provided herein, then
# Licensee hereby agrees to include in any such work a brief summary of
# the changes made to Python.
#
# 4. PSF is making Python available to Licensee on an "AS IS"
# basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
# IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND
# DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
# FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT
# INFRINGE ANY THIRD PARTY RIGHTS.
#
# 5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON
# FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS
# A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON,
# OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
#
# 6. This License Agreement will automatically terminate upon a material
# breach of its terms and conditions.
#
# 7. Nothing in this License Agreement shall be deemed to create any
# relationship of agency, partnership, or joint venture between PSF and
# Licensee. This License Agreement does not grant permission to use PSF
# trademarks or trade name in a trademark sense to endorse or promote
# products or services of Licensee, or any third party.
#
# 8. By copying, installing or otherwise using Python, Licensee
# agrees to be bound by the terms and conditions of this License
# Agreement.
#
# Original Python Recipe for Proxy:
# http://code.activestate.com/recipes/496741-object-proxying/
# Author: Tomer Filiba
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from ansible.module_utils.six import string_types, text_type, binary_type
from ansible.module_utils._text import to_text
from ansible.module_utils.common._collections_compat import Mapping, MutableSequence, Set
__all__ = ['UnsafeProxy', 'AnsibleUnsafe', 'wrap_var']
class AnsibleUnsafe(object):
__UNSAFE__ = True
class AnsibleUnsafeText(text_type, AnsibleUnsafe):
pass
class AnsibleUnsafeBytes(binary_type, AnsibleUnsafe):
pass
class UnsafeProxy(object):
def __new__(cls, obj, *args, **kwargs):
# In our usage we should only receive unicode strings.
# This conditional and conversion exist to sanity check the values
# we're given, but we may want to take it out for testing and sanitize
# our input instead.
if isinstance(obj, string_types) and not isinstance(obj, AnsibleUnsafeBytes):
obj = AnsibleUnsafeText(to_text(obj, errors='surrogate_or_strict'))
return obj
def _wrap_dict(v):
for k in v.keys():
if v[k] is not None:
v[wrap_var(k)] = wrap_var(v[k])
return v
def _wrap_list(v):
for idx, item in enumerate(v):
if item is not None:
v[idx] = wrap_var(item)
return v
def _wrap_set(v):
return set(item if item is None else wrap_var(item) for item in v)
def wrap_var(v):
if isinstance(v, Mapping):
v = _wrap_dict(v)
elif isinstance(v, MutableSequence):
v = _wrap_list(v)
elif isinstance(v, Set):
v = _wrap_set(v)
elif v is not None and not isinstance(v, AnsibleUnsafe):
v = UnsafeProxy(v)
return v
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,606 |
Remove UnsafeProxy
|
##### SUMMARY
There is no real reason that `UnsafeProxy` needs to exist.
We need to add unsafe bytes support, and doing so within `UnsafeProxy` overcomplicates matters.
`wrap_var` should effectively take over the work that `UnsafeProxy` is doing, relying directly on the proper `AnsibleUnsafe*` classes. Or callers can use `AnsibleUnsafe*` themselves when `wrap_var` isn't necessary.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
lib/ansible/utils/unsafe_proxy.py
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/59606
|
https://github.com/ansible/ansible/pull/59711
|
e80f8048ee027ab0c7c8b5912fb6c69c44fb877a
|
164881d871964aa64e0f911d03ae270acbad253c
| 2019-07-25T20:15:19Z |
python
| 2019-08-07T15:39:01Z |
test/units/utils/test_unsafe_proxy.py
|
# -*- coding: utf-8 -*-
# (c) 2018 Matt Martz <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
from ansible.module_utils.six import PY3
from ansible.utils.unsafe_proxy import AnsibleUnsafe, AnsibleUnsafeText, UnsafeProxy, wrap_var
def test_UnsafeProxy():
assert isinstance(UnsafeProxy({}), dict)
assert not isinstance(UnsafeProxy({}), AnsibleUnsafe)
assert isinstance(UnsafeProxy('foo'), AnsibleUnsafeText)
def test_wrap_var_string():
assert isinstance(wrap_var('foo'), AnsibleUnsafeText)
assert isinstance(wrap_var(u'foo'), AnsibleUnsafeText)
if PY3:
assert isinstance(wrap_var(b'foo'), type(b''))
assert not isinstance(wrap_var(b'foo'), AnsibleUnsafe)
else:
assert isinstance(wrap_var(b'foo'), AnsibleUnsafeText)
def test_wrap_var_dict():
assert isinstance(wrap_var(dict(foo='bar')), dict)
assert not isinstance(wrap_var(dict(foo='bar')), AnsibleUnsafe)
assert isinstance(wrap_var(dict(foo='bar'))['foo'], AnsibleUnsafeText)
def test_wrap_var_dict_None():
assert wrap_var(dict(foo=None))['foo'] is None
assert not isinstance(wrap_var(dict(foo=None))['foo'], AnsibleUnsafe)
def test_wrap_var_list():
assert isinstance(wrap_var(['foo']), list)
assert not isinstance(wrap_var(['foo']), AnsibleUnsafe)
assert isinstance(wrap_var(['foo'])[0], AnsibleUnsafeText)
def test_wrap_var_list_None():
assert wrap_var([None])[0] is None
assert not isinstance(wrap_var([None])[0], AnsibleUnsafe)
def test_wrap_var_set():
assert isinstance(wrap_var(set(['foo'])), set)
assert not isinstance(wrap_var(set(['foo'])), AnsibleUnsafe)
for item in wrap_var(set(['foo'])):
assert isinstance(item, AnsibleUnsafeText)
def test_wrap_var_set_None():
for item in wrap_var(set([None])):
assert item is None
assert not isinstance(item, AnsibleUnsafe)
def test_wrap_var_tuple():
assert isinstance(wrap_var(('foo',)), tuple)
assert not isinstance(wrap_var(('foo',)), AnsibleUnsafe)
assert isinstance(wrap_var(('foo',))[0], type(''))
assert not isinstance(wrap_var(('foo',))[0], AnsibleUnsafe)
def test_wrap_var_None():
assert wrap_var(None) is None
assert not isinstance(wrap_var(None), AnsibleUnsafe)
def test_wrap_var_unsafe():
assert isinstance(wrap_var(AnsibleUnsafeText(u'foo')), AnsibleUnsafeText)
def test_AnsibleUnsafeText():
assert isinstance(AnsibleUnsafeText(u'foo'), AnsibleUnsafe)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,949 |
Bump deprecated skip option for first_found
|
##### SUMMARY
The `skip` option for `first_found` was indicated to be removed in 2.8.
The `version` under `deprecated` isn't meant to indicate the version the feature was deprecated in, but the version it will be removed in.
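Concretely, since the runtime warning in the plugin announces removal in 2.12, the `deprecated` block in DOCUMENTATION would be expected to carry that removal version rather than "2.8" — along these lines (illustrative, not the merged fix):

```yaml
skip:
  type: boolean
  default: False
  description: Return an empty list if no file is found, instead of an error.
  deprecated:
    why: A generic option to ignore errors exists for all lookups.
    version: "2.12"
    alternative: The generic ``errors=ignore``
```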
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/plugins/lookup/first_found.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
2.8
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/59949
|
https://github.com/ansible/ansible/pull/60161
|
c954c0727179064ce0ecf62336ec74173d36ec31
|
707e33793d683254a511cd1ec825df86a5121feb
| 2019-08-01T18:08:03Z |
python
| 2019-08-08T18:55:11Z |
changelogs/fragments/59949-undeprecate-first-found-skip.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,949 |
Bump deprecated skip option for first_found
|
##### SUMMARY
The `skip` option for `first_found` was indicated to be removed in 2.8.
The `version` under `deprecated` isn't meant to indicate the version the feature was deprecated in, but the version it will be removed in.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/plugins/lookup/first_found.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
2.8
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/59949
|
https://github.com/ansible/ansible/pull/60161
|
c954c0727179064ce0ecf62336ec74173d36ec31
|
707e33793d683254a511cd1ec825df86a5121feb
| 2019-08-01T18:08:03Z |
python
| 2019-08-08T18:55:11Z |
lib/ansible/plugins/lookup/first_found.py
|
# (c) 2013, seth vidal <[email protected]> red hat, inc
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = """
lookup: first_found
author: Seth Vidal <[email protected]>
version_added: historical
short_description: return first file found from list
description:
- This lookup checks a list of files and paths and returns the full path to the first combination found.
- As with all lookups, when fed relative paths it will try the current task's location first and go up the chain
to the containing role/play/include/etc's location.
- The list of files has precedence over the paths searched.
i.e. if a task in a role has a 'file1' in the play's relative path, that will be used; a 'file2' in the role's relative path will not.
- Either a list of files C(_terms) or a key `files` with a list of files is required for this plugin to operate.
notes:
- This lookup can be used in 'dual mode', either passing a list of file names or a dictionary that has C(files) and C(paths).
options:
_terms:
description: list of file names
files:
description: list of file names
paths:
description: list of paths in which to look for the files
skip:
type: boolean
default: False
description: Return an empty list if no file is found, instead of an error.
deprecated:
why: A generic option to ignore errors exists for all lookups.
version: "2.8"
alternative: The generic ``errors=ignore``
"""
EXAMPLES = """
- name: show first existing file or ignore if none do
debug: msg={{lookup('first_found', findme, errors='ignore')}}
vars:
findme:
- "/path/to/foo.txt"
- "bar.txt" # will be looked for in the files/ dir relative to the role and/or play
- "/path/to/biz.txt"
- name: |
include tasks only if files exist. Note the use of query() to return
a blank list for the loop if no files are found.
import_tasks: '{{ item }}'
vars:
params:
files:
- path/tasks.yaml
- path/other_tasks.yaml
loop: "{{ q('first_found', params, errors='ignore') }}"
- name: |
copy first existing file found to /some/file,
looking in relative directories from where the task is defined and
including any play objects that contain it
copy: src={{lookup('first_found', findme)}} dest=/some/file
vars:
findme:
- foo
- "{{inventory_hostname}}"
- bar
- name: same copy but specific paths
copy: src={{lookup('first_found', params)}} dest=/some/file
vars:
params:
files:
- foo
- "{{inventory_hostname}}"
- bar
paths:
- /tmp/production
- /tmp/staging
- name: INTERFACES | Create Ansible header for /etc/network/interfaces
template:
src: "{{ lookup('first_found', findme)}}"
dest: "/etc/foo.conf"
vars:
findme:
- "{{ ansible_virtualization_type }}_foo.conf"
- "default_foo.conf"
- name: read vars from first file found, use 'vars/' relative subdir
include_vars: "{{lookup('first_found', params)}}"
vars:
params:
files:
- '{{ansible_os_distribution}}.yml'
- '{{ansible_os_family}}.yml'
- default.yml
paths:
- 'vars'
"""
RETURN = """
_raw:
description:
- path to file found
"""
import os
from jinja2.exceptions import UndefinedError
from ansible.errors import AnsibleFileNotFound, AnsibleLookupError, AnsibleUndefinedVariable
from ansible.module_utils.six import string_types
from ansible.module_utils.parsing.convert_bool import boolean
from ansible.plugins.lookup import LookupBase
class LookupModule(LookupBase):
def run(self, terms, variables, **kwargs):
anydict = False
skip = False
for term in terms:
if isinstance(term, dict):
anydict = True
total_search = []
if anydict:
for term in terms:
if isinstance(term, dict):
if 'skip' in term:
self._display.deprecated('Use errors="ignore" instead of skip', version='2.12')
files = term.get('files', [])
paths = term.get('paths', [])
skip = boolean(term.get('skip', False), strict=False)
filelist = files
if isinstance(files, string_types):
files = files.replace(',', ' ')
files = files.replace(';', ' ')
filelist = files.split(' ')
pathlist = paths
if paths:
if isinstance(paths, string_types):
paths = paths.replace(',', ' ')
paths = paths.replace(':', ' ')
paths = paths.replace(';', ' ')
pathlist = paths.split(' ')
if not pathlist:
total_search = filelist
else:
for path in pathlist:
for fn in filelist:
f = os.path.join(path, fn)
total_search.append(f)
else:
total_search.append(term)
else:
total_search = self._flatten(terms)
for fn in total_search:
try:
fn = self._templar.template(fn)
except (AnsibleUndefinedVariable, UndefinedError):
continue
# get subdir if set by task executor, default to files otherwise
subdir = getattr(self, '_subdir', 'files')
path = None
path = self.find_file_in_search_path(variables, subdir, fn, ignore_missing=True)
if path is not None:
return [path]
if skip:
return []
raise AnsibleLookupError("No file was found when using first_found. Use errors='ignore' to allow this task to be skipped if no "
"files are found")
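The `files`/`paths` expansion in `LookupModule.run` above boils down to the following dependency-free sketch: both keys accept either lists or delimited strings (`,` and `;`, plus `:` for paths), and every path/file combination is tried in order. This uses `str.split()` for brevity where the plugin uses `split(' ')`; it is an illustration, not the plugin itself.

```python
import os


def build_search_list(term):
    # Expand one dict-style term into the ordered list of candidate paths.
    files = term.get('files', [])
    paths = term.get('paths', [])
    if isinstance(files, str):
        files = files.replace(',', ' ').replace(';', ' ').split()
    if isinstance(paths, str):
        paths = paths.replace(',', ' ').replace(':', ' ').replace(';', ' ').split()
    if not paths:
        return list(files)
    # Paths are the outer loop: all files are tried under the first path
    # before moving to the next one.
    return [os.path.join(p, f) for p in paths for f in files]
```

So `files: 'a.yml,b.yml'` with `paths: '/x:/y'` yields `/x/a.yml`, `/x/b.yml`, `/y/a.yml`, `/y/b.yml`, matching the nested loops in the plugin.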
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,942 |
first_found error message recommends deprecated usage
|
##### SUMMARY
<!--- Explain the problem briefly below -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
with_first_found lookup plugin
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.1
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/Users/jeff.geerling/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python2.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.15 (default, Jul 23 2018, 21:27:06) [GCC 4.2.1 Compatible Apple LLVM 9.1.0 (clang-902.0.39.2)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_NOCOWS(/Users/jeff.geerling/Dropbox/Development/GitHub/drupal-pi/ansible.cfg) = True
ANSIBLE_PIPELINING(/Users/jeff.geerling/Dropbox/Development/GitHub/drupal-pi/ansible.cfg) = True
ANSIBLE_SSH_CONTROL_PATH(/Users/jeff.geerling/Dropbox/Development/GitHub/drupal-pi/ansible.cfg) = /tmp/ansible-ssh-%%h-%
DEFAULT_LOAD_CALLBACK_PLUGINS(/Users/jeff.geerling/Dropbox/Development/GitHub/drupal-pi/ansible.cfg) = True
DEFAULT_ROLES_PATH(/Users/jeff.geerling/Dropbox/Development/GitHub/drupal-pi/ansible.cfg) = [u'/Users/jeff.geerling/Drop
DEFAULT_STDOUT_CALLBACK(/Users/jeff.geerling/Dropbox/Development/GitHub/drupal-pi/ansible.cfg) = yaml
RETRY_FILES_ENABLED(/Users/jeff.geerling/Dropbox/Development/GitHub/drupal-pi/ansible.cfg) = False
```
##### OS / ENVIRONMENT
macOS 10.14
##### STEPS TO REPRODUCE
Add a task that uses with_first_found:
```yaml
- name: Include non-existent-file.yml if it exists.
include_tasks: "{{ item }}"
with_first_found:
- files:
- non-existent-file.yml
tags: ['always']
```
Get error message:
```
TASK [Include non-existent-file.yml if it exists.] ************************************************************************
fatal: [10.0.100.136]: FAILED! =>
msg: 'No file was found when using first_found. Use the ''skip: true'' option to allow this task to be skipped if no files are found'
```
Change task to use recommended option:
```
- name: Include non-existent-file.yml if it exists.
include_tasks: "{{ item }}"
with_first_found:
- files:
- non-existent-file.yml
skip: true
tags: ['always']
```
```
TASK [Include non-existent-file.yml if it exists.] ************************************************************************
[DEPRECATION WARNING]: Use errors="ignore" instead of skip. This feature will be removed in version 2.12. Deprecation
warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
```
##### EXPECTED RESULTS
Documentation for failed task includes a non-deprecated usage recommendation.
##### ACTUAL RESULTS
Documentation for failed task recommends a deprecated parameter which, when used, throws a deprecation warning.
Related to: https://github.com/ansible/ansible/issues/56713 (I'm not sure why that particular issue was closed, though.)
|
https://github.com/ansible/ansible/issues/58942
|
https://github.com/ansible/ansible/pull/60161
|
c954c0727179064ce0ecf62336ec74173d36ec31
|
707e33793d683254a511cd1ec825df86a5121feb
| 2019-07-10T19:50:05Z |
python
| 2019-08-08T18:55:11Z |
changelogs/fragments/59949-undeprecate-first-found-skip.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,942 |
first_found error message recommends deprecated usage
|
##### SUMMARY
<!--- Explain the problem briefly below -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
with_first_found lookup plugin
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.1
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/Users/jeff.geerling/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python2.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.15 (default, Jul 23 2018, 21:27:06) [GCC 4.2.1 Compatible Apple LLVM 9.1.0 (clang-902.0.39.2)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_NOCOWS(/Users/jeff.geerling/Dropbox/Development/GitHub/drupal-pi/ansible.cfg) = True
ANSIBLE_PIPELINING(/Users/jeff.geerling/Dropbox/Development/GitHub/drupal-pi/ansible.cfg) = True
ANSIBLE_SSH_CONTROL_PATH(/Users/jeff.geerling/Dropbox/Development/GitHub/drupal-pi/ansible.cfg) = /tmp/ansible-ssh-%%h-%
DEFAULT_LOAD_CALLBACK_PLUGINS(/Users/jeff.geerling/Dropbox/Development/GitHub/drupal-pi/ansible.cfg) = True
DEFAULT_ROLES_PATH(/Users/jeff.geerling/Dropbox/Development/GitHub/drupal-pi/ansible.cfg) = [u'/Users/jeff.geerling/Drop
DEFAULT_STDOUT_CALLBACK(/Users/jeff.geerling/Dropbox/Development/GitHub/drupal-pi/ansible.cfg) = yaml
RETRY_FILES_ENABLED(/Users/jeff.geerling/Dropbox/Development/GitHub/drupal-pi/ansible.cfg) = False
```
##### OS / ENVIRONMENT
macOS 10.14
##### STEPS TO REPRODUCE
Add a task that uses with_first_found:
```yaml
- name: Include non-existent-file.yml if it exists.
include_tasks: "{{ item }}"
with_first_found:
- files:
- non-existent-file.yml
tags: ['always']
```
Get error message:
```
TASK [Include non-existent-file.yml if it exists.] ************************************************************************
fatal: [10.0.100.136]: FAILED! =>
msg: 'No file was found when using first_found. Use the ''skip: true'' option to allow this task to be skipped if no files are found'
```
Change task to use recommended option:
```
- name: Include non-existent-file.yml if it exists.
include_tasks: "{{ item }}"
with_first_found:
- files:
- non-existent-file.yml
skip: true
tags: ['always']
```
```
TASK [Include non-existent-file.yml if it exists.] ************************************************************************
[DEPRECATION WARNING]: Use errors="ignore" instead of skip. This feature will be removed in version 2.12. Deprecation
warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
```
##### EXPECTED RESULTS
Documentation for failed task includes a non-deprecated usage recommendation.
##### ACTUAL RESULTS
Documentation for failed task recommends a deprecated parameter which, when used, throws a deprecation warning.
Related to: https://github.com/ansible/ansible/issues/56713 (I'm not sure why that particular issue was closed, though.)
|
https://github.com/ansible/ansible/issues/58942
|
https://github.com/ansible/ansible/pull/60161
|
c954c0727179064ce0ecf62336ec74173d36ec31
|
707e33793d683254a511cd1ec825df86a5121feb
| 2019-07-10T19:50:05Z |
python
| 2019-08-08T18:55:11Z |
lib/ansible/plugins/lookup/first_found.py
|
# (c) 2013, seth vidal <[email protected]> red hat, inc
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = """
lookup: first_found
author: Seth Vidal <[email protected]>
version_added: historical
short_description: return first file found from list
description:
- This lookup checks a list of files and paths and returns the full path to the first combination found.
- As with all lookups, when fed relative paths it will try the current task's location first and go up the chain
to the containing role/play/include/etc's location.
- The list of files has precedence over the paths searched.
i.e. if a task in a role has a 'file1' in the play's relative path, that will be used; a 'file2' in the role's relative path will not.
- Either a list of files C(_terms) or a key `files` with a list of files is required for this plugin to operate.
notes:
- This lookup can be used in 'dual mode', either passing a list of file names or a dictionary that has C(files) and C(paths).
options:
_terms:
description: list of file names
files:
description: list of file names
paths:
description: list of paths in which to look for the files
skip:
type: boolean
default: False
description: Return an empty list if no file is found, instead of an error.
deprecated:
why: A generic option to ignore errors exists for all lookups.
version: "2.8"
alternative: The generic ``errors=ignore``
"""
EXAMPLES = """
- name: show first existing file or ignore if none do
debug: msg={{lookup('first_found', findme, errors='ignore')}}
vars:
findme:
- "/path/to/foo.txt"
- "bar.txt" # will be looked for in the files/ dir relative to the role and/or play
- "/path/to/biz.txt"
- name: |
include tasks only if files exist. Note the use of query() to return
a blank list for the loop if no files are found.
import_tasks: '{{ item }}'
vars:
params:
files:
- path/tasks.yaml
- path/other_tasks.yaml
loop: "{{ q('first_found', params, errors='ignore') }}"
- name: |
copy first existing file found to /some/file,
looking in relative directories from where the task is defined and
including any play objects that contain it
copy: src={{lookup('first_found', findme)}} dest=/some/file
vars:
findme:
- foo
- "{{inventory_hostname}}"
- bar
- name: same copy but specific paths
copy: src={{lookup('first_found', params)}} dest=/some/file
vars:
params:
files:
- foo
- "{{inventory_hostname}}"
- bar
paths:
- /tmp/production
- /tmp/staging
- name: INTERFACES | Create Ansible header for /etc/network/interfaces
template:
src: "{{ lookup('first_found', findme)}}"
dest: "/etc/foo.conf"
vars:
findme:
- "{{ ansible_virtualization_type }}_foo.conf"
- "default_foo.conf"
- name: read vars from first file found, use 'vars/' relative subdir
include_vars: "{{lookup('first_found', params)}}"
vars:
params:
files:
- '{{ansible_os_distribution}}.yml'
- '{{ansible_os_family}}.yml'
- default.yml
paths:
- 'vars'
"""
RETURN = """
_raw:
description:
- path to file found
"""
import os
from jinja2.exceptions import UndefinedError
from ansible.errors import AnsibleFileNotFound, AnsibleLookupError, AnsibleUndefinedVariable
from ansible.module_utils.six import string_types
from ansible.module_utils.parsing.convert_bool import boolean
from ansible.plugins.lookup import LookupBase
class LookupModule(LookupBase):
def run(self, terms, variables, **kwargs):
anydict = False
skip = False
for term in terms:
if isinstance(term, dict):
anydict = True
total_search = []
if anydict:
for term in terms:
if isinstance(term, dict):
if 'skip' in term:
self._display.deprecated('Use errors="ignore" instead of skip', version='2.12')
files = term.get('files', [])
paths = term.get('paths', [])
skip = boolean(term.get('skip', False), strict=False)
filelist = files
if isinstance(files, string_types):
files = files.replace(',', ' ')
files = files.replace(';', ' ')
filelist = files.split(' ')
pathlist = paths
if paths:
if isinstance(paths, string_types):
paths = paths.replace(',', ' ')
paths = paths.replace(':', ' ')
paths = paths.replace(';', ' ')
pathlist = paths.split(' ')
if not pathlist:
total_search = filelist
else:
for path in pathlist:
for fn in filelist:
f = os.path.join(path, fn)
total_search.append(f)
else:
total_search.append(term)
else:
total_search = self._flatten(terms)
for fn in total_search:
try:
fn = self._templar.template(fn)
except (AnsibleUndefinedVariable, UndefinedError):
continue
# get subdir if set by task executor, default to files otherwise
subdir = getattr(self, '_subdir', 'files')
path = None
path = self.find_file_in_search_path(variables, subdir, fn, ignore_missing=True)
if path is not None:
return [path]
if skip:
return []
raise AnsibleLookupError("No file was found when using first_found. Use errors='ignore' to allow this task to be skipped if no "
"files are found")
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,992 |
[Docs] add pointer to content_collector for migrating modules to collections
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below, add suggestions to wording or structure -->
<!--- HINT: Did you know the documentation has an "Edit on GitHub" link on every page ? -->
Add a description of https://github.com/ansible/content_collector to the collections techpreview file with appropriate warnings that this tool is still in active development, but is available for experimentation and feedback.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
<!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure -->
docs.ansible.com
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. OS version, browser, etc. -->
##### ADDITIONAL INFORMATION
<!--- Describe how this improves the documentation, e.g. before/after situation or screenshots -->
<!--- HINT: You can paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/58992
|
https://github.com/ansible/ansible/pull/59881
|
b0ec91e69e4cc578f0238ea7b60aeac91f2eedcd
|
ed21aba4422e82fe633e2e164ff5810bdb597595
| 2019-07-11T18:52:13Z |
python
| 2019-08-08T19:17:26Z |
docs/docsite/rst/dev_guide/collections_tech_preview.rst
|
:orphan:
.. _collections:
***********
Collections
***********
Collections are a distribution format for Ansible content. They can be used to
package and distribute playbooks, roles, modules, and plugins.
You can publish and use collections through `Ansible Galaxy <https://galaxy.ansible.com>`_.
.. important::
This feature is available in Ansible 2.8 as a *Technology Preview* and therefore is not fully supported. It should only be used for testing and should not be deployed in a production environment.
Future Galaxy or Ansible releases may introduce breaking changes.
.. contents::
:local:
:depth: 2
Collection structure
====================
Collections follow a simple data structure. None of the directories are required unless you have specific content that belongs in one of them. A collection does require a ``galaxy.yml`` file at the root level of the collection. This file contains all of the metadata that Galaxy
and other tools need in order to package, build and publish the collection.::
collection/
├── docs/
├── galaxy.yml
├── plugins/
│ ├── modules/
│ │ └── module1.py
│ ├── inventory/
│ └── .../
├── README.md
├── roles/
│ ├── role1/
│ ├── role2/
│ └── .../
├── playbooks/
│ ├── files/
│ ├── vars/
│ ├── templates/
│ └── tasks/
└── tests/
.. note::
* Ansible only accepts ``.yml`` extensions for galaxy.yml.
* See the `draft collection <https://github.com/bcoca/collection>`_ for an example of a full collection structure.
* Not all directories are currently in use. Those are placeholders for future features.
galaxy.yml
----------
A collection must have a ``galaxy.yml`` file that contains the necessary information to build a collection artifact.
See :ref:`collections_galaxy_meta` for details.
docs directory
---------------
Keep general documentation for the collection here. Plugins and modules still keep their specific documentation embedded as Python docstrings. Use the ``docs`` folder to describe how to use the roles and plugins the collection provides, role requirements, and so on. Currently we are looking at Markdown as the standard format for documentation files, but this is subject to change.
Use ``ansible-doc`` to view documentation for plugins inside a collection:
.. code-block:: bash
ansible-doc -t lookup my_namespace.my_collection.lookup1
The ``ansible-doc`` command requires the fully qualified collection name (FQCN) to display specific plugin documentation. In this example, ``my_namespace`` is the namespace and ``my_collection`` is the collection name within that namespace.
.. note:: The Ansible collection namespace is defined in the ``galaxy.yml`` file and is not equivalent to the GitHub repository name.
plugins directory
------------------
Add a subdirectory here for each plugin type, including ``module_utils``, which is usable not only by modules but by any other plugin via its FQCN. This is a way to distribute modules, lookups, filters, and so on, without having to import a role in every play.
module_utils
^^^^^^^^^^^^
When coding with ``module_utils`` in a collection, the Python ``import`` statement needs to take into account the FQCN along with the ``ansible_collections`` convention. The resulting Python import will look like ``from ansible_collections.{namespace}.{collection}.plugins.module_utils.{util} import {something}``
The following example snippet shows a module using both default Ansible ``module_utils`` and
those provided by a collection. In this example the namespace is
``ansible_example``, the collection is ``community``, and the ``module_util`` in
question is called ``qradar`` such that the FQCN is ``ansible_example.community.plugins.module_utils.qradar``:
.. code-block:: python
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_text
from ansible.module_utils.six.moves.urllib.parse import urlencode, quote_plus
from ansible.module_utils.six.moves.urllib.error import HTTPError
from ansible_collections.ansible_example.community.plugins.module_utils.qradar import QRadarRequest
argspec = dict(
name=dict(required=True, type='str'),
state=dict(choices=['present', 'absent'], required=True),
)
module = AnsibleModule(
argument_spec=argspec,
supports_check_mode=True
)
qradar_request = QRadarRequest(
module,
headers={"Content-Type": "application/json"},
not_rest_data_keys=['state']
)
roles directory
----------------
Collection roles are mostly the same as existing roles, but with a couple of limitations:
- Role names are now limited to lowercase alphanumeric characters plus ``_``, and must start with an alphabetic character.
- Roles in a collection cannot contain plugins any more. Plugins must live in the collection ``plugins`` directory tree. Each plugin is accessible to all roles in the collection.
The directory name of the role is used as the role name. Therefore, the directory name must comply with the
above role name rules.
The collection import into Galaxy will fail if a role name does not comply with these rules.
You can migrate 'traditional roles' into a collection, but they must follow the rules above. You may need to rename roles if they don't conform. You will have to move or link any role-based plugins to the collection-specific directories.
.. note::
For roles imported into Galaxy directly from a GitHub repository, setting the ``role_name`` value in the role's
metadata overrides the role name used by Galaxy. For collections, that value is ignored. When importing a
collection, Galaxy uses the role directory as the name of the role and ignores the ``role_name`` metadata value.
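The naming rules above can be expressed as a simple check. This is an illustrative sketch only; ``is_valid_collection_role_name`` is a hypothetical helper, not Galaxy's actual validation code:

```python
import re

# Collection role naming rules as described above: only lowercase
# alphanumerics and "_", starting with an alphabetic character.
ROLE_NAME_RE = re.compile(r'^[a-z][a-z0-9_]*$')

def is_valid_collection_role_name(name):
    """Return True if the directory name is usable as a collection role name."""
    return bool(ROLE_NAME_RE.match(name))

print(is_valid_collection_role_name('my_role1'))  # True
print(is_valid_collection_role_name('My-Role'))   # False: uppercase and "-"
print(is_valid_collection_role_name('1role'))     # False: starts with a digit
```

Because the directory name *is* the role name, running a check like this over ``roles/*/`` before building the collection catches import failures early.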
playbooks directory
--------------------
TBD.
tests directory
----------------
TBD. Expect tests for the collection itself to reside here.
.. _creating_collections:
Creating collections
======================
To create a collection:
#. Initialize a collection with :ref:`ansible-galaxy collection init<creating_collections_skeleton>` to create the skeleton directory structure.
#. Add your content to the collection.
#. Build the collection into a collection artifact with :ref:`ansible-galaxy collection build<building_collections>`.
#. Publish the collection artifact to Galaxy with :ref:`ansible-galaxy collection publish<publishing_collections>`.
A user can then install your collection on their systems.
.. note::
Any references to ``ansible-galaxy`` below are to a 'working version' that is in development for the 2.9
release. As such, the command and this documentation section are subject to frequent change.
Currently the ``ansible-galaxy collection`` command implements the following subcommands:
* ``init``: Create a basic collection skeleton based on the default template included with Ansible or your own template.
* ``build``: Create a collection artifact that can be uploaded to Galaxy or your own repository.
* ``publish``: Publish a built collection artifact to Galaxy.
* ``install``: Install one or more collections.
To learn more about the ``ansible-galaxy`` CLI tool, see the :ref:`ansible-galaxy` man page.
.. _creating_collections_skeleton:
Creating a collection skeleton
------------------------------
To start a new collection:
.. code-block:: bash
collection_dir#> ansible-galaxy collection init my_namespace.my_collection
Then you can populate the directories with the content you want inside the collection. See
https://github.com/bcoca/collection to get a better idea of what you can place inside a collection.
.. _building_collections:
Building collections
--------------------
To build a collection, run ``ansible-galaxy collection build`` from inside the root directory of the collection:
.. code-block:: bash
collection_dir#> ansible-galaxy collection build
This creates a tarball of the built collection in the current directory, which can be uploaded to Galaxy.::
my_collection/
├── galaxy.yml
├── ...
├── my_namespace-my_collection-1.0.0.tar.gz
└── ...
.. note::
Certain files and folders are excluded when building the collection artifact. This is not currently configurable
and is a work in progress so the collection artifact may contain files you would not wish to distribute.
This tarball is mainly intended for uploading to Galaxy
as a distribution method, but you can use it directly to install the collection on target systems.
.. _publishing_collections:
Publishing collections
----------------------
You can publish collections to Galaxy using the ``ansible-galaxy collection publish`` command or the Galaxy UI itself.
.. note:: Once you upload a version of a collection, you cannot delete or modify that version. Ensure that everything looks okay before you upload it.
Upload using ansible-galaxy
^^^^^^^^^^^^^^^^^^^^^^^^^^^
To upload the collection artifact with the ``ansible-galaxy`` command:
.. code-block:: bash
ansible-galaxy collection publish path/to/my_namespace-my_collection-1.0.0.tar.gz --api-key=SECRET
The above command triggers an import process, just as if you uploaded the collection through the Galaxy website.
The command waits until the import process completes before reporting the status back. If you wish to continue
without waiting for the import result, use the ``--no-wait`` argument and manually look at the import progress in your
`My Imports <https://galaxy.ansible.com/my-imports/>`_ page.
The API key is a secret token used by Ansible Galaxy to protect your content. You can find your API key at your
`Galaxy profile preferences <https://galaxy.ansible.com/me/preferences>`_ page.
Upload from the Galaxy website
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To upload your collection artifact directly on Galaxy:
#. Go to the `My Content <https://galaxy.ansible.com/my-content/namespaces>`_ page, and click the **Add Content** button on one of your namespaces.
#. From the **Add Content** dialogue, click **Upload New Collection**, and select the collection archive file from your local filesystem.
When uploading collections, it doesn't matter which namespace you select. The collection will be uploaded to the
namespace specified in the collection metadata in the ``galaxy.yml`` file. If you're not an owner of the
namespace, the upload request will fail.
Once Galaxy uploads and accepts a collection, you will be redirected to the **My Imports** page, which displays output from the
import process, including any errors or warnings about the metadata and content contained in the collection.
Collection versions
-------------------
Once you upload a version of a collection, you cannot delete or modify that version. Ensure that everything looks okay before
uploading. The only way to change a collection is to release a new version. The latest version of a collection (by highest version number)
will be the version displayed everywhere in Galaxy; however, users will still be able to download older versions.
Installing collections
----------------------
You can use the ``ansible-galaxy collection install`` command to install a collection on your system. The collection by default is installed at ``/path/ansible_collections/my_namespace/my_collection``. You can optionally add the ``-p`` option to specify an alternate location.
To install a collection hosted in Galaxy:
.. code-block:: bash
ansible-galaxy collection install my_namespace.my_collection -p /path
You can also directly use the tarball from your build:
.. code-block:: bash
ansible-galaxy collection install my_namespace-my_collection-1.0.0.tar.gz -p ./collections/ansible_collections
.. note::
The install command automatically appends the path ``ansible_collections`` to the one specified with the ``-p`` option unless the
parent directory is already in a folder called ``ansible_collections``.
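The path rule in the note above can be sketched as follows; ``resolve_install_path`` is a hypothetical helper written for illustration, not part of ``ansible-galaxy``:

```python
import os

def resolve_install_path(p):
    # Sketch of the rule described above: append "ansible_collections"
    # to the -p path unless it already ends in such a directory.
    if os.path.basename(os.path.normpath(p)) != 'ansible_collections':
        p = os.path.join(p, 'ansible_collections')
    return p

print(resolve_install_path('/path'))
# /path/ansible_collections
print(resolve_install_path('./collections/ansible_collections'))
# ./collections/ansible_collections (left as-is)
```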
You should use one of the values configured in :ref:`COLLECTIONS_PATHS` for your path. This is also where Ansible itself will expect to find collections when attempting to use them.
You can also keep a collection adjacent to the current playbook, under a ``collections/ansible_collections/`` directory structure.
::
play.yml
├── collections/
│ └── ansible_collections/
│ └── my_namespace/
│ └── my_collection/<collection structure lives here>
Installing an older version of a collection
-------------------------------------------
By default ``ansible-galaxy`` installs the latest available version of a collection, but you can add a version range
identifier to install a specific version.
To install the 1.0.0 version of the collection:
.. code-block:: bash
ansible-galaxy collection install my_namespace.my_collection:1.0.0
To install the 1.0.0-beta.1 version of the collection:
.. code-block:: bash
ansible-galaxy collection install my_namespace.my_collection:==1.0.0-beta.1
To install the collections that are greater than or equal to 1.0.0 or less than 2.0.0:
.. code-block:: bash
ansible-galaxy collection install my_namespace.my_collection:>=1.0.0,<2.0.0
You can specify multiple range identifiers separated by ``,``. You can use the following range identifiers:
* ``*``: Any version. This is the default when no range is specified.
* ``!=``: Version is not equal to the one specified.
* ``==``: Version must be the one specified.
* ``>=``: Version is greater than or equal to the one specified.
* ``>``: Version is greater than the one specified.
* ``<=``: Version is less than or equal to the one specified.
* ``<``: Version is less than the one specified.
.. note::
The ``ansible-galaxy`` command ignores any pre-release versions unless the ``==`` range identifier is used to
pin that pre-release version explicitly.
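The range identifiers above can be sketched in Python. This is an illustrative matcher only, not ``ansible-galaxy``'s actual implementation, and it ignores pre-release suffixes; ``matches`` is a hypothetical function name:

```python
import operator

# Comparison operators for the documented range identifiers.
OPS = {'==': operator.eq, '!=': operator.ne, '>=': operator.ge,
       '>': operator.gt, '<=': operator.le, '<': operator.lt}

def _parse(version):
    # Naive numeric parse; real versions may carry pre-release suffixes.
    return tuple(int(p) for p in version.split('.'))

def matches(version, range_spec):
    """Check a version against comma-separated range identifiers."""
    for clause in range_spec.split(','):
        clause = clause.strip()
        if clause == '*':
            continue
        # Check two-character operators before their one-character prefixes.
        for op_text in ('>=', '<=', '==', '!=', '>', '<'):
            if clause.startswith(op_text):
                if not OPS[op_text](_parse(version), _parse(clause[len(op_text):])):
                    return False
                break
        else:
            # A bare version means an exact match.
            if _parse(version) != _parse(clause):
                return False
    return True

print(matches('1.5.0', '>=1.0.0,<2.0.0'))  # True
print(matches('2.0.0', '>=1.0.0,<2.0.0'))  # False
```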
.. _collection_requirements_file:
Install multiple collections with a requirements file
-----------------------------------------------------
You can also set up a ``requirements.yml`` file to install multiple collections in one command. This file is a YAML file in the format:
.. code-block:: yaml+jinja
---
collections:
# With just the collection name
- my_namespace.my_collection
# With the collection name, version, and source options
- name: my_namespace.my_other_collection
version: 'version range identifiers (default: ``*``)'
source: 'The Galaxy URL to pull the collection from (default: ``--api-server`` from cmdline)'
The ``version`` key can take in the same range identifier format documented above.
Using collections
=================
Once installed, you can reference collection content by its FQCN:
.. code-block:: yaml
- hosts: all
tasks:
- my_namespace.my_collection.mymodule:
option1: value
This works for roles or any type of plugin distributed within the collection:
.. code-block:: yaml
- hosts: all
tasks:
- include_role:
name: my_namespace.my_collection.role1
- my_namespace.my_collection.mymodule:
option1: value
- debug:
msg: '{{ lookup("my_namespace.my_collection.lookup1", "param1") | my_namespace.my_collection.filter1 }}'
To avoid a lot of typing, you can use the ``collections`` keyword added in Ansible 2.8:
.. code-block:: yaml
- hosts: all
collections:
- my_namespace.my_collection
tasks:
- include_role:
name: role1
- mymodule:
option1: value
- debug:
msg: '{{ lookup("my_namespace.my_collection.lookup1", "param1") | my_namespace.my_collection.filter1 }}'
This keyword creates a 'search path' for non-namespaced plugin references. It does not import roles or anything else.
Notice that you still need the FQCN for plugins other than modules and action plugins (such as lookups and filters).
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,541 |
vmware_guest: autoselect_datastore=True should select a datatore reachable on the ESXi
|
##### SUMMARY
The following snippet from https://github.com/ansible/ansible/blob/devel/test/integration/targets/vmware_guest/tasks/create_d1_c1_f0.yml uses `autoselect_datastore: True`.
```yaml
- name: create new VMs
vmware_guest:
validate_certs: False
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: newvm_1
#template: "{{ item|basename }}"
guest_id: centos64Guest
datacenter: "{{ dc1 }}"
hardware:
num_cpus: 1
num_cpu_cores_per_socket: 1
memory_mb: 256
hotadd_memory: true
hotadd_cpu: false
max_connections: 10
disk:
- size: 1gb
type: thin
autoselect_datastore: True
state: poweredoff
folder: F0
```
One of my two datastores is dedicated to the ISO image and so is readOnly. But it still gets selected consistently. I use the following hack to avoid the problem:
```diff
diff --git a/lib/ansible/modules/cloud/vmware/vmware_guest.py b/lib/ansible/modules/cloud/vmware/vmware_guest.py
index 6a63e97798..3648e3e87f 100644
--- a/lib/ansible/modules/cloud/vmware/vmware_guest.py
+++ b/lib/ansible/modules/cloud/vmware/vmware_guest.py
@@ -1925,6 +1925,15 @@ class PyVmomiHelper(PyVmomi):
datastore_freespace = 0
for ds in datastores:
+ is_readonly = False
+ for h in ds.host:
+ if h.mountInfo.accessMode == 'readOnly':
+ is_readonly = True
+ break
+
+ if is_readonly:
+ continue
+
if (ds.summary.freeSpace > datastore_freespace) or (ds.summary.freeSpace == datastore_freespace and not datastore):
# If datastore field is provided, filter destination datastores
if 'datastore' in self.params['disk'][0] and \
```
<!--- Explain the problem briefly below -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_guest
|
https://github.com/ansible/ansible/issues/58541
|
https://github.com/ansible/ansible/pull/58872
|
57dc7ec265bbc741126fa46e44ff3bb6adae5624
|
647b78a09cc45df0c25eaf794c06915f3e2ee9c5
| 2019-06-29T02:40:20Z |
python
| 2019-08-09T02:26:52Z |
lib/ansible/module_utils/vmware.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2015, Joseph Callen <jcallen () csc.com>
# Copyright: (c) 2018, Ansible Project
# Copyright: (c) 2018, James E. King III (@jeking3) <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import atexit
import ansible.module_utils.common._collections_compat as collections_compat
import json
import os
import re
import ssl
import time
import traceback
from random import randint
from distutils.version import StrictVersion
REQUESTS_IMP_ERR = None
try:
# requests is required for exception handling of the ConnectionError
import requests
HAS_REQUESTS = True
except ImportError:
REQUESTS_IMP_ERR = traceback.format_exc()
HAS_REQUESTS = False
PYVMOMI_IMP_ERR = None
try:
from pyVim import connect
from pyVmomi import vim, vmodl, VmomiSupport
HAS_PYVMOMI = True
HAS_PYVMOMIJSON = hasattr(VmomiSupport, 'VmomiJSONEncoder')
except ImportError:
PYVMOMI_IMP_ERR = traceback.format_exc()
HAS_PYVMOMI = False
HAS_PYVMOMIJSON = False
from ansible.module_utils._text import to_text, to_native
from ansible.module_utils.six import integer_types, iteritems, string_types, raise_from
from ansible.module_utils.six.moves.urllib.parse import urlparse
from ansible.module_utils.basic import env_fallback, missing_required_lib
from ansible.module_utils.urls import generic_urlparse
class TaskError(Exception):
def __init__(self, *args, **kwargs):
super(TaskError, self).__init__(*args, **kwargs)
def wait_for_task(task, max_backoff=64, timeout=3600):
"""Wait for given task using exponential back-off algorithm.
Args:
task: VMware task object
max_backoff: Maximum amount of sleep time in seconds
timeout: Timeout for the given task in seconds
Returns: Tuple with True and result for successful task
Raises: TaskError on failure
"""
failure_counter = 0
start_time = time.time()
while True:
if time.time() - start_time >= timeout:
raise TaskError("Timeout")
if task.info.state == vim.TaskInfo.State.success:
return True, task.info.result
if task.info.state == vim.TaskInfo.State.error:
error_msg = task.info.error
host_thumbprint = None
try:
error_msg = error_msg.msg
if hasattr(task.info.error, 'thumbprint'):
host_thumbprint = task.info.error.thumbprint
except AttributeError:
pass
finally:
raise_from(TaskError(error_msg, host_thumbprint), task.info.error)
if task.info.state in [vim.TaskInfo.State.running, vim.TaskInfo.State.queued]:
sleep_time = min(2 ** failure_counter + randint(1, 1000) / 1000, max_backoff)
time.sleep(sleep_time)
failure_counter += 1
def wait_for_vm_ip(content, vm, timeout=300):
facts = dict()
interval = 15
while timeout > 0:
_facts = gather_vm_facts(content, vm)
if _facts['ipv4'] or _facts['ipv6']:
facts = _facts
break
time.sleep(interval)
timeout -= interval
return facts
def find_obj(content, vimtype, name, first=True, folder=None):
container = content.viewManager.CreateContainerView(folder or content.rootFolder, recursive=True, type=vimtype)
# Get all objects matching type (and name if given)
obj_list = [obj for obj in container.view if not name or to_text(obj.name) == to_text(name)]
container.Destroy()
# Return first match or None
if first:
if obj_list:
return obj_list[0]
return None
# Return all matching objects or empty list
return obj_list
def find_dvspg_by_name(dv_switch, portgroup_name):
portgroups = dv_switch.portgroup
for pg in portgroups:
if pg.name == portgroup_name:
return pg
return None
def find_object_by_name(content, name, obj_type, folder=None, recurse=True):
if not isinstance(obj_type, list):
obj_type = [obj_type]
objects = get_all_objs(content, obj_type, folder=folder, recurse=recurse)
for obj in objects:
if obj.name == name:
return obj
return None
def find_cluster_by_name(content, cluster_name, datacenter=None):
if datacenter:
folder = datacenter.hostFolder
else:
folder = content.rootFolder
return find_object_by_name(content, cluster_name, [vim.ClusterComputeResource], folder=folder)
def find_datacenter_by_name(content, datacenter_name):
return find_object_by_name(content, datacenter_name, [vim.Datacenter])
def get_parent_datacenter(obj):
""" Walk the parent tree to find the objects datacenter """
if isinstance(obj, vim.Datacenter):
return obj
datacenter = None
while True:
if not hasattr(obj, 'parent'):
break
obj = obj.parent
if isinstance(obj, vim.Datacenter):
datacenter = obj
break
return datacenter
def find_datastore_by_name(content, datastore_name):
return find_object_by_name(content, datastore_name, [vim.Datastore])
def find_dvs_by_name(content, switch_name, folder=None):
return find_object_by_name(content, switch_name, [vim.DistributedVirtualSwitch], folder=folder)
def find_hostsystem_by_name(content, hostname):
return find_object_by_name(content, hostname, [vim.HostSystem])
def find_resource_pool_by_name(content, resource_pool_name):
return find_object_by_name(content, resource_pool_name, [vim.ResourcePool])
def find_network_by_name(content, network_name):
return find_object_by_name(content, network_name, [vim.Network])
def find_vm_by_id(content, vm_id, vm_id_type="vm_name", datacenter=None,
cluster=None, folder=None, match_first=False):
""" UUID is unique to a VM, every other id returns the first match. """
si = content.searchIndex
vm = None
if vm_id_type == 'dns_name':
vm = si.FindByDnsName(datacenter=datacenter, dnsName=vm_id, vmSearch=True)
elif vm_id_type == 'uuid':
# Search By BIOS UUID rather than instance UUID
vm = si.FindByUuid(datacenter=datacenter, instanceUuid=False, uuid=vm_id, vmSearch=True)
elif vm_id_type == 'instance_uuid':
vm = si.FindByUuid(datacenter=datacenter, instanceUuid=True, uuid=vm_id, vmSearch=True)
elif vm_id_type == 'ip':
vm = si.FindByIp(datacenter=datacenter, ip=vm_id, vmSearch=True)
elif vm_id_type == 'vm_name':
folder = None
if cluster:
folder = cluster
elif datacenter:
folder = datacenter.hostFolder
vm = find_vm_by_name(content, vm_id, folder)
elif vm_id_type == 'inventory_path':
searchpath = folder
# get all objects for this path
f_obj = si.FindByInventoryPath(searchpath)
if f_obj:
if isinstance(f_obj, vim.Datacenter):
f_obj = f_obj.vmFolder
for c_obj in f_obj.childEntity:
if not isinstance(c_obj, vim.VirtualMachine):
continue
if c_obj.name == vm_id:
vm = c_obj
if match_first:
break
return vm
def find_vm_by_name(content, vm_name, folder=None, recurse=True):
return find_object_by_name(content, vm_name, [vim.VirtualMachine], folder=folder, recurse=recurse)
def find_host_portgroup_by_name(host, portgroup_name):
for portgroup in host.config.network.portgroup:
if portgroup.spec.name == portgroup_name:
return portgroup
return None
def compile_folder_path_for_object(vobj):
""" make a /vm/foo/bar/baz like folder path for an object """
paths = []
if isinstance(vobj, vim.Folder):
paths.append(vobj.name)
thisobj = vobj
while hasattr(thisobj, 'parent'):
thisobj = thisobj.parent
try:
moid = thisobj._moId
except AttributeError:
moid = None
if moid in ['group-d1', 'ha-folder-root']:
break
if isinstance(thisobj, vim.Folder):
paths.append(thisobj.name)
paths.reverse()
return '/' + '/'.join(paths)
def _get_vm_prop(vm, attributes):
"""Safely get a property or return None"""
result = vm
for attribute in attributes:
try:
result = getattr(result, attribute)
except (AttributeError, IndexError):
return None
return result
def gather_vm_facts(content, vm):
""" Gather facts from vim.VirtualMachine object. """
facts = {
'module_hw': True,
'hw_name': vm.config.name,
'hw_power_status': vm.summary.runtime.powerState,
'hw_guest_full_name': vm.summary.guest.guestFullName,
'hw_guest_id': vm.summary.guest.guestId,
'hw_product_uuid': vm.config.uuid,
'hw_processor_count': vm.config.hardware.numCPU,
'hw_cores_per_socket': vm.config.hardware.numCoresPerSocket,
'hw_memtotal_mb': vm.config.hardware.memoryMB,
'hw_interfaces': [],
'hw_datastores': [],
'hw_files': [],
'hw_esxi_host': None,
'hw_guest_ha_state': None,
'hw_is_template': vm.config.template,
'hw_folder': None,
'hw_version': vm.config.version,
'instance_uuid': vm.config.instanceUuid,
'guest_tools_status': _get_vm_prop(vm, ('guest', 'toolsRunningStatus')),
'guest_tools_version': _get_vm_prop(vm, ('guest', 'toolsVersion')),
'guest_question': vm.summary.runtime.question,
'guest_consolidation_needed': vm.summary.runtime.consolidationNeeded,
'ipv4': None,
'ipv6': None,
'annotation': vm.config.annotation,
'customvalues': {},
'snapshots': [],
'current_snapshot': None,
'vnc': {},
'moid': vm._moId,
'vimref': "vim.VirtualMachine:%s" % vm._moId,
}
# facts that may or may not exist
if vm.summary.runtime.host:
try:
host = vm.summary.runtime.host
facts['hw_esxi_host'] = host.summary.config.name
facts['hw_cluster'] = host.parent.name if host.parent and isinstance(host.parent, vim.ClusterComputeResource) else None
except vim.fault.NoPermission:
# User does not have read permission for the host system,
# proceed without this value. This value does not contribute or hamper
# provisioning or power management operations.
pass
if vm.summary.runtime.dasVmProtection:
facts['hw_guest_ha_state'] = vm.summary.runtime.dasVmProtection.dasProtected
datastores = vm.datastore
for ds in datastores:
facts['hw_datastores'].append(ds.info.name)
try:
files = vm.config.files
layout = vm.layout
if files:
facts['hw_files'] = [files.vmPathName]
for item in layout.snapshot:
for snap in item.snapshotFile:
if 'vmsn' in snap:
facts['hw_files'].append(snap)
for item in layout.configFile:
facts['hw_files'].append(os.path.join(os.path.dirname(files.vmPathName), item))
for item in vm.layout.logFile:
facts['hw_files'].append(os.path.join(files.logDirectory, item))
for item in vm.layout.disk:
for disk in item.diskFile:
facts['hw_files'].append(disk)
except Exception:
pass
facts['hw_folder'] = PyVmomi.get_vm_path(content, vm)
cfm = content.customFieldsManager
# Resolve custom values
for value_obj in vm.summary.customValue:
kn = value_obj.key
if cfm is not None and cfm.field:
for f in cfm.field:
if f.key == value_obj.key:
kn = f.name
# Exit the loop immediately, we found it
break
facts['customvalues'][kn] = value_obj.value
net_dict = {}
vmnet = _get_vm_prop(vm, ('guest', 'net'))
if vmnet:
for device in vmnet:
net_dict[device.macAddress] = list(device.ipAddress)
if vm.guest.ipAddress:
if ':' in vm.guest.ipAddress:
facts['ipv6'] = vm.guest.ipAddress
else:
facts['ipv4'] = vm.guest.ipAddress
ethernet_idx = 0
for entry in vm.config.hardware.device:
if not hasattr(entry, 'macAddress'):
continue
if entry.macAddress:
mac_addr = entry.macAddress
mac_addr_dash = mac_addr.replace(':', '-')
else:
mac_addr = mac_addr_dash = None
if (hasattr(entry, 'backing') and hasattr(entry.backing, 'port') and
hasattr(entry.backing.port, 'portKey') and hasattr(entry.backing.port, 'portgroupKey')):
port_group_key = entry.backing.port.portgroupKey
port_key = entry.backing.port.portKey
else:
port_group_key = None
port_key = None
factname = 'hw_eth' + str(ethernet_idx)
facts[factname] = {
'addresstype': entry.addressType,
'label': entry.deviceInfo.label,
'macaddress': mac_addr,
'ipaddresses': net_dict.get(entry.macAddress, None),
'macaddress_dash': mac_addr_dash,
'summary': entry.deviceInfo.summary,
'portgroup_portkey': port_key,
'portgroup_key': port_group_key,
}
facts['hw_interfaces'].append('eth' + str(ethernet_idx))
ethernet_idx += 1
snapshot_facts = list_snapshots(vm)
if 'snapshots' in snapshot_facts:
facts['snapshots'] = snapshot_facts['snapshots']
facts['current_snapshot'] = snapshot_facts['current_snapshot']
facts['vnc'] = get_vnc_extraconfig(vm)
return facts
def deserialize_snapshot_obj(obj):
return {'id': obj.id,
'name': obj.name,
'description': obj.description,
'creation_time': obj.createTime,
'state': obj.state}
def list_snapshots_recursively(snapshots):
snapshot_data = []
for snapshot in snapshots:
snapshot_data.append(deserialize_snapshot_obj(snapshot))
snapshot_data = snapshot_data + list_snapshots_recursively(snapshot.childSnapshotList)
return snapshot_data
def get_current_snap_obj(snapshots, snapob):
snap_obj = []
for snapshot in snapshots:
if snapshot.snapshot == snapob:
snap_obj.append(snapshot)
snap_obj = snap_obj + get_current_snap_obj(snapshot.childSnapshotList, snapob)
return snap_obj
def list_snapshots(vm):
result = {}
snapshot = _get_vm_prop(vm, ('snapshot',))
if not snapshot:
return result
if vm.snapshot is None:
return result
result['snapshots'] = list_snapshots_recursively(vm.snapshot.rootSnapshotList)
current_snapref = vm.snapshot.currentSnapshot
current_snap_obj = get_current_snap_obj(vm.snapshot.rootSnapshotList, current_snapref)
if current_snap_obj:
result['current_snapshot'] = deserialize_snapshot_obj(current_snap_obj[0])
else:
result['current_snapshot'] = dict()
return result
def get_vnc_extraconfig(vm):
result = {}
for opts in vm.config.extraConfig:
for optkeyname in ['enabled', 'ip', 'port', 'password']:
if opts.key.lower() == "remotedisplay.vnc." + optkeyname:
result[optkeyname] = opts.value
return result
def vmware_argument_spec():
return dict(
hostname=dict(type='str',
required=False,
fallback=(env_fallback, ['VMWARE_HOST']),
),
username=dict(type='str',
aliases=['user', 'admin'],
required=False,
fallback=(env_fallback, ['VMWARE_USER'])),
password=dict(type='str',
aliases=['pass', 'pwd'],
required=False,
no_log=True,
fallback=(env_fallback, ['VMWARE_PASSWORD'])),
port=dict(type='int',
default=443,
fallback=(env_fallback, ['VMWARE_PORT'])),
validate_certs=dict(type='bool',
required=False,
default=True,
fallback=(env_fallback, ['VMWARE_VALIDATE_CERTS'])
),
proxy_host=dict(type='str',
required=False,
default=None,
fallback=(env_fallback, ['VMWARE_PROXY_HOST'])),
proxy_port=dict(type='int',
required=False,
default=None,
fallback=(env_fallback, ['VMWARE_PROXY_PORT'])),
)
def connect_to_api(module, disconnect_atexit=True, return_si=False):
hostname = module.params['hostname']
username = module.params['username']
password = module.params['password']
port = module.params.get('port', 443)
validate_certs = module.params['validate_certs']
if not hostname:
module.fail_json(msg="Hostname parameter is missing."
" Please specify this parameter in task or"
" export environment variable like 'export VMWARE_HOST=ESXI_HOSTNAME'")
if not username:
module.fail_json(msg="Username parameter is missing."
" Please specify this parameter in task or"
" export environment variable like 'export VMWARE_USER=ESXI_USERNAME'")
if not password:
module.fail_json(msg="Password parameter is missing."
" Please specify this parameter in task or"
" export environment variable like 'export VMWARE_PASSWORD=ESXI_PASSWORD'")
if validate_certs and not hasattr(ssl, 'SSLContext'):
module.fail_json(msg='pyVim does not support changing verification mode with python < 2.7.9. Either update '
'python or use validate_certs=false.')
ssl_context = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
if validate_certs:
ssl_context.verify_mode = ssl.CERT_REQUIRED
ssl_context.check_hostname = True
ssl_context.load_default_certs()
service_instance = None
proxy_host = module.params.get('proxy_host')
proxy_port = module.params.get('proxy_port')
connect_args = dict(
host=hostname,
port=port,
)
if ssl_context:
connect_args.update(sslContext=ssl_context)
msg_suffix = ''
try:
if proxy_host:
msg_suffix = " [proxy: %s:%d]" % (proxy_host, proxy_port)
connect_args.update(httpProxyHost=proxy_host, httpProxyPort=proxy_port)
smart_stub = connect.SmartStubAdapter(**connect_args)
session_stub = connect.VimSessionOrientedStub(smart_stub, connect.VimSessionOrientedStub.makeUserLoginMethod(username, password))
service_instance = vim.ServiceInstance('ServiceInstance', session_stub)
else:
connect_args.update(user=username, pwd=password)
service_instance = connect.SmartConnect(**connect_args)
except vim.fault.InvalidLogin as invalid_login:
msg = "Unable to log on to vCenter or ESXi API at %s:%s " % (hostname, port)
module.fail_json(msg="%s as %s: %s" % (msg, username, invalid_login.msg) + msg_suffix)
except vim.fault.NoPermission as no_permission:
module.fail_json(msg="User %s does not have required permission"
" to log on to vCenter or ESXi API at %s:%s : %s" % (username, hostname, port, no_permission.msg))
except (requests.ConnectionError, ssl.SSLError) as generic_req_exc:
module.fail_json(msg="Unable to connect to vCenter or ESXi API at %s on TCP/%s: %s" % (hostname, port, generic_req_exc))
except vmodl.fault.InvalidRequest as invalid_request:
# Request is malformed
msg = "Failed to get a response from server %s:%s " % (hostname, port)
module.fail_json(msg="%s as request is malformed: %s" % (msg, invalid_request.msg) + msg_suffix)
except Exception as generic_exc:
msg = "Unknown error while connecting to vCenter or ESXi API at %s:%s" % (hostname, port) + msg_suffix
module.fail_json(msg="%s : %s" % (msg, generic_exc))
if service_instance is None:
msg = "Unknown error while connecting to vCenter or ESXi API at %s:%s" % (hostname, port)
module.fail_json(msg=msg + msg_suffix)
# Disabling atexit should be used in special cases only.
# Such as IP change of the ESXi host which removes the connection anyway.
# Also removal significantly speeds up the return of the module
if disconnect_atexit:
atexit.register(connect.Disconnect, service_instance)
if return_si:
return service_instance, service_instance.RetrieveContent()
return service_instance.RetrieveContent()
def get_all_objs(content, vimtype, folder=None, recurse=True):
if not folder:
folder = content.rootFolder
obj = {}
container = content.viewManager.CreateContainerView(folder, vimtype, recurse)
for managed_object_ref in container.view:
obj.update({managed_object_ref: managed_object_ref.name})
return obj
def run_command_in_guest(content, vm, username, password, program_path, program_args, program_cwd, program_env):
result = {'failed': False}
tools_status = vm.guest.toolsStatus
if (tools_status == 'toolsNotInstalled' or
tools_status == 'toolsNotRunning'):
result['failed'] = True
result['msg'] = "VMwareTools is not installed or is not running in the guest"
return result
# https://github.com/vmware/pyvmomi/blob/master/docs/vim/vm/guest/NamePasswordAuthentication.rst
creds = vim.vm.guest.NamePasswordAuthentication(
username=username, password=password
)
try:
# https://github.com/vmware/pyvmomi/blob/master/docs/vim/vm/guest/ProcessManager.rst
pm = content.guestOperationsManager.processManager
# https://www.vmware.com/support/developer/converter-sdk/conv51_apireference/vim.vm.guest.ProcessManager.ProgramSpec.html
ps = vim.vm.guest.ProcessManager.ProgramSpec(
# programPath=program,
# arguments=args
programPath=program_path,
arguments=program_args,
workingDirectory=program_cwd,
)
res = pm.StartProgramInGuest(vm, creds, ps)
result['pid'] = res
pdata = pm.ListProcessesInGuest(vm, creds, [res])
# wait for pid to finish
while not pdata[0].endTime:
time.sleep(1)
pdata = pm.ListProcessesInGuest(vm, creds, [res])
result['owner'] = pdata[0].owner
result['startTime'] = pdata[0].startTime.isoformat()
result['endTime'] = pdata[0].endTime.isoformat()
result['exitCode'] = pdata[0].exitCode
if result['exitCode'] != 0:
result['failed'] = True
result['msg'] = "program exited non-zero"
else:
result['msg'] = "program completed successfully"
except Exception as e:
result['msg'] = str(e)
result['failed'] = True
return result
def serialize_spec(clonespec):
"""Serialize a clonespec or a relocation spec"""
data = {}
attrs = dir(clonespec)
attrs = [x for x in attrs if not x.startswith('_')]
for x in attrs:
xo = getattr(clonespec, x)
if callable(xo):
continue
xt = type(xo)
if xo is None:
data[x] = None
elif isinstance(xo, vim.vm.ConfigSpec):
data[x] = serialize_spec(xo)
elif isinstance(xo, vim.vm.RelocateSpec):
data[x] = serialize_spec(xo)
elif isinstance(xo, vim.vm.device.VirtualDisk):
data[x] = serialize_spec(xo)
elif isinstance(xo, vim.vm.device.VirtualDeviceSpec.FileOperation):
data[x] = to_text(xo)
elif isinstance(xo, vim.Description):
data[x] = {
'dynamicProperty': serialize_spec(xo.dynamicProperty),
'dynamicType': serialize_spec(xo.dynamicType),
'label': serialize_spec(xo.label),
'summary': serialize_spec(xo.summary),
}
elif hasattr(xo, 'name'):
data[x] = to_text(xo) + ':' + to_text(xo.name)
elif isinstance(xo, vim.vm.ProfileSpec):
pass
elif issubclass(xt, list):
data[x] = []
for xe in xo:
data[x].append(serialize_spec(xe))
elif issubclass(xt, string_types + integer_types + (float, bool)):
if issubclass(xt, integer_types):
data[x] = int(xo)
else:
data[x] = to_text(xo)
elif issubclass(xt, bool):
data[x] = xo
elif issubclass(xt, dict):
data[to_text(x)] = {}
for k, v in xo.items():
k = to_text(k)
data[x][k] = serialize_spec(v)
else:
data[x] = str(xt)
return data
def find_host_by_cluster_datacenter(module, content, datacenter_name, cluster_name, host_name):
dc = find_datacenter_by_name(content, datacenter_name)
if dc is None:
module.fail_json(msg="Unable to find datacenter with name %s" % datacenter_name)
cluster = find_cluster_by_name(content, cluster_name, datacenter=dc)
if cluster is None:
module.fail_json(msg="Unable to find cluster with name %s" % cluster_name)
for host in cluster.host:
if host.name == host_name:
return host, cluster
return None, cluster
def set_vm_power_state(content, vm, state, force, timeout=0):
"""
Set the power status for a VM determined by the current and
requested states. force is forceful
"""
facts = gather_vm_facts(content, vm)
expected_state = state.replace('_', '').replace('-', '').lower()
current_state = facts['hw_power_status'].lower()
result = dict(
changed=False,
failed=False,
)
# Need Force
if not force and current_state not in ['poweredon', 'poweredoff']:
result['failed'] = True
result['msg'] = "Virtual Machine is in %s power state. Force is required!" % current_state
return result
# State is not already true
if current_state != expected_state:
task = None
try:
if expected_state == 'poweredoff':
task = vm.PowerOff()
elif expected_state == 'poweredon':
task = vm.PowerOn()
elif expected_state == 'restarted':
if current_state in ('poweredon', 'poweringon', 'resetting', 'poweredoff'):
task = vm.Reset()
else:
result['failed'] = True
result['msg'] = "Cannot restart virtual machine in the current state %s" % current_state
elif expected_state == 'suspended':
if current_state in ('poweredon', 'poweringon'):
task = vm.Suspend()
else:
result['failed'] = True
result['msg'] = 'Cannot suspend virtual machine in the current state %s' % current_state
elif expected_state in ['shutdownguest', 'rebootguest']:
if current_state == 'poweredon':
if vm.guest.toolsRunningStatus == 'guestToolsRunning':
if expected_state == 'shutdownguest':
task = vm.ShutdownGuest()
if timeout > 0:
result.update(wait_for_poweroff(vm, timeout))
else:
task = vm.RebootGuest()
# Set result['changed'] immediately because
# shutdown and reboot return None.
result['changed'] = True
else:
result['failed'] = True
result['msg'] = "VMware tools should be installed for guest shutdown/reboot"
else:
result['failed'] = True
result['msg'] = "Virtual machine %s must be in poweredon state for guest shutdown/reboot" % vm.name
else:
result['failed'] = True
result['msg'] = "Unsupported expected state provided: %s" % expected_state
except Exception as e:
result['failed'] = True
result['msg'] = to_text(e)
if task:
wait_for_task(task)
if task.info.state == 'error':
result['failed'] = True
result['msg'] = task.info.error.msg
else:
result['changed'] = True
# need to get new metadata if changed
result['instance'] = gather_vm_facts(content, vm)
return result
def wait_for_poweroff(vm, timeout=300):
result = dict()
interval = 15
while timeout > 0:
if vm.runtime.powerState.lower() == 'poweredoff':
break
time.sleep(interval)
timeout -= interval
else:
result['failed'] = True
result['msg'] = 'Timeout while waiting for VM power off.'
return result
class PyVmomi(object):
def __init__(self, module):
"""
Constructor
"""
if not HAS_REQUESTS:
module.fail_json(msg=missing_required_lib('requests'),
exception=REQUESTS_IMP_ERR)
if not HAS_PYVMOMI:
module.fail_json(msg=missing_required_lib('PyVmomi'),
exception=PYVMOMI_IMP_ERR)
self.module = module
self.params = module.params
self.current_vm_obj = None
self.si, self.content = connect_to_api(self.module, return_si=True)
self.custom_field_mgr = []
if self.content.customFieldsManager: # not an ESXi
self.custom_field_mgr = self.content.customFieldsManager.field
def is_vcenter(self):
"""
Check if given hostname is vCenter or ESXi host
Returns: True if given connection is with vCenter server
False if given connection is with ESXi server
"""
api_type = None
try:
api_type = self.content.about.apiType
except (vmodl.RuntimeFault, vim.fault.VimFault) as exc:
self.module.fail_json(msg="Failed to get status of vCenter server : %s" % exc.msg)
if api_type == 'VirtualCenter':
return True
elif api_type == 'HostAgent':
return False
def get_managed_objects_properties(self, vim_type, properties=None):
"""
Look up a Managed Object Reference in vCenter / ESXi Environment
:param vim_type: Type of vim object e.g, for datacenter - vim.Datacenter
:param properties: List of properties related to vim object e.g. Name
:return: local content object
"""
# Get Root Folder
root_folder = self.content.rootFolder
if properties is None:
properties = ['name']
# Create Container View with default root folder
mor = self.content.viewManager.CreateContainerView(root_folder, [vim_type], True)
# Create Traversal spec
traversal_spec = vmodl.query.PropertyCollector.TraversalSpec(
name="traversal_spec",
path='view',
skip=False,
type=vim.view.ContainerView
)
# Create Property Spec
property_spec = vmodl.query.PropertyCollector.PropertySpec(
type=vim_type, # Type of object to retrieved
all=False,
pathSet=properties
)
# Create Object Spec
object_spec = vmodl.query.PropertyCollector.ObjectSpec(
obj=mor,
skip=True,
selectSet=[traversal_spec]
)
# Create Filter Spec
filter_spec = vmodl.query.PropertyCollector.FilterSpec(
objectSet=[object_spec],
propSet=[property_spec],
reportMissingObjectsInResults=False
)
return self.content.propertyCollector.RetrieveContents([filter_spec])
# Virtual Machine related functions
def get_vm(self):
"""
Find unique virtual machine either by UUID, MoID or Name.
Returns: virtual machine object if found, else None.
"""
vm_obj = None
user_desired_path = None
use_instance_uuid = self.params.get('use_instance_uuid') or False
if self.params['uuid'] and not use_instance_uuid:
vm_obj = find_vm_by_id(self.content, vm_id=self.params['uuid'], vm_id_type="uuid")
elif self.params['uuid'] and use_instance_uuid:
vm_obj = find_vm_by_id(self.content,
vm_id=self.params['uuid'],
vm_id_type="instance_uuid")
elif self.params['name']:
objects = self.get_managed_objects_properties(vim_type=vim.VirtualMachine, properties=['name'])
vms = []
for temp_vm_object in objects:
if len(temp_vm_object.propSet) != 1:
continue
for temp_vm_object_property in temp_vm_object.propSet:
if temp_vm_object_property.val == self.params['name']:
vms.append(temp_vm_object.obj)
break
# get_managed_objects_properties may return multiple virtual machine,
# following code tries to find user desired one depending upon the folder specified.
if len(vms) > 1:
# We have found multiple virtual machines, decide depending upon folder value
if self.params['folder'] is None:
self.module.fail_json(msg="Multiple virtual machines with same name [%s] found, "
"Folder value is a required parameter to find uniqueness "
"of the virtual machine" % self.params['name'],
details="Please see documentation of the vmware_guest module "
"for folder parameter.")
# Get folder path where virtual machine is located
# User provided folder where user thinks virtual machine is present
user_folder = self.params['folder']
# User defined datacenter
user_defined_dc = self.params['datacenter']
# User defined datacenter's object
datacenter_obj = find_datacenter_by_name(self.content, self.params['datacenter'])
# Get Path for Datacenter
dcpath = compile_folder_path_for_object(vobj=datacenter_obj)
# Nested folder does not return trailing /
if not dcpath.endswith('/'):
dcpath += '/'
if user_folder in [None, '', '/']:
# User provided blank value or
# User provided only root value, we fail
self.module.fail_json(msg="vmware_guest found multiple virtual machines with same "
"name [%s], please specify folder path other than blank "
"or '/'" % self.params['name'])
elif user_folder.startswith('/vm/'):
# User provided nested folder under VMware default vm folder i.e. folder = /vm/india/finance
user_desired_path = "%s%s%s" % (dcpath, user_defined_dc, user_folder)
else:
# User defined datacenter is not nested i.e. dcpath = '/' , or
# User defined datacenter is nested i.e. dcpath = '/F0/DC0' or
# User provided folder starts with / and datacenter i.e. folder = /ha-datacenter/ or
# User defined folder starts with datacenter without '/' i.e.
# folder = DC0/vm/india/finance or
# folder = DC0/vm
user_desired_path = user_folder
for vm in vms:
# Check if user has provided same path as virtual machine
actual_vm_folder_path = self.get_vm_path(content=self.content, vm_name=vm)
if not actual_vm_folder_path.startswith("%s%s" % (dcpath, user_defined_dc)):
continue
if user_desired_path in actual_vm_folder_path:
vm_obj = vm
break
elif vms:
# Unique virtual machine found.
vm_obj = vms[0]
elif self.params['moid']:
vm_obj = VmomiSupport.templateOf('VirtualMachine')(self.params['moid'], self.si._stub)
if vm_obj:
self.current_vm_obj = vm_obj
return vm_obj
def gather_facts(self, vm):
"""
Gather facts of virtual machine.
Args:
vm: Name of virtual machine.
Returns: Facts dictionary of the given virtual machine.
"""
return gather_vm_facts(self.content, vm)
@staticmethod
def get_vm_path(content, vm_name):
"""
Find the path of virtual machine.
Args:
content: VMware content object
vm_name: virtual machine managed object
Returns: Folder of virtual machine if exists, else None
"""
folder_name = None
folder = vm_name.parent
if folder:
folder_name = folder.name
fp = folder.parent
# climb back up the tree to find our path, stop before the root folder
while fp is not None and fp.name is not None and fp != content.rootFolder:
folder_name = fp.name + '/' + folder_name
try:
fp = fp.parent
except Exception:
break
folder_name = '/' + folder_name
return folder_name
def get_vm_or_template(self, template_name=None):
"""
Find the virtual machine or virtual machine template using name
used for cloning purpose.
Args:
template_name: Name of virtual machine or virtual machine template
Returns: virtual machine or virtual machine template object
"""
template_obj = None
if not template_name:
return template_obj
if "/" in template_name:
vm_obj_path = os.path.dirname(template_name)
vm_obj_name = os.path.basename(template_name)
template_obj = find_vm_by_id(self.content, vm_obj_name, vm_id_type="inventory_path", folder=vm_obj_path)
if template_obj:
return template_obj
else:
template_obj = find_vm_by_id(self.content, vm_id=template_name, vm_id_type="uuid")
if template_obj:
return template_obj
objects = self.get_managed_objects_properties(vim_type=vim.VirtualMachine, properties=['name'])
templates = []
for temp_vm_object in objects:
if len(temp_vm_object.propSet) != 1:
continue
for temp_vm_object_property in temp_vm_object.propSet:
if temp_vm_object_property.val == template_name:
templates.append(temp_vm_object.obj)
break
if len(templates) > 1:
# We have found multiple virtual machine templates
self.module.fail_json(msg="Multiple virtual machines or templates with same name [%s] found." % template_name)
elif templates:
template_obj = templates[0]
return template_obj
# Cluster related functions
def find_cluster_by_name(self, cluster_name, datacenter_name=None):
"""
Find Cluster by name in given datacenter
Args:
cluster_name: Name of cluster name to find
datacenter_name: (optional) Name of datacenter
Returns: True if found
"""
return find_cluster_by_name(self.content, cluster_name, datacenter=datacenter_name)
def get_all_hosts_by_cluster(self, cluster_name):
"""
Get all hosts from cluster by cluster name
Args:
cluster_name: Name of cluster
Returns: List of hosts
"""
cluster_obj = self.find_cluster_by_name(cluster_name=cluster_name)
if cluster_obj:
return [host for host in cluster_obj.host]
else:
return []
# Hosts related functions
def find_hostsystem_by_name(self, host_name):
"""
Find Host by name
Args:
host_name: Name of ESXi host
Returns: True if found
"""
return find_hostsystem_by_name(self.content, hostname=host_name)
def get_all_host_objs(self, cluster_name=None, esxi_host_name=None):
"""
Get all host system managed object
Args:
cluster_name: Name of Cluster
esxi_host_name: Name of ESXi server
Returns: A list of all host system managed objects, else empty list
"""
host_obj_list = []
if not self.is_vcenter():
hosts = get_all_objs(self.content, [vim.HostSystem]).keys()
if hosts:
host_obj_list.append(list(hosts)[0])
else:
if cluster_name:
cluster_obj = self.find_cluster_by_name(cluster_name=cluster_name)
if cluster_obj:
host_obj_list = [host for host in cluster_obj.host]
else:
self.module.fail_json(changed=False, msg="Cluster '%s' not found" % cluster_name)
elif esxi_host_name:
if isinstance(esxi_host_name, str):
esxi_host_name = [esxi_host_name]
for host in esxi_host_name:
esxi_host_obj = self.find_hostsystem_by_name(host_name=host)
if esxi_host_obj:
host_obj_list.append(esxi_host_obj)
else:
self.module.fail_json(changed=False, msg="ESXi '%s' not found" % host)
return host_obj_list
def host_version_at_least(self, version=None, vm_obj=None, host_name=None):
"""
Check that the ESXi Host is at least a specific version number
Args:
vm_obj: virtual machine object, required one of vm_obj, host_name
host_name (string): ESXi host name
version (tuple): a version tuple, for example (6, 7, 0)
Returns: bool
"""
if vm_obj:
host_system = vm_obj.summary.runtime.host
elif host_name:
host_system = self.find_hostsystem_by_name(host_name=host_name)
else:
self.module.fail_json(msg='VM object or ESXi host name must be set one.')
if host_system and version:
host_version = host_system.summary.config.product.version
return StrictVersion(host_version) >= StrictVersion('.'.join(map(str, version)))
else:
self.module.fail_json(msg='Unable to get the ESXi host from vm: %s, or hostname %s,'
'or the passed ESXi version: %s is None.' % (vm_obj, host_name, version))
# Network related functions
@staticmethod
def find_host_portgroup_by_name(host, portgroup_name):
"""
Find Portgroup on given host
Args:
host: Host config object
portgroup_name: Name of portgroup
Returns: True if found else False
"""
for portgroup in host.config.network.portgroup:
if portgroup.spec.name == portgroup_name:
return portgroup
return False
def get_all_port_groups_by_host(self, host_system):
"""
Get all Port Group by host
Args:
host_system: Name of Host System
Returns: List of Port Group Spec
"""
pgs_list = []
for pg in host_system.config.network.portgroup:
pgs_list.append(pg)
return pgs_list
def find_network_by_name(self, network_name=None):
"""
Get network specified by name
Args:
network_name: Name of network
Returns: List of network managed objects
"""
networks = []
if not network_name:
return networks
objects = self.get_managed_objects_properties(vim_type=vim.Network, properties=['name'])
for temp_vm_object in objects:
if len(temp_vm_object.propSet) != 1:
continue
for temp_vm_object_property in temp_vm_object.propSet:
if temp_vm_object_property.val == network_name:
networks.append(temp_vm_object.obj)
break
return networks
def network_exists_by_name(self, network_name=None):
"""
Check if network with a specified name exists or not
Args:
network_name: Name of network
Returns: True if network exists else False
"""
ret = False
if not network_name:
return ret
ret = True if self.find_network_by_name(network_name=network_name) else False
return ret
# Datacenter
def find_datacenter_by_name(self, datacenter_name):
"""
Get datacenter managed object by name
Args:
datacenter_name: Name of datacenter
Returns: datacenter managed object if found else None
"""
return find_datacenter_by_name(self.content, datacenter_name=datacenter_name)
def find_datastore_by_name(self, datastore_name):
"""
Get datastore managed object by name
Args:
datastore_name: Name of datastore
Returns: datastore managed object if found else None
"""
return find_datastore_by_name(self.content, datastore_name=datastore_name)
# Datastore cluster
def find_datastore_cluster_by_name(self, datastore_cluster_name):
"""
Get datastore cluster managed object by name
Args:
datastore_cluster_name: Name of datastore cluster
Returns: Datastore cluster managed object if found else None
"""
data_store_clusters = get_all_objs(self.content, [vim.StoragePod])
for dsc in data_store_clusters:
if dsc.name == datastore_cluster_name:
return dsc
return None
# Resource pool
def find_resource_pool_by_name(self, resource_pool_name, folder=None):
"""
Get resource pool managed object by name
Args:
resource_pool_name: Name of resource pool
Returns: Resource pool managed object if found else None
"""
if not folder:
folder = self.content.rootFolder
resource_pools = get_all_objs(self.content, [vim.ResourcePool], folder=folder)
for rp in resource_pools:
if rp.name == resource_pool_name:
return rp
return None
def find_resource_pool_by_cluster(self, resource_pool_name='Resources', cluster=None):
"""
Get resource pool managed object by cluster object
Args:
resource_pool_name: Name of resource pool
cluster: Managed object of cluster
Returns: Resource pool managed object if found else None
"""
desired_rp = None
if not cluster:
return desired_rp
if resource_pool_name != 'Resources':
# Resource pool name is different than default 'Resources'
resource_pools = cluster.resourcePool.resourcePool
if resource_pools:
for rp in resource_pools:
if rp.name == resource_pool_name:
desired_rp = rp
break
else:
desired_rp = cluster.resourcePool
return desired_rp
# VMDK stuff
def vmdk_disk_path_split(self, vmdk_path):
"""
Takes a string in the format
[datastore_name] path/to/vm_name.vmdk
Returns a tuple with multiple strings:
1. datastore_name: The name of the datastore (without brackets)
2. vmdk_fullpath: The "path/to/vm_name.vmdk" portion
3. vmdk_filename: The "vm_name.vmdk" portion of the string (os.path.basename equivalent)
4. vmdk_folder: The "path/to/" portion of the string (os.path.dirname equivalent)
"""
try:
datastore_name = re.match(r'^\[(.*?)\]', vmdk_path, re.DOTALL).groups()[0]
vmdk_fullpath = re.match(r'\[.*?\] (.*)$', vmdk_path).groups()[0]
vmdk_filename = os.path.basename(vmdk_fullpath)
vmdk_folder = os.path.dirname(vmdk_fullpath)
return datastore_name, vmdk_fullpath, vmdk_filename, vmdk_folder
except (IndexError, AttributeError) as e:
self.module.fail_json(msg="Bad path '%s' for filename disk vmdk image: %s" % (vmdk_path, to_native(e)))
def find_vmdk_file(self, datastore_obj, vmdk_fullpath, vmdk_filename, vmdk_folder):
"""
Return vSphere file object or fail_json
Args:
datastore_obj: Managed object of datastore
vmdk_fullpath: Path of VMDK file e.g., path/to/vm/vmdk_filename.vmdk
vmdk_filename: Name of vmdk e.g., VM0001_1.vmdk
vmdk_folder: Base dir of VMDK e.g, path/to/vm
"""
browser = datastore_obj.browser
datastore_name = datastore_obj.name
datastore_name_sq = "[" + datastore_name + "]"
if browser is None:
self.module.fail_json(msg="Unable to access browser for datastore %s" % datastore_name)
detail_query = vim.host.DatastoreBrowser.FileInfo.Details(
fileOwner=True,
fileSize=True,
fileType=True,
modification=True
)
search_spec = vim.host.DatastoreBrowser.SearchSpec(
details=detail_query,
matchPattern=[vmdk_filename],
searchCaseInsensitive=True,
)
search_res = browser.SearchSubFolders(
datastorePath=datastore_name_sq,
searchSpec=search_spec
)
changed = False
vmdk_path = datastore_name_sq + " " + vmdk_fullpath
try:
changed, result = wait_for_task(search_res)
except TaskError as task_e:
self.module.fail_json(msg=to_native(task_e))
if not changed:
self.module.fail_json(msg="No valid disk vmdk image found for path %s" % vmdk_path)
target_folder_paths = [
datastore_name_sq + " " + vmdk_folder + '/',
datastore_name_sq + " " + vmdk_folder,
]
for file_result in search_res.info.result:
for f in getattr(file_result, 'file'):
if f.path == vmdk_filename and file_result.folderPath in target_folder_paths:
return f
self.module.fail_json(msg="No vmdk file found for path specified [%s]" % vmdk_path)
#
# Conversion to JSON
#
def _deepmerge(self, d, u):
"""
Deep merges u into d.
Credit:
https://bit.ly/2EDOs1B (stackoverflow question 3232943)
License:
cc-by-sa 3.0 (https://creativecommons.org/licenses/by-sa/3.0/)
Changes:
using collections_compat for compatibility
Args:
- d (dict): dict to merge into
- u (dict): dict to merge into d
Returns:
dict, with u merged into d
"""
for k, v in iteritems(u):
if isinstance(v, collections_compat.Mapping):
d[k] = self._deepmerge(d.get(k, {}), v)
else:
d[k] = v
return d
def _extract(self, data, remainder):
"""
This is used to break down dotted properties for extraction.
Args:
- data (dict): result of _jsonify on a property
- remainder: the remainder of the dotted property to select
Return:
dict
"""
result = dict()
if '.' not in remainder:
result[remainder] = data[remainder]
return result
key, remainder = remainder.split('.', 1)
result[key] = self._extract(data[key], remainder)
return result
def _jsonify(self, obj):
"""
Convert an object from pyVmomi into JSON.
Args:
- obj (object): vim object
Return:
dict
"""
return json.loads(json.dumps(obj, cls=VmomiSupport.VmomiJSONEncoder,
sort_keys=True, strip_dynamic=True))
def to_json(self, obj, properties=None):
"""
Convert a vSphere (pyVmomi) Object into JSON. This is a deep
transformation. The list of properties is optional - if not
provided then all properties are deeply converted. The resulting
JSON is sorted to improve human readability.
Requires upstream support from pyVmomi > 6.7.1
(https://github.com/vmware/pyvmomi/pull/732)
Args:
- obj (object): vim object
- properties (list, optional): list of properties following
the property collector specification, for example:
["config.hardware.memoryMB", "name", "overallStatus"]
default is a complete object dump, which can be large
Return:
dict
"""
if not HAS_PYVMOMIJSON:
self.module.fail_json(msg='The installed version of pyvmomi lacks JSON output support; need pyvmomi>6.7.1')
result = dict()
if properties:
for prop in properties:
try:
if '.' in prop:
key, remainder = prop.split('.', 1)
tmp = dict()
tmp[key] = self._extract(self._jsonify(getattr(obj, key)), remainder)
self._deepmerge(result, tmp)
else:
result[prop] = self._jsonify(getattr(obj, prop))
# To match gather_vm_facts output
prop_name = prop
if prop.lower() == '_moid':
prop_name = 'moid'
elif prop.lower() == '_vimref':
prop_name = 'vimref'
result[prop_name] = result[prop]
except (AttributeError, KeyError):
self.module.fail_json(msg="Property '{0}' not found.".format(prop))
else:
result = self._jsonify(obj)
return result
def get_folder_path(self, cur):
full_path = '/' + cur.name
while hasattr(cur, 'parent') and cur.parent:
if cur.parent == self.content.rootFolder:
break
cur = cur.parent
full_path = '/' + cur.name + full_path
return full_path
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,541 |
vmware_guest: autoselect_datastore=True should select a datastore reachable on the ESXi
|
##### SUMMARY
The following snippet from https://github.com/ansible/ansible/blob/devel/test/integration/targets/vmware_guest/tasks/create_d1_c1_f0.yml uses `autoselect_datastore: True`.
```yaml
- name: create new VMs
vmware_guest:
validate_certs: False
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: newvm_1
#template: "{{ item|basename }}"
guest_id: centos64Guest
datacenter: "{{ dc1 }}"
hardware:
num_cpus: 1
num_cpu_cores_per_socket: 1
memory_mb: 256
hotadd_memory: true
hotadd_cpu: false
max_connections: 10
disk:
- size: 1gb
type: thin
autoselect_datastore: True
state: poweredoff
folder: F0
```
One of my two datastores is dedicated to ISO images and is therefore mounted readOnly, but it still gets selected consistently. I use the following hack to avoid the problem:
```diff
diff --git a/lib/ansible/modules/cloud/vmware/vmware_guest.py b/lib/ansible/modules/cloud/vmware/vmware_guest.py
index 6a63e97798..3648e3e87f 100644
--- a/lib/ansible/modules/cloud/vmware/vmware_guest.py
+++ b/lib/ansible/modules/cloud/vmware/vmware_guest.py
@@ -1925,6 +1925,15 @@ class PyVmomiHelper(PyVmomi):
datastore_freespace = 0
for ds in datastores:
+ is_readonly = False
+ for h in ds.host:
+ if h.mountInfo.accessMode == 'readOnly':
+ is_readonly = True
+ break
+
+ if is_readonly:
+ continue
+
if (ds.summary.freeSpace > datastore_freespace) or (ds.summary.freeSpace == datastore_freespace and not datastore):
# If datastore field is provided, filter destination datastores
if 'datastore' in self.params['disk'][0] and \
```
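The readOnly check in the diff above can be sketched as a standalone helper. The namedtuple mocks below are only stand-ins for real pyVmomi datastore objects, which expose the mount access mode as `ds.host[i].mountInfo.accessMode`:

```python
from collections import namedtuple

# Minimal stand-ins for pyVmomi datastore/host-mount objects (assumption:
# real objects expose ds.host[i].mountInfo.accessMode as used in the diff).
MountInfo = namedtuple('MountInfo', ['accessMode'])
HostMount = namedtuple('HostMount', ['mountInfo'])
Datastore = namedtuple('Datastore', ['name', 'host'])

def writable_datastores(datastores):
    """Drop any datastore that is mounted readOnly on at least one host."""
    usable = []
    for ds in datastores:
        if any(h.mountInfo.accessMode == 'readOnly' for h in ds.host):
            continue
        usable.append(ds)
    return usable

iso_ds = Datastore('iso-store', [HostMount(MountInfo('readOnly'))])
vm_ds = Datastore('vm-store', [HostMount(MountInfo('readWrite'))])
print([ds.name for ds in writable_datastores([iso_ds, vm_ds])])  # ['vm-store']
```

With this pre-filter in place, the existing free-space comparison would only ever consider datastores the ESXi host can actually write to.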
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware_guest
|
https://github.com/ansible/ansible/issues/58541
|
https://github.com/ansible/ansible/pull/58872
|
57dc7ec265bbc741126fa46e44ff3bb6adae5624
|
647b78a09cc45df0c25eaf794c06915f3e2ee9c5
| 2019-06-29T02:40:20Z |
python
| 2019-08-09T02:26:52Z |
lib/ansible/modules/cloud/vmware/vmware_guest.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# This module is also sponsored by E.T.A.I. (www.etai.fr)
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: vmware_guest
short_description: Manages virtual machines in vCenter
description: >
This module can be used to create new virtual machines from templates or other virtual machines,
manage power state of virtual machine such as power on, power off, suspend, shutdown, reboot, restart etc.,
modify various virtual machine components like network, disk, customization etc.,
rename a virtual machine and remove a virtual machine with associated components.
version_added: '2.2'
author:
- Loic Blot (@nerzhul) <[email protected]>
- Philippe Dellaert (@pdellaert) <[email protected]>
- Abhijeet Kasurde (@Akasurde) <[email protected]>
requirements:
- python >= 2.6
- PyVmomi
notes:
- Please make sure that the user used for vmware_guest has the correct level of privileges.
- For example, following is the list of minimum privileges required by users to create virtual machines.
- " DataStore > Allocate Space"
- " Virtual Machine > Configuration > Add New Disk"
- " Virtual Machine > Configuration > Add or Remove Device"
- " Virtual Machine > Inventory > Create New"
- " Network > Assign Network"
- " Resource > Assign Virtual Machine to Resource Pool"
- "Module may require additional privileges as well, which may be required for gathering facts - e.g. ESXi configurations."
- Tested on vSphere 5.5, 6.0, 6.5 and 6.7
- Use SCSI disks instead of IDE when you want to expand online disks by specifying a SCSI controller.
- "For additional information please visit Ansible VMware community wiki - U(https://github.com/ansible/community/wiki/VMware)."
options:
state:
description:
- Specify the state the virtual machine should be in.
- 'If C(state) is set to C(present) and virtual machine exists, ensure the virtual machine
configurations conforms to task arguments.'
- 'If C(state) is set to C(absent) and virtual machine exists, then the specified virtual machine
is removed with its associated components.'
- 'If C(state) is set to one of the following C(poweredon), C(poweredoff), C(present), C(restarted), C(suspended)
and the virtual machine does not exist, then the virtual machine is deployed with the given parameters.'
- 'If C(state) is set to C(poweredon) and virtual machine exists with powerstate other than powered on,
then the specified virtual machine is powered on.'
- 'If C(state) is set to C(poweredoff) and virtual machine exists with powerstate other than powered off,
then the specified virtual machine is powered off.'
- 'If C(state) is set to C(restarted) and virtual machine exists, then the virtual machine is restarted.'
- 'If C(state) is set to C(suspended) and virtual machine exists, then the virtual machine is set to suspended mode.'
- 'If C(state) is set to C(shutdownguest) and virtual machine exists, then the virtual machine is shutdown.'
- 'If C(state) is set to C(rebootguest) and virtual machine exists, then the virtual machine is rebooted.'
default: present
choices: [ present, absent, poweredon, poweredoff, restarted, suspended, shutdownguest, rebootguest ]
name:
description:
- Name of the virtual machine to work with.
- Virtual machine names in vCenter are not necessarily unique, which may be problematic, see C(name_match).
- 'If multiple virtual machines with the same name exist, then C(folder) is a required parameter to
identify the virtual machine uniquely.'
- This parameter is required, if C(state) is set to C(poweredon), C(poweredoff), C(present), C(restarted), C(suspended)
and the virtual machine does not exist.
- This parameter is case sensitive.
required: yes
name_match:
description:
- If multiple virtual machines match the name, use the first or last one found.
default: 'first'
choices: [ first, last ]
uuid:
description:
- UUID of the virtual machine to manage if known, this is VMware's unique identifier.
- This is required if C(name) is not supplied.
- If the virtual machine does not exist, then this parameter is ignored.
- Please note that a supplied UUID will be ignored on virtual machine creation, as VMware creates the UUID internally.
use_instance_uuid:
description:
- Whether to use the VMware instance UUID rather than the BIOS UUID.
default: no
type: bool
version_added: '2.8'
template:
description:
- Template or existing virtual machine used to create new virtual machine.
- If this value is not set, virtual machine is created without using a template.
- If the virtual machine already exists, this parameter will be ignored.
- This parameter is case sensitive.
- You can also specify a template or VM UUID to identify the source. version_added 2.8. Use C(hw_product_uuid) from M(vmware_guest_facts) as UUID value.
- From version 2.8 onwards, absolute path to virtual machine or template can be used.
aliases: [ 'template_src' ]
is_template:
description:
- Flag the instance as a template.
- This will mark the given virtual machine as template.
default: 'no'
type: bool
version_added: '2.3'
folder:
description:
- Destination folder, absolute path to find an existing guest or create the new guest.
- The folder should include the datacenter. ESX's datacenter is ha-datacenter.
- This parameter is case sensitive.
- This parameter is required, while deploying new virtual machine. version_added 2.5.
- 'If multiple machines are found with the same name, this parameter is used to identify
uniqueness of the virtual machine. version_added 2.5'
- 'Examples:'
- ' folder: /ha-datacenter/vm'
- ' folder: ha-datacenter/vm'
- ' folder: /datacenter1/vm'
- ' folder: datacenter1/vm'
- ' folder: /datacenter1/vm/folder1'
- ' folder: datacenter1/vm/folder1'
- ' folder: /folder1/datacenter1/vm'
- ' folder: folder1/datacenter1/vm'
- ' folder: /folder1/datacenter1/vm/folder2'
hardware:
description:
- Manage virtual machine's hardware attributes.
- All parameters case sensitive.
- 'Valid attributes are:'
- ' - C(hotadd_cpu) (boolean): Allow virtual CPUs to be added while the virtual machine is running.'
- ' - C(hotremove_cpu) (boolean): Allow virtual CPUs to be removed while the virtual machine is running.
version_added: 2.5'
- ' - C(hotadd_memory) (boolean): Allow memory to be added while the virtual machine is running.'
- ' - C(memory_mb) (integer): Amount of memory in MB.'
- ' - C(nested_virt) (bool): Enable nested virtualization. version_added: 2.5'
- ' - C(num_cpus) (integer): Number of CPUs.'
- ' - C(num_cpu_cores_per_socket) (integer): Number of Cores Per Socket.'
- " C(num_cpus) must be a multiple of C(num_cpu_cores_per_socket).
For example to create a VM with 2 sockets of 4 cores, specify C(num_cpus): 8 and C(num_cpu_cores_per_socket): 4"
- ' - C(scsi) (string): Valid values are C(buslogic), C(lsilogic), C(lsilogicsas) and C(paravirtual) (default).'
- " - C(memory_reservation_lock) (boolean): If set true, memory resource reservation for the virtual machine
will always be equal to the virtual machine's memory size. version_added: 2.5"
- ' - C(max_connections) (integer): Maximum number of active remote display connections for the virtual machines.
version_added: 2.5.'
- ' - C(mem_limit) (integer): The memory utilization of a virtual machine will not exceed this limit. Unit is MB.
version_added: 2.5'
- ' - C(mem_reservation) (integer): The amount of memory resource that is guaranteed available to the virtual
machine. Unit is MB. C(memory_reservation) is alias to this. version_added: 2.5'
- ' - C(cpu_limit) (integer): The CPU utilization of a virtual machine will not exceed this limit. Unit is MHz.
version_added: 2.5'
- ' - C(cpu_reservation) (integer): The amount of CPU resource that is guaranteed available to the virtual machine.
Unit is MHz. version_added: 2.5'
- ' - C(version) (integer): The Virtual machine hardware versions. Default is 10 (ESXi 5.5 and onwards).
Please check VMware documentation for correct virtual machine hardware version.
Incorrect hardware version may lead to failure in deployment. If hardware version is already equal to the given
version then no action is taken. version_added: 2.6'
- ' - C(boot_firmware) (string): Choose which firmware should be used to boot the virtual machine.
Allowed values are "bios" and "efi". version_added: 2.7'
- ' - C(virt_based_security) (bool): Enable Virtualization Based Security feature for Windows 10.
(Support from Virtual machine hardware version 14, Guest OS Windows 10 64 bit, Windows Server 2016)'
guest_id:
description:
- Set the guest ID.
- This parameter is case sensitive.
- 'Examples:'
- " virtual machine with RHEL7 64 bit, will be 'rhel7_64Guest'"
- " virtual machine with CensOS 64 bit, will be 'centos64Guest'"
- " virtual machine with Ubuntu 64 bit, will be 'ubuntu64Guest'"
- This field is required when creating a virtual machine, not required when creating from the template.
- >
Valid values are referenced here:
U(https://code.vmware.com/apis/358/vsphere#/doc/vim.vm.GuestOsDescriptor.GuestOsIdentifier.html)
version_added: '2.3'
disk:
description:
- A list of disks to add.
- This parameter is case sensitive.
- Shrinking disks is not supported.
- Removing existing disks of the virtual machine is not supported.
- 'Valid attributes are:'
- ' - C(size_[tb,gb,mb,kb]) (integer): Disk storage size in specified unit.'
- ' - C(type) (string): Valid values are:'
- ' - C(thin) thin disk'
- ' - C(eagerzeroedthick) eagerzeroedthick disk, added in version 2.5'
- ' Default: C(None) thick disk, no eagerzero.'
- ' - C(datastore) (string): The name of datastore which will be used for the disk. If C(autoselect_datastore) is set to True,
then will select the less used datastore whose name contains this "disk.datastore" string.'
- ' - C(filename) (string): Existing disk image to be used. Filename must already exist on the datastore.'
- ' Specify filename string in C([datastore_name] path/to/file.vmdk) format. Added in version 2.8.'
- ' - C(autoselect_datastore) (bool): select the less used datastore. "disk.datastore" and "disk.autoselect_datastore"
will not be used if C(datastore) is specified outside this C(disk) configuration.'
- ' - C(disk_mode) (string): Type of disk mode. Added in version 2.6'
- ' - Available options are :'
- ' - C(persistent): Changes are immediately and permanently written to the virtual disk. This is default.'
- ' - C(independent_persistent): Same as persistent, but not affected by snapshots.'
- ' - C(independent_nonpersistent): Changes to virtual disk are made to a redo log and discarded at power off, but not affected by snapshots.'
cdrom:
description:
- A CD-ROM configuration for the virtual machine.
- 'Valid attributes are:'
- ' - C(type) (string): The type of CD-ROM, valid options are C(none), C(client) or C(iso). With C(none) the CD-ROM will be disconnected but present.'
- ' - C(iso_path) (string): The datastore path to the ISO file to use, in the form of C([datastore1] path/to/file.iso). Required if type is set C(iso).'
version_added: '2.5'
resource_pool:
description:
- Use the given resource pool for virtual machine operation.
- This parameter is case sensitive.
- Resource pool should be child of the selected host parent.
version_added: '2.3'
wait_for_ip_address:
description:
- Wait until vCenter detects an IP address for the virtual machine.
- This requires vmware-tools (vmtoolsd) to properly work after creation.
- "vmware-tools needs to be installed on the given virtual machine in order to work with this parameter."
default: 'no'
type: bool
wait_for_customization:
description:
- Wait until vCenter detects all guest customizations as successfully completed.
- When enabled, the VM will automatically be powered on.
default: 'no'
type: bool
version_added: '2.8'
state_change_timeout:
description:
- If the C(state) is set to C(shutdownguest), by default the module will return immediately after sending the shutdown signal.
- If this argument is set to a positive integer, the module will instead wait for the virtual machine to reach the poweredoff state.
- The value sets a timeout in seconds for the module to wait for the state change.
default: 0
version_added: '2.6'
snapshot_src:
description:
- Name of the existing snapshot to use to create a clone of a virtual machine.
- This parameter is case sensitive.
- While creating linked clone using C(linked_clone) parameter, this parameter is required.
version_added: '2.4'
linked_clone:
description:
- Whether to create a linked clone from the snapshot specified.
- If specified, then C(snapshot_src) is a required parameter.
default: 'no'
type: bool
version_added: '2.4'
force:
description:
- Ignore warnings and complete the actions.
- This parameter is useful while removing a virtual machine which is in powered on state.
- 'This module reflects the VMware vCenter API and UI workflow, as such, in some cases the `force` flag will
be mandatory to perform the action to ensure you are certain the action has to be taken, no matter what the consequence.
This is specifically the case for removing a powered-on virtual machine when C(state) is set to C(absent).'
default: 'no'
type: bool
datacenter:
description:
- Destination datacenter for the deploy operation.
- This parameter is case sensitive.
default: ha-datacenter
cluster:
description:
- The cluster name where the virtual machine will run.
- This is a required parameter, if C(esxi_hostname) is not set.
- C(esxi_hostname) and C(cluster) are mutually exclusive parameters.
- This parameter is case sensitive.
version_added: '2.3'
esxi_hostname:
description:
- The ESXi hostname where the virtual machine will run.
- This is a required parameter, if C(cluster) is not set.
- C(esxi_hostname) and C(cluster) are mutually exclusive parameters.
- This parameter is case sensitive.
annotation:
description:
- A note or annotation to include in the virtual machine.
version_added: '2.3'
customvalues:
description:
- Define a list of custom values to set on virtual machine.
- A custom value object takes two fields C(key) and C(value).
- Incorrect key and values will be ignored.
version_added: '2.3'
networks:
description:
- A list of networks (in the order of the NICs).
- Removing NICs is not allowed while reconfiguring the virtual machine.
- All parameters and VMware object names are case sensitive.
- 'One of the below parameters is required per entry:'
- ' - C(name) (string): Name of the portgroup or distributed virtual portgroup for this interface.
When specifying distributed virtual portgroup make sure given C(esxi_hostname) or C(cluster) is associated with it.'
- ' - C(vlan) (integer): VLAN number for this interface.'
- 'Optional parameters per entry (used for virtual hardware):'
- ' - C(device_type) (string): Virtual network device (one of C(e1000), C(e1000e), C(pcnet32), C(vmxnet2), C(vmxnet3) (default), C(sriov)).'
- ' - C(mac) (string): Customize MAC address.'
- ' - C(dvswitch_name) (string): Name of the distributed vSwitch.
This value is required if multiple distributed portgroups exists with the same name. version_added 2.7'
- ' - C(start_connected) (bool): Indicates whether the virtual network adapter starts connected when the associated virtual machine powers on. version_added: 2.5'
- 'Optional parameters per entry (used for OS customization):'
- ' - C(type) (string): Type of IP assignment (either C(dhcp) or C(static)). C(dhcp) is default.'
- ' - C(ip) (string): Static IP address (implies C(type: static)).'
- ' - C(netmask) (string): Static netmask required for C(ip).'
- ' - C(gateway) (string): Static gateway.'
- ' - C(dns_servers) (string): DNS servers for this network interface (Windows).'
- ' - C(domain) (string): Domain name for this network interface (Windows).'
- ' - C(wake_on_lan) (bool): Indicates if wake-on-LAN is enabled on this virtual network adapter. version_added: 2.5'
- ' - C(allow_guest_control) (bool): Enables guest control over whether the connectable device is connected. version_added: 2.5'
version_added: '2.3'
customization:
description:
- Parameters for OS customization when cloning from the template or the virtual machine, or apply to the existing virtual machine directly.
- Not all operating systems are supported for customization with every vCenter version,
please check the VMware documentation for the respective OS customization support.
- For supported customization operating system matrix, (see U(http://partnerweb.vmware.com/programs/guestOS/guest-os-customization-matrix.pdf))
- All parameters and VMware object names are case sensitive.
- Linux-based OSes require the Perl package to be installed for OS customizations.
- 'Common parameters (Linux/Windows):'
- ' - C(existing_vm) (bool): If set to C(True), do OS customization on the specified virtual machine directly.
If set to C(False) or not specified, do OS customization when cloning from the template or the virtual machine. version_added: 2.8'
- ' - C(dns_servers) (list): List of DNS servers to configure.'
- ' - C(dns_suffix) (list): List of domain suffixes, also known as DNS search path (default: C(domain) parameter).'
- ' - C(domain) (string): DNS domain name to use.'
- ' - C(hostname) (string): Computer hostname (default: shortened C(name) parameter). Allowed characters are alphanumeric (uppercase and lowercase)
and minus, rest of the characters are dropped as per RFC 952.'
- 'Parameters related to Linux customization:'
- ' - C(timezone) (string): Timezone (See List of supported time zones for different vSphere versions in Linux/Unix
systems (2145518) U(https://kb.vmware.com/s/article/2145518)). version_added: 2.9'
- ' - C(hwclockUTC) (bool): Specifies whether the hardware clock is in UTC or local time.
True when the hardware clock is in UTC, False when the hardware clock is in local time. version_added: 2.9'
- 'Parameters related to Windows customization:'
- ' - C(autologon) (bool): Auto logon after virtual machine customization (default: False).'
- ' - C(autologoncount) (int): Number of autologon after reboot (default: 1).'
- ' - C(domainadmin) (string): User used to join in AD domain (mandatory with C(joindomain)).'
- ' - C(domainadminpassword) (string): Password used to join in AD domain (mandatory with C(joindomain)).'
- ' - C(fullname) (string): Server owner name (default: Administrator).'
- ' - C(joindomain) (string): AD domain to join (Not compatible with C(joinworkgroup)).'
- ' - C(joinworkgroup) (string): Workgroup to join (Not compatible with C(joindomain), default: WORKGROUP).'
- ' - C(orgname) (string): Organisation name (default: ACME).'
- ' - C(password) (string): Local administrator password.'
- ' - C(productid) (string): Product ID.'
- ' - C(runonce) (list): List of commands to run at first user logon.'
- ' - C(timezone) (int): Timezone (See U(https://msdn.microsoft.com/en-us/library/ms912391.aspx)).'
version_added: '2.3'
vapp_properties:
description:
- A list of vApp properties
- 'For full list of attributes and types refer to: U(https://github.com/vmware/pyvmomi/blob/master/docs/vim/vApp/PropertyInfo.rst)'
- 'Basic attributes are:'
- ' - C(id) (string): Property id - required.'
- ' - C(value) (string): Property value.'
- ' - C(type) (string): Value type, string type by default.'
- ' - C(operation): C(remove): This attribute is required only when removing properties.'
version_added: '2.6'
customization_spec:
description:
- Unique name identifying the requested customization specification.
- This parameter is case sensitive.
- If set, then overrides C(customization) parameter values.
version_added: '2.6'
datastore:
description:
- Specify datastore or datastore cluster to provision virtual machine.
- 'This parameter takes precedence over "disk.datastore" parameter.'
- 'This parameter can be used to override datastore or datastore cluster setting of the virtual machine when deployed
from the template.'
- Please see example for more usage.
version_added: '2.7'
convert:
description:
- Specify convert disk type while cloning template or virtual machine.
choices: [ thin, thick, eagerzeroedthick ]
version_added: '2.8'
extends_documentation_fragment: vmware.documentation
'''
EXAMPLES = r'''
- name: Create a virtual machine on given ESXi hostname
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
folder: /DC1/vm/
name: test_vm_0001
state: poweredon
guest_id: centos64Guest
# This is hostname of particular ESXi server on which user wants VM to be deployed
esxi_hostname: "{{ esxi_hostname }}"
disk:
- size_gb: 10
type: thin
datastore: datastore1
hardware:
memory_mb: 512
num_cpus: 4
scsi: paravirtual
networks:
- name: VM Network
mac: aa:bb:dd:aa:00:14
ip: 10.10.10.100
netmask: 255.255.255.0
device_type: vmxnet3
wait_for_ip_address: yes
delegate_to: localhost
register: deploy_vm
- name: Create a virtual machine from a template
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
folder: /testvms
name: testvm_2
state: poweredon
template: template_el7
disk:
- size_gb: 10
type: thin
datastore: g73_datastore
hardware:
memory_mb: 512
num_cpus: 6
num_cpu_cores_per_socket: 3
scsi: paravirtual
memory_reservation_lock: True
mem_limit: 8096
mem_reservation: 4096
cpu_limit: 8096
cpu_reservation: 4096
max_connections: 5
hotadd_cpu: True
hotremove_cpu: True
hotadd_memory: False
version: 12 # Hardware version of virtual machine
boot_firmware: "efi"
cdrom:
type: iso
iso_path: "[datastore1] livecd.iso"
networks:
- name: VM Network
mac: aa:bb:dd:aa:00:14
wait_for_ip_address: yes
delegate_to: localhost
register: deploy
- name: Clone a virtual machine from Windows template and customize
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
datacenter: datacenter1
cluster: cluster
name: testvm-2
template: template_windows
networks:
- name: VM Network
ip: 192.168.1.100
netmask: 255.255.255.0
gateway: 192.168.1.1
mac: aa:bb:dd:aa:00:14
domain: my_domain
dns_servers:
- 192.168.1.1
- 192.168.1.2
- vlan: 1234
type: dhcp
customization:
autologon: yes
dns_servers:
- 192.168.1.1
- 192.168.1.2
domain: my_domain
password: new_vm_password
runonce:
- powershell.exe -ExecutionPolicy Unrestricted -File C:\Windows\Temp\ConfigureRemotingForAnsible.ps1 -ForceNewSSLCert -EnableCredSSP
delegate_to: localhost
- name: Clone a virtual machine from Linux template and customize
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
datacenter: "{{ datacenter }}"
state: present
folder: /DC1/vm
template: "{{ template }}"
name: "{{ vm_name }}"
cluster: DC1_C1
networks:
- name: VM Network
ip: 192.168.10.11
netmask: 255.255.255.0
wait_for_ip_address: True
customization:
domain: "{{ guest_domain }}"
dns_servers:
- 8.9.9.9
- 7.8.8.9
dns_suffix:
- example.com
- example2.com
delegate_to: localhost
- name: Rename a virtual machine (requires the virtual machine's uuid)
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
uuid: "{{ vm_uuid }}"
name: new_name
state: present
delegate_to: localhost
- name: Remove a virtual machine by uuid
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
uuid: "{{ vm_uuid }}"
state: absent
delegate_to: localhost
- name: Manipulate vApp properties
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
name: vm_name
state: present
vapp_properties:
- id: remoteIP
category: Backup
label: Backup server IP
type: str
value: 10.10.10.1
- id: old_property
operation: remove
delegate_to: localhost
- name: Set powerstate of a virtual machine to poweroff by using UUID
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
uuid: "{{ vm_uuid }}"
state: poweredoff
delegate_to: localhost
- name: Deploy a virtual machine in a datastore different from the datastore of the template
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: "{{ vm_name }}"
state: present
template: "{{ template_name }}"
# The datastore here can differ from the one which holds the template
datastore: "{{ virtual_machine_datastore }}"
hardware:
memory_mb: 512
num_cpus: 2
scsi: paravirtual
delegate_to: localhost
'''
RETURN = r'''
instance:
description: metadata about the new virtual machine
returned: always
type: dict
sample: None
'''
import re
import time
import string
HAS_PYVMOMI = False
try:
from pyVmomi import vim, vmodl, VmomiSupport
HAS_PYVMOMI = True
except ImportError:
pass
from random import randint
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.network import is_mac
from ansible.module_utils._text import to_text, to_native
from ansible.module_utils.vmware import (find_obj, gather_vm_facts, get_all_objs,
compile_folder_path_for_object, serialize_spec,
vmware_argument_spec, set_vm_power_state, PyVmomi,
find_dvs_by_name, find_dvspg_by_name, wait_for_vm_ip,
wait_for_task, TaskError)
class PyVmomiDeviceHelper(object):
""" This class is a helper to create easily VMware Objects for PyVmomiHelper """
def __init__(self, module):
self.module = module
self.next_disk_unit_number = 0
self.scsi_device_type = {
'lsilogic': vim.vm.device.VirtualLsiLogicController,
'paravirtual': vim.vm.device.ParaVirtualSCSIController,
'buslogic': vim.vm.device.VirtualBusLogicController,
'lsilogicsas': vim.vm.device.VirtualLsiLogicSASController,
}
def create_scsi_controller(self, scsi_type):
scsi_ctl = vim.vm.device.VirtualDeviceSpec()
scsi_ctl.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
scsi_device = self.scsi_device_type.get(scsi_type, vim.vm.device.ParaVirtualSCSIController)
scsi_ctl.device = scsi_device()
scsi_ctl.device.busNumber = 0
# While creating a new SCSI controller, temporary key value
# should be unique negative integers
scsi_ctl.device.key = -randint(1000, 9999)
scsi_ctl.device.hotAddRemove = True
scsi_ctl.device.sharedBus = 'noSharing'
scsi_ctl.device.scsiCtlrUnitNumber = 7
return scsi_ctl
def is_scsi_controller(self, device):
return isinstance(device, tuple(self.scsi_device_type.values()))
@staticmethod
def create_ide_controller():
ide_ctl = vim.vm.device.VirtualDeviceSpec()
ide_ctl.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
ide_ctl.device = vim.vm.device.VirtualIDEController()
ide_ctl.device.deviceInfo = vim.Description()
# While creating a new IDE controller, temporary key value
# should be unique negative integers
ide_ctl.device.key = -randint(200, 299)
ide_ctl.device.busNumber = 0
return ide_ctl
@staticmethod
def create_cdrom(ide_ctl, cdrom_type, iso_path=None):
cdrom_spec = vim.vm.device.VirtualDeviceSpec()
cdrom_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
cdrom_spec.device = vim.vm.device.VirtualCdrom()
cdrom_spec.device.controllerKey = ide_ctl.device.key
cdrom_spec.device.key = -1
cdrom_spec.device.connectable = vim.vm.device.VirtualDevice.ConnectInfo()
cdrom_spec.device.connectable.allowGuestControl = True
cdrom_spec.device.connectable.startConnected = (cdrom_type != "none")
if cdrom_type in ["none", "client"]:
cdrom_spec.device.backing = vim.vm.device.VirtualCdrom.RemotePassthroughBackingInfo()
elif cdrom_type == "iso":
cdrom_spec.device.backing = vim.vm.device.VirtualCdrom.IsoBackingInfo(fileName=iso_path)
return cdrom_spec
@staticmethod
def is_equal_cdrom(vm_obj, cdrom_device, cdrom_type, iso_path):
if cdrom_type == "none":
return (isinstance(cdrom_device.backing, vim.vm.device.VirtualCdrom.RemotePassthroughBackingInfo) and
cdrom_device.connectable.allowGuestControl and
not cdrom_device.connectable.startConnected and
(vm_obj.runtime.powerState != vim.VirtualMachinePowerState.poweredOn or not cdrom_device.connectable.connected))
elif cdrom_type == "client":
return (isinstance(cdrom_device.backing, vim.vm.device.VirtualCdrom.RemotePassthroughBackingInfo) and
cdrom_device.connectable.allowGuestControl and
cdrom_device.connectable.startConnected and
(vm_obj.runtime.powerState != vim.VirtualMachinePowerState.poweredOn or cdrom_device.connectable.connected))
elif cdrom_type == "iso":
return (isinstance(cdrom_device.backing, vim.vm.device.VirtualCdrom.IsoBackingInfo) and
cdrom_device.backing.fileName == iso_path and
cdrom_device.connectable.allowGuestControl and
cdrom_device.connectable.startConnected and
(vm_obj.runtime.powerState != vim.VirtualMachinePowerState.poweredOn or cdrom_device.connectable.connected))
def create_scsi_disk(self, scsi_ctl, disk_index=None):
diskspec = vim.vm.device.VirtualDeviceSpec()
diskspec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
diskspec.device = vim.vm.device.VirtualDisk()
diskspec.device.backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
diskspec.device.controllerKey = scsi_ctl.device.key
if self.next_disk_unit_number == 7:
raise AssertionError('The next disk unit number is 7, which is reserved for the SCSI controller')
if disk_index == 7:
raise AssertionError('Disk index 7 is reserved for the SCSI controller')
# Configure disk unit number.
if disk_index is not None:
diskspec.device.unitNumber = disk_index
self.next_disk_unit_number = disk_index + 1
else:
diskspec.device.unitNumber = self.next_disk_unit_number
self.next_disk_unit_number += 1
# unit number 7 is reserved to SCSI controller, increase next index
if self.next_disk_unit_number == 7:
self.next_disk_unit_number += 1
return diskspec
def get_device(self, device_type, name):
nic_dict = dict(pcnet32=vim.vm.device.VirtualPCNet32(),
vmxnet2=vim.vm.device.VirtualVmxnet2(),
vmxnet3=vim.vm.device.VirtualVmxnet3(),
e1000=vim.vm.device.VirtualE1000(),
e1000e=vim.vm.device.VirtualE1000e(),
sriov=vim.vm.device.VirtualSriovEthernetCard(),
)
if device_type in nic_dict:
return nic_dict[device_type]
else:
self.module.fail_json(msg='Invalid device_type "%s"'
' for network "%s"' % (device_type, name))
def create_nic(self, device_type, device_label, device_infos):
nic = vim.vm.device.VirtualDeviceSpec()
nic.device = self.get_device(device_type, device_infos['name'])
nic.device.wakeOnLanEnabled = bool(device_infos.get('wake_on_lan', True))
nic.device.deviceInfo = vim.Description()
nic.device.deviceInfo.label = device_label
nic.device.deviceInfo.summary = device_infos['name']
nic.device.connectable = vim.vm.device.VirtualDevice.ConnectInfo()
nic.device.connectable.startConnected = bool(device_infos.get('start_connected', True))
nic.device.connectable.allowGuestControl = bool(device_infos.get('allow_guest_control', True))
nic.device.connectable.connected = True
if 'mac' in device_infos and is_mac(device_infos['mac']):
nic.device.addressType = 'manual'
nic.device.macAddress = device_infos['mac']
else:
nic.device.addressType = 'generated'
return nic
def integer_value(self, input_value, name):
"""
Function to return an int value for the given input, otherwise fail with an error
Args:
input_value: Input value to retrieve an int value from
name: Name of the input value (used to build the error message)
Returns: (int) if an integer value can be obtained, otherwise fails with an error message.
"""
if isinstance(input_value, int):
return input_value
elif isinstance(input_value, str) and input_value.isdigit():
return int(input_value)
else:
self.module.fail_json(msg='"%s" attribute should be an'
' integer value.' % name)
class PyVmomiCache(object):
""" This class caches references to objects which are requested multiples times but not modified """
def __init__(self, content, dc_name=None):
self.content = content
self.dc_name = dc_name
self.networks = {}
self.clusters = {}
self.esx_hosts = {}
self.parent_datacenters = {}
def find_obj(self, content, types, name, confine_to_datacenter=True):
""" Wrapper around find_obj to set datacenter context """
result = find_obj(content, types, name)
if result and confine_to_datacenter:
if to_text(self.get_parent_datacenter(result).name) != to_text(self.dc_name):
result = None
objects = self.get_all_objs(content, types, confine_to_datacenter=True)
for obj in objects:
if name is None or to_text(obj.name) == to_text(name):
return obj
return result
def get_all_objs(self, content, types, confine_to_datacenter=True):
""" Wrapper around get_all_objs to set datacenter context """
objects = get_all_objs(content, types)
if confine_to_datacenter:
if hasattr(objects, 'items'):
# resource pools come back as a dictionary
# make a copy
tmpobjs = objects.copy()
for k, v in objects.items():
parent_dc = self.get_parent_datacenter(k)
if parent_dc.name != self.dc_name:
tmpobjs.pop(k, None)
objects = tmpobjs
else:
# everything else should be a list
objects = [x for x in objects if self.get_parent_datacenter(x).name == self.dc_name]
return objects
def get_network(self, network):
if network not in self.networks:
self.networks[network] = self.find_obj(self.content, [vim.Network], network)
return self.networks[network]
def get_cluster(self, cluster):
if cluster not in self.clusters:
self.clusters[cluster] = self.find_obj(self.content, [vim.ClusterComputeResource], cluster)
return self.clusters[cluster]
def get_esx_host(self, host):
if host not in self.esx_hosts:
self.esx_hosts[host] = self.find_obj(self.content, [vim.HostSystem], host)
return self.esx_hosts[host]
def get_parent_datacenter(self, obj):
""" Walk the parent tree to find the objects datacenter """
if isinstance(obj, vim.Datacenter):
return obj
if obj in self.parent_datacenters:
return self.parent_datacenters[obj]
datacenter = None
while True:
if not hasattr(obj, 'parent'):
break
obj = obj.parent
if isinstance(obj, vim.Datacenter):
datacenter = obj
break
self.parent_datacenters[obj] = datacenter
return datacenter
class PyVmomiHelper(PyVmomi):
def __init__(self, module):
super(PyVmomiHelper, self).__init__(module)
self.device_helper = PyVmomiDeviceHelper(self.module)
self.configspec = None
self.relospec = None
self.change_detected = False # a change was detected and needs to be applied through reconfiguration
self.change_applied = False # a change was applied meaning at least one task succeeded
self.customspec = None
self.cache = PyVmomiCache(self.content, dc_name=self.params['datacenter'])
def gather_facts(self, vm):
return gather_vm_facts(self.content, vm)
def remove_vm(self, vm):
# https://www.vmware.com/support/developer/converter-sdk/conv60_apireference/vim.ManagedEntity.html#destroy
if vm.summary.runtime.powerState.lower() == 'poweredon':
self.module.fail_json(msg="Virtual machine %s found in 'powered on' state, "
"please use 'force' parameter to remove or poweroff VM "
"and try removing VM again." % vm.name)
task = vm.Destroy()
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'destroy'}
else:
return {'changed': self.change_applied, 'failed': False}
def configure_guestid(self, vm_obj, vm_creation=False):
# guest_id is not required when using templates
if self.params['template']:
return
# guest_id is only mandatory on VM creation
if vm_creation and self.params['guest_id'] is None:
self.module.fail_json(msg="guest_id attribute is mandatory for VM creation")
if self.params['guest_id'] and \
(vm_obj is None or self.params['guest_id'].lower() != vm_obj.summary.config.guestId.lower()):
self.change_detected = True
self.configspec.guestId = self.params['guest_id']
def configure_resource_alloc_info(self, vm_obj):
"""
Function to configure resource allocation information about virtual machine
:param vm_obj: VM object in case of reconfigure, None in case of deploy
:return: None
"""
rai_change_detected = False
memory_allocation = vim.ResourceAllocationInfo()
cpu_allocation = vim.ResourceAllocationInfo()
if 'hardware' in self.params:
if 'mem_limit' in self.params['hardware']:
mem_limit = None
try:
mem_limit = int(self.params['hardware'].get('mem_limit'))
except ValueError:
self.module.fail_json(msg="hardware.mem_limit attribute should be an integer value.")
memory_allocation.limit = mem_limit
if vm_obj is None or memory_allocation.limit != vm_obj.config.memoryAllocation.limit:
rai_change_detected = True
if 'mem_reservation' in self.params['hardware'] or 'memory_reservation' in self.params['hardware']:
mem_reservation = self.params['hardware'].get('mem_reservation')
if mem_reservation is None:
mem_reservation = self.params['hardware'].get('memory_reservation')
try:
mem_reservation = int(mem_reservation)
except ValueError:
self.module.fail_json(msg="hardware.mem_reservation or hardware.memory_reservation should be an integer value.")
memory_allocation.reservation = mem_reservation
if vm_obj is None or \
memory_allocation.reservation != vm_obj.config.memoryAllocation.reservation:
rai_change_detected = True
if 'cpu_limit' in self.params['hardware']:
cpu_limit = None
try:
cpu_limit = int(self.params['hardware'].get('cpu_limit'))
except ValueError:
self.module.fail_json(msg="hardware.cpu_limit attribute should be an integer value.")
cpu_allocation.limit = cpu_limit
if vm_obj is None or cpu_allocation.limit != vm_obj.config.cpuAllocation.limit:
rai_change_detected = True
if 'cpu_reservation' in self.params['hardware']:
cpu_reservation = None
try:
cpu_reservation = int(self.params['hardware'].get('cpu_reservation'))
except ValueError:
self.module.fail_json(msg="hardware.cpu_reservation should be an integer value.")
cpu_allocation.reservation = cpu_reservation
if vm_obj is None or \
cpu_allocation.reservation != vm_obj.config.cpuAllocation.reservation:
rai_change_detected = True
if rai_change_detected:
self.configspec.memoryAllocation = memory_allocation
self.configspec.cpuAllocation = cpu_allocation
self.change_detected = True
def configure_cpu_and_memory(self, vm_obj, vm_creation=False):
# set cpu/memory/etc
if 'hardware' in self.params:
if 'num_cpus' in self.params['hardware']:
try:
num_cpus = int(self.params['hardware']['num_cpus'])
except ValueError:
self.module.fail_json(msg="hardware.num_cpus attribute should be an integer value.")
                # check VM power state and CPU hot-add/hot-remove state before reconfiguring the VM
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
if not vm_obj.config.cpuHotRemoveEnabled and num_cpus < vm_obj.config.hardware.numCPU:
self.module.fail_json(msg="Configured cpu number is less than the cpu number of the VM, "
"cpuHotRemove is not enabled")
if not vm_obj.config.cpuHotAddEnabled and num_cpus > vm_obj.config.hardware.numCPU:
self.module.fail_json(msg="Configured cpu number is more than the cpu number of the VM, "
"cpuHotAdd is not enabled")
if 'num_cpu_cores_per_socket' in self.params['hardware']:
try:
num_cpu_cores_per_socket = int(self.params['hardware']['num_cpu_cores_per_socket'])
except ValueError:
self.module.fail_json(msg="hardware.num_cpu_cores_per_socket attribute "
"should be an integer value.")
if num_cpus % num_cpu_cores_per_socket != 0:
self.module.fail_json(msg="hardware.num_cpus attribute should be a multiple "
"of hardware.num_cpu_cores_per_socket")
self.configspec.numCoresPerSocket = num_cpu_cores_per_socket
if vm_obj is None or self.configspec.numCoresPerSocket != vm_obj.config.hardware.numCoresPerSocket:
self.change_detected = True
self.configspec.numCPUs = num_cpus
if vm_obj is None or self.configspec.numCPUs != vm_obj.config.hardware.numCPU:
self.change_detected = True
            # num_cpus is mandatory for VM creation
elif vm_creation and not self.params['template']:
self.module.fail_json(msg="hardware.num_cpus attribute is mandatory for VM creation")
if 'memory_mb' in self.params['hardware']:
try:
memory_mb = int(self.params['hardware']['memory_mb'])
except ValueError:
self.module.fail_json(msg="Failed to parse hardware.memory_mb value."
" Please refer the documentation and provide"
" correct value.")
                # check VM power state and memory hot-add state before reconfiguring the VM
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
if vm_obj.config.memoryHotAddEnabled and memory_mb < vm_obj.config.hardware.memoryMB:
self.module.fail_json(msg="Configured memory is less than memory size of the VM, "
"operation is not supported")
elif not vm_obj.config.memoryHotAddEnabled and memory_mb != vm_obj.config.hardware.memoryMB:
self.module.fail_json(msg="memoryHotAdd is not enabled")
self.configspec.memoryMB = memory_mb
if vm_obj is None or self.configspec.memoryMB != vm_obj.config.hardware.memoryMB:
self.change_detected = True
# memory_mb is mandatory for VM creation
elif vm_creation and not self.params['template']:
self.module.fail_json(msg="hardware.memory_mb attribute is mandatory for VM creation")
if 'hotadd_memory' in self.params['hardware']:
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn and \
vm_obj.config.memoryHotAddEnabled != bool(self.params['hardware']['hotadd_memory']):
self.module.fail_json(msg="Configure hotadd memory operation is not supported when VM is power on")
self.configspec.memoryHotAddEnabled = bool(self.params['hardware']['hotadd_memory'])
if vm_obj is None or self.configspec.memoryHotAddEnabled != vm_obj.config.memoryHotAddEnabled:
self.change_detected = True
if 'hotadd_cpu' in self.params['hardware']:
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn and \
vm_obj.config.cpuHotAddEnabled != bool(self.params['hardware']['hotadd_cpu']):
self.module.fail_json(msg="Configure hotadd cpu operation is not supported when VM is power on")
self.configspec.cpuHotAddEnabled = bool(self.params['hardware']['hotadd_cpu'])
if vm_obj is None or self.configspec.cpuHotAddEnabled != vm_obj.config.cpuHotAddEnabled:
self.change_detected = True
if 'hotremove_cpu' in self.params['hardware']:
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn and \
vm_obj.config.cpuHotRemoveEnabled != bool(self.params['hardware']['hotremove_cpu']):
self.module.fail_json(msg="Configure hotremove cpu operation is not supported when VM is power on")
self.configspec.cpuHotRemoveEnabled = bool(self.params['hardware']['hotremove_cpu'])
if vm_obj is None or self.configspec.cpuHotRemoveEnabled != vm_obj.config.cpuHotRemoveEnabled:
self.change_detected = True
if 'memory_reservation_lock' in self.params['hardware']:
self.configspec.memoryReservationLockedToMax = bool(self.params['hardware']['memory_reservation_lock'])
if vm_obj is None or self.configspec.memoryReservationLockedToMax != vm_obj.config.memoryReservationLockedToMax:
self.change_detected = True
if 'boot_firmware' in self.params['hardware']:
            # boot firmware re-config can cause boot issues
if vm_obj is not None:
return
boot_firmware = self.params['hardware']['boot_firmware'].lower()
if boot_firmware not in ('bios', 'efi'):
self.module.fail_json(msg="hardware.boot_firmware value is invalid [%s]."
" Need one of ['bios', 'efi']." % boot_firmware)
self.configspec.firmware = boot_firmware
self.change_detected = True
def configure_cdrom(self, vm_obj):
# Configure the VM CD-ROM
if "cdrom" in self.params and self.params["cdrom"]:
if "type" not in self.params["cdrom"] or self.params["cdrom"]["type"] not in ["none", "client", "iso"]:
self.module.fail_json(msg="cdrom.type is mandatory")
if self.params["cdrom"]["type"] == "iso" and ("iso_path" not in self.params["cdrom"] or not self.params["cdrom"]["iso_path"]):
self.module.fail_json(msg="cdrom.iso_path is mandatory in case cdrom.type is iso")
if vm_obj and vm_obj.config.template:
# Changing CD-ROM settings on a template is not supported
return
cdrom_spec = None
cdrom_device = self.get_vm_cdrom_device(vm=vm_obj)
iso_path = self.params["cdrom"]["iso_path"] if "iso_path" in self.params["cdrom"] else None
if cdrom_device is None:
# Creating new CD-ROM
ide_device = self.get_vm_ide_device(vm=vm_obj)
if ide_device is None:
# Creating new IDE device
ide_device = self.device_helper.create_ide_controller()
self.change_detected = True
self.configspec.deviceChange.append(ide_device)
elif len(ide_device.device) > 3:
self.module.fail_json(msg="hardware.cdrom specified for a VM or template which already has 4 IDE devices of which none are a cdrom")
cdrom_spec = self.device_helper.create_cdrom(ide_ctl=ide_device, cdrom_type=self.params["cdrom"]["type"], iso_path=iso_path)
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
cdrom_spec.device.connectable.connected = (self.params["cdrom"]["type"] != "none")
elif not self.device_helper.is_equal_cdrom(vm_obj=vm_obj, cdrom_device=cdrom_device, cdrom_type=self.params["cdrom"]["type"], iso_path=iso_path):
# Updating an existing CD-ROM
if self.params["cdrom"]["type"] in ["client", "none"]:
cdrom_device.backing = vim.vm.device.VirtualCdrom.RemotePassthroughBackingInfo()
elif self.params["cdrom"]["type"] == "iso":
cdrom_device.backing = vim.vm.device.VirtualCdrom.IsoBackingInfo(fileName=iso_path)
cdrom_device.connectable = vim.vm.device.VirtualDevice.ConnectInfo()
cdrom_device.connectable.allowGuestControl = True
cdrom_device.connectable.startConnected = (self.params["cdrom"]["type"] != "none")
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
cdrom_device.connectable.connected = (self.params["cdrom"]["type"] != "none")
cdrom_spec = vim.vm.device.VirtualDeviceSpec()
cdrom_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
cdrom_spec.device = cdrom_device
if cdrom_spec:
self.change_detected = True
self.configspec.deviceChange.append(cdrom_spec)
def configure_hardware_params(self, vm_obj):
"""
Function to configure hardware related configuration of virtual machine
Args:
vm_obj: virtual machine object
"""
if 'hardware' in self.params:
if 'max_connections' in self.params['hardware']:
# maxMksConnections == max_connections
self.configspec.maxMksConnections = int(self.params['hardware']['max_connections'])
if vm_obj is None or self.configspec.maxMksConnections != vm_obj.config.maxMksConnections:
self.change_detected = True
if 'nested_virt' in self.params['hardware']:
self.configspec.nestedHVEnabled = bool(self.params['hardware']['nested_virt'])
if vm_obj is None or self.configspec.nestedHVEnabled != bool(vm_obj.config.nestedHVEnabled):
self.change_detected = True
if 'version' in self.params['hardware']:
hw_version_check_failed = False
temp_version = self.params['hardware'].get('version', 10)
try:
temp_version = int(temp_version)
except ValueError:
hw_version_check_failed = True
if temp_version not in range(3, 15):
hw_version_check_failed = True
if hw_version_check_failed:
self.module.fail_json(msg="Failed to set hardware.version '%s' value as valid"
" values range from 3 (ESX 2.x) to 14 (ESXi 6.5 and greater)." % temp_version)
# Hardware version is denoted as "vmx-10"
version = "vmx-%02d" % temp_version
self.configspec.version = version
if vm_obj is None or self.configspec.version != vm_obj.config.version:
self.change_detected = True
if vm_obj is not None:
# VM exists and we need to update the hardware version
current_version = vm_obj.config.version
# current_version = "vmx-10"
version_digit = int(current_version.split("-", 1)[-1])
if temp_version < version_digit:
self.module.fail_json(msg="Current hardware version '%d' which is greater than the specified"
" version '%d'. Downgrading hardware version is"
" not supported. Please specify version greater"
" than the current version." % (version_digit,
temp_version))
new_version = "vmx-%02d" % temp_version
try:
task = vm_obj.UpgradeVM_Task(new_version)
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'upgrade'}
except vim.fault.AlreadyUpgraded:
# Don't fail if VM is already upgraded.
pass
if 'virt_based_security' in self.params['hardware']:
host_version = self.select_host().summary.config.product.version
if int(host_version.split('.')[0]) < 6 or (int(host_version.split('.')[0]) == 6 and int(host_version.split('.')[1]) < 7):
self.module.fail_json(msg="ESXi version %s not support VBS." % host_version)
guest_ids = ['windows9_64Guest', 'windows9Server64Guest']
if vm_obj is None:
guestid = self.configspec.guestId
else:
guestid = vm_obj.summary.config.guestId
if guestid not in guest_ids:
self.module.fail_json(msg="Guest '%s' not support VBS." % guestid)
if (vm_obj is None and int(self.configspec.version.split('-')[1]) >= 14) or \
(vm_obj and int(vm_obj.config.version.split('-')[1]) >= 14 and (vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOff)):
self.configspec.flags = vim.vm.FlagInfo()
self.configspec.flags.vbsEnabled = bool(self.params['hardware']['virt_based_security'])
if bool(self.params['hardware']['virt_based_security']):
self.configspec.flags.vvtdEnabled = True
self.configspec.nestedHVEnabled = True
if (vm_obj is None and self.configspec.firmware == 'efi') or \
(vm_obj and vm_obj.config.firmware == 'efi'):
self.configspec.bootOptions = vim.vm.BootOptions()
self.configspec.bootOptions.efiSecureBootEnabled = True
else:
self.module.fail_json(msg="Not support VBS when firmware is BIOS.")
if vm_obj is None or self.configspec.flags.vbsEnabled != vm_obj.config.flags.vbsEnabled:
self.change_detected = True
def get_device_by_type(self, vm=None, type=None):
if vm is None or type is None:
return None
for device in vm.config.hardware.device:
if isinstance(device, type):
return device
return None
def get_vm_cdrom_device(self, vm=None):
return self.get_device_by_type(vm=vm, type=vim.vm.device.VirtualCdrom)
def get_vm_ide_device(self, vm=None):
return self.get_device_by_type(vm=vm, type=vim.vm.device.VirtualIDEController)
def get_vm_network_interfaces(self, vm=None):
device_list = []
if vm is None:
return device_list
nw_device_types = (vim.vm.device.VirtualPCNet32, vim.vm.device.VirtualVmxnet2,
vim.vm.device.VirtualVmxnet3, vim.vm.device.VirtualE1000,
vim.vm.device.VirtualE1000e, vim.vm.device.VirtualSriovEthernetCard)
for device in vm.config.hardware.device:
if isinstance(device, nw_device_types):
device_list.append(device)
return device_list
def sanitize_network_params(self):
"""
        Sanitize user-provided network parameters
Returns: A sanitized list of network params, else fails
"""
network_devices = list()
# Clean up user data here
for network in self.params['networks']:
if 'name' not in network and 'vlan' not in network:
self.module.fail_json(msg="Please specify at least a network name or"
" a VLAN name under VM network list.")
if 'name' in network and self.cache.get_network(network['name']) is None:
self.module.fail_json(msg="Network '%(name)s' does not exist." % network)
elif 'vlan' in network:
dvps = self.cache.get_all_objs(self.content, [vim.dvs.DistributedVirtualPortgroup])
for dvp in dvps:
if hasattr(dvp.config.defaultPortConfig, 'vlan') and \
isinstance(dvp.config.defaultPortConfig.vlan.vlanId, int) and \
str(dvp.config.defaultPortConfig.vlan.vlanId) == str(network['vlan']):
network['name'] = dvp.config.name
break
if 'dvswitch_name' in network and \
dvp.config.distributedVirtualSwitch.name == network['dvswitch_name'] and \
dvp.config.name == network['vlan']:
network['name'] = dvp.config.name
break
if dvp.config.name == network['vlan']:
network['name'] = dvp.config.name
break
else:
self.module.fail_json(msg="VLAN '%(vlan)s' does not exist." % network)
if 'type' in network:
if network['type'] not in ['dhcp', 'static']:
self.module.fail_json(msg="Network type '%(type)s' is not a valid parameter."
" Valid parameters are ['dhcp', 'static']." % network)
if network['type'] != 'static' and ('ip' in network or 'netmask' in network):
self.module.fail_json(msg='Static IP information provided for network "%(name)s",'
' but "type" is set to "%(type)s".' % network)
else:
                # 'type' is an optional parameter; if the user provided an IP or netmask, assume
                # the network type is 'static'
if 'ip' in network or 'netmask' in network:
network['type'] = 'static'
else:
# User wants network type as 'dhcp'
network['type'] = 'dhcp'
if network.get('type') == 'static':
if 'ip' in network and 'netmask' not in network:
self.module.fail_json(msg="'netmask' is required if 'ip' is"
" specified under VM network list.")
if 'ip' not in network and 'netmask' in network:
self.module.fail_json(msg="'ip' is required if 'netmask' is"
" specified under VM network list.")
validate_device_types = ['pcnet32', 'vmxnet2', 'vmxnet3', 'e1000', 'e1000e', 'sriov']
if 'device_type' in network and network['device_type'] not in validate_device_types:
self.module.fail_json(msg="Device type specified '%s' is not valid."
" Please specify correct device"
" type from ['%s']." % (network['device_type'],
"', '".join(validate_device_types)))
if 'mac' in network and not is_mac(network['mac']):
self.module.fail_json(msg="Device MAC address '%s' is invalid."
" Please provide correct MAC address." % network['mac'])
network_devices.append(network)
return network_devices
def configure_network(self, vm_obj):
# Ignore empty networks, this permits to keep networks when deploying a template/cloning a VM
if len(self.params['networks']) == 0:
return
network_devices = self.sanitize_network_params()
# List current device for Clone or Idempotency
current_net_devices = self.get_vm_network_interfaces(vm=vm_obj)
if len(network_devices) < len(current_net_devices):
self.module.fail_json(msg="Given network device list is lesser than current VM device list (%d < %d). "
"Removing interfaces is not allowed"
% (len(network_devices), len(current_net_devices)))
for key in range(0, len(network_devices)):
nic_change_detected = False
network_name = network_devices[key]['name']
if key < len(current_net_devices) and (vm_obj or self.params['template']):
                # We are editing existing network devices; this happens when we
                # are cloning from a VM or template
nic = vim.vm.device.VirtualDeviceSpec()
nic.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
nic.device = current_net_devices[key]
if ('wake_on_lan' in network_devices[key] and
nic.device.wakeOnLanEnabled != network_devices[key].get('wake_on_lan')):
nic.device.wakeOnLanEnabled = network_devices[key].get('wake_on_lan')
nic_change_detected = True
if ('start_connected' in network_devices[key] and
nic.device.connectable.startConnected != network_devices[key].get('start_connected')):
nic.device.connectable.startConnected = network_devices[key].get('start_connected')
nic_change_detected = True
if ('allow_guest_control' in network_devices[key] and
nic.device.connectable.allowGuestControl != network_devices[key].get('allow_guest_control')):
nic.device.connectable.allowGuestControl = network_devices[key].get('allow_guest_control')
nic_change_detected = True
if nic.device.deviceInfo.summary != network_name:
nic.device.deviceInfo.summary = network_name
nic_change_detected = True
if 'device_type' in network_devices[key]:
device = self.device_helper.get_device(network_devices[key]['device_type'], network_name)
device_class = type(device)
if not isinstance(nic.device, device_class):
self.module.fail_json(msg="Changing the device type is not possible when interface is already present. "
"The failing device type is %s" % network_devices[key]['device_type'])
# Changing mac address has no effect when editing interface
if 'mac' in network_devices[key] and nic.device.macAddress != current_net_devices[key].macAddress:
self.module.fail_json(msg="Changing MAC address has not effect when interface is already present. "
"The failing new MAC address is %s" % nic.device.macAddress)
else:
# Default device type is vmxnet3, VMware best practice
device_type = network_devices[key].get('device_type', 'vmxnet3')
nic = self.device_helper.create_nic(device_type,
'Network Adapter %s' % (key + 1),
network_devices[key])
nic.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
nic_change_detected = True
if hasattr(self.cache.get_network(network_name), 'portKeys'):
# VDS switch
pg_obj = None
if 'dvswitch_name' in network_devices[key]:
dvs_name = network_devices[key]['dvswitch_name']
dvs_obj = find_dvs_by_name(self.content, dvs_name)
if dvs_obj is None:
self.module.fail_json(msg="Unable to find distributed virtual switch %s" % dvs_name)
pg_obj = find_dvspg_by_name(dvs_obj, network_name)
if pg_obj is None:
self.module.fail_json(msg="Unable to find distributed port group %s" % network_name)
else:
pg_obj = self.cache.find_obj(self.content, [vim.dvs.DistributedVirtualPortgroup], network_name)
# TODO: (akasurde) There is no way to find association between resource pool and distributed virtual portgroup
# For now, check if we are able to find distributed virtual switch
if not pg_obj.config.distributedVirtualSwitch:
self.module.fail_json(msg="Failed to find distributed virtual switch which is associated with"
" distributed virtual portgroup '%s'. Make sure hostsystem is associated with"
" the given distributed virtual portgroup. Also, check if user has correct"
" permission to access distributed virtual switch in the given portgroup." % pg_obj.name)
if (nic.device.backing and
(not hasattr(nic.device.backing, 'port') or
(nic.device.backing.port.portgroupKey != pg_obj.key or
nic.device.backing.port.switchUuid != pg_obj.config.distributedVirtualSwitch.uuid))):
nic_change_detected = True
dvs_port_connection = vim.dvs.PortConnection()
dvs_port_connection.portgroupKey = pg_obj.key
                    # If the user specifies a distributed port group without associating it to the host system on which
                    # the virtual machine is going to be deployed, we get an error. We can infer that there is no
                    # association between the given distributed port group and the host system.
host_system = self.params.get('esxi_hostname')
if host_system and host_system not in [host.config.host.name for host in pg_obj.config.distributedVirtualSwitch.config.host]:
self.module.fail_json(msg="It seems that host system '%s' is not associated with distributed"
" virtual portgroup '%s'. Please make sure host system is associated"
" with given distributed virtual portgroup" % (host_system, pg_obj.name))
dvs_port_connection.switchUuid = pg_obj.config.distributedVirtualSwitch.uuid
nic.device.backing = vim.vm.device.VirtualEthernetCard.DistributedVirtualPortBackingInfo()
nic.device.backing.port = dvs_port_connection
elif isinstance(self.cache.get_network(network_name), vim.OpaqueNetwork):
# NSX-T Logical Switch
nic.device.backing = vim.vm.device.VirtualEthernetCard.OpaqueNetworkBackingInfo()
network_id = self.cache.get_network(network_name).summary.opaqueNetworkId
nic.device.backing.opaqueNetworkType = 'nsx.LogicalSwitch'
nic.device.backing.opaqueNetworkId = network_id
nic.device.deviceInfo.summary = 'nsx.LogicalSwitch: %s' % network_id
nic_change_detected = True
else:
# vSwitch
if not isinstance(nic.device.backing, vim.vm.device.VirtualEthernetCard.NetworkBackingInfo):
nic.device.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo()
nic_change_detected = True
net_obj = self.cache.get_network(network_name)
if nic.device.backing.network != net_obj:
nic.device.backing.network = net_obj
nic_change_detected = True
if nic.device.backing.deviceName != network_name:
nic.device.backing.deviceName = network_name
nic_change_detected = True
if nic_change_detected:
# Change to fix the issue found while configuring opaque network
# VMs cloned from a template with opaque network will get disconnected
# Replacing deprecated config parameter with relocation Spec
if isinstance(self.cache.get_network(network_name), vim.OpaqueNetwork):
self.relospec.deviceChange.append(nic)
else:
self.configspec.deviceChange.append(nic)
self.change_detected = True
def configure_vapp_properties(self, vm_obj):
if len(self.params['vapp_properties']) == 0:
return
for x in self.params['vapp_properties']:
if not x.get('id'):
self.module.fail_json(msg="id is required to set vApp property")
new_vmconfig_spec = vim.vApp.VmConfigSpec()
if vm_obj:
# VM exists
# This is primarily for vcsim/integration tests, unset vAppConfig was not seen on my deployments
orig_spec = vm_obj.config.vAppConfig if vm_obj.config.vAppConfig else new_vmconfig_spec
vapp_properties_current = dict((x.id, x) for x in orig_spec.property)
vapp_properties_to_change = dict((x['id'], x) for x in self.params['vapp_properties'])
# each property must have a unique key
# init key counter with max value + 1
all_keys = [x.key for x in orig_spec.property]
new_property_index = max(all_keys) + 1 if all_keys else 0
for property_id, property_spec in vapp_properties_to_change.items():
is_property_changed = False
new_vapp_property_spec = vim.vApp.PropertySpec()
if property_id in vapp_properties_current:
if property_spec.get('operation') == 'remove':
new_vapp_property_spec.operation = 'remove'
new_vapp_property_spec.removeKey = vapp_properties_current[property_id].key
is_property_changed = True
else:
# this is 'edit' branch
new_vapp_property_spec.operation = 'edit'
new_vapp_property_spec.info = vapp_properties_current[property_id]
try:
for property_name, property_value in property_spec.items():
if property_name == 'operation':
# operation is not an info object property
# if set to anything other than 'remove' we don't fail
continue
# Updating attributes only if needed
if getattr(new_vapp_property_spec.info, property_name) != property_value:
setattr(new_vapp_property_spec.info, property_name, property_value)
is_property_changed = True
except Exception as e:
msg = "Failed to set vApp property field='%s' and value='%s'. Error: %s" % (property_name, property_value, to_text(e))
self.module.fail_json(msg=msg)
else:
if property_spec.get('operation') == 'remove':
# attempt to delete non-existent property
continue
# this is add new property branch
new_vapp_property_spec.operation = 'add'
property_info = vim.vApp.PropertyInfo()
property_info.classId = property_spec.get('classId')
property_info.instanceId = property_spec.get('instanceId')
property_info.id = property_spec.get('id')
property_info.category = property_spec.get('category')
property_info.label = property_spec.get('label')
property_info.type = property_spec.get('type', 'string')
property_info.userConfigurable = property_spec.get('userConfigurable', True)
property_info.defaultValue = property_spec.get('defaultValue')
property_info.value = property_spec.get('value', '')
property_info.description = property_spec.get('description')
new_vapp_property_spec.info = property_info
new_vapp_property_spec.info.key = new_property_index
new_property_index += 1
is_property_changed = True
if is_property_changed:
new_vmconfig_spec.property.append(new_vapp_property_spec)
else:
# New VM
all_keys = [x.key for x in new_vmconfig_spec.property]
new_property_index = max(all_keys) + 1 if all_keys else 0
vapp_properties_to_change = dict((x['id'], x) for x in self.params['vapp_properties'])
is_property_changed = False
for property_id, property_spec in vapp_properties_to_change.items():
new_vapp_property_spec = vim.vApp.PropertySpec()
# this is add new property branch
new_vapp_property_spec.operation = 'add'
property_info = vim.vApp.PropertyInfo()
property_info.classId = property_spec.get('classId')
property_info.instanceId = property_spec.get('instanceId')
property_info.id = property_spec.get('id')
property_info.category = property_spec.get('category')
property_info.label = property_spec.get('label')
property_info.type = property_spec.get('type', 'string')
property_info.userConfigurable = property_spec.get('userConfigurable', True)
property_info.defaultValue = property_spec.get('defaultValue')
property_info.value = property_spec.get('value', '')
property_info.description = property_spec.get('description')
new_vapp_property_spec.info = property_info
new_vapp_property_spec.info.key = new_property_index
new_property_index += 1
is_property_changed = True
if is_property_changed:
new_vmconfig_spec.property.append(new_vapp_property_spec)
if new_vmconfig_spec.property:
self.configspec.vAppConfig = new_vmconfig_spec
self.change_detected = True
# Apply the 'customvalues' parameter as custom field values on the VM
def customize_customvalues(self, vm_obj):
if len(self.params['customvalues']) == 0:
return
facts = self.gather_facts(vm_obj)
for kv in self.params['customvalues']:
if 'key' not in kv or 'value' not in kv:
self.module.fail_json(msg="customvalues items require both 'key' and 'value' fields.")
key_id = None
for field in self.content.customFieldsManager.field:
if field.name == kv['key']:
key_id = field.key
break
if not key_id:
self.module.fail_json(msg="Unable to find custom value key %s" % kv['key'])
# If kv differs from the value fetched from facts, change it
if kv['key'] not in facts['customvalues'] or facts['customvalues'][kv['key']] != kv['value']:
self.content.customFieldsManager.SetField(entity=vm_obj, key=key_id, value=kv['value'])
self.change_detected = True
# Build the guest OS customization specification (networking, identity, DNS)
def customize_vm(self, vm_obj):
# User specified customization specification
custom_spec_name = self.params.get('customization_spec')
if custom_spec_name:
cc_mgr = self.content.customizationSpecManager
if cc_mgr.DoesCustomizationSpecExist(name=custom_spec_name):
temp_spec = cc_mgr.GetCustomizationSpec(name=custom_spec_name)
self.customspec = temp_spec.spec
return
else:
self.module.fail_json(msg="Unable to find customization specification"
" '%s' in given configuration." % custom_spec_name)
# Network settings
adaptermaps = []
for network in self.params['networks']:
guest_map = vim.vm.customization.AdapterMapping()
guest_map.adapter = vim.vm.customization.IPSettings()
if 'ip' in network and 'netmask' in network:
guest_map.adapter.ip = vim.vm.customization.FixedIp()
guest_map.adapter.ip.ipAddress = str(network['ip'])
guest_map.adapter.subnetMask = str(network['netmask'])
elif 'type' in network and network['type'] == 'dhcp':
guest_map.adapter.ip = vim.vm.customization.DhcpIpGenerator()
if 'gateway' in network:
guest_map.adapter.gateway = network['gateway']
# On Windows, DNS domain and DNS servers can be set by network interface
# https://pubs.vmware.com/vi3/sdk/ReferenceGuide/vim.vm.customization.IPSettings.html
if 'domain' in network:
guest_map.adapter.dnsDomain = network['domain']
elif 'domain' in self.params['customization']:
guest_map.adapter.dnsDomain = self.params['customization']['domain']
if 'dns_servers' in network:
guest_map.adapter.dnsServerList = network['dns_servers']
elif 'dns_servers' in self.params['customization']:
guest_map.adapter.dnsServerList = self.params['customization']['dns_servers']
adaptermaps.append(guest_map)
# Global DNS settings
globalip = vim.vm.customization.GlobalIPSettings()
if 'dns_servers' in self.params['customization']:
globalip.dnsServerList = self.params['customization']['dns_servers']
# TODO: Maybe list the different domains from the interfaces here by default?
if 'dns_suffix' in self.params['customization']:
dns_suffix = self.params['customization']['dns_suffix']
if isinstance(dns_suffix, list):
globalip.dnsSuffixList = " ".join(dns_suffix)
else:
globalip.dnsSuffixList = dns_suffix
elif 'domain' in self.params['customization']:
globalip.dnsSuffixList = self.params['customization']['domain']
if self.params['guest_id']:
guest_id = self.params['guest_id']
else:
guest_id = vm_obj.summary.config.guestId
# For windows guest OS, use SysPrep
# https://pubs.vmware.com/vi3/sdk/ReferenceGuide/vim.vm.customization.Sysprep.html#field_detail
if 'win' in guest_id:
ident = vim.vm.customization.Sysprep()
ident.userData = vim.vm.customization.UserData()
# Setting hostName, orgName and fullName is mandatory, so we set some default when missing
ident.userData.computerName = vim.vm.customization.FixedName()
# computer name will be truncated to 15 characters if using VM name
default_name = self.params['name'].replace(' ', '')
default_name = ''.join([c for c in default_name if c not in string.punctuation])
ident.userData.computerName.name = str(self.params['customization'].get('hostname', default_name[0:15]))
ident.userData.fullName = str(self.params['customization'].get('fullname', 'Administrator'))
ident.userData.orgName = str(self.params['customization'].get('orgname', 'ACME'))
if 'productid' in self.params['customization']:
ident.userData.productId = str(self.params['customization']['productid'])
ident.guiUnattended = vim.vm.customization.GuiUnattended()
if 'autologon' in self.params['customization']:
ident.guiUnattended.autoLogon = self.params['customization']['autologon']
ident.guiUnattended.autoLogonCount = self.params['customization'].get('autologoncount', 1)
if 'timezone' in self.params['customization']:
# Check if the timezone value is an int before proceeding.
ident.guiUnattended.timeZone = self.device_helper.integer_value(
self.params['customization']['timezone'],
'customization.timezone')
ident.identification = vim.vm.customization.Identification()
if self.params['customization'].get('password', '') != '':
ident.guiUnattended.password = vim.vm.customization.Password()
ident.guiUnattended.password.value = str(self.params['customization']['password'])
ident.guiUnattended.password.plainText = True
if 'joindomain' in self.params['customization']:
if 'domainadmin' not in self.params['customization'] or 'domainadminpassword' not in self.params['customization']:
self.module.fail_json(msg="'domainadmin' and 'domainadminpassword' entries are mandatory in 'customization' section to use "
"joindomain feature")
ident.identification.domainAdmin = str(self.params['customization']['domainadmin'])
ident.identification.joinDomain = str(self.params['customization']['joindomain'])
ident.identification.domainAdminPassword = vim.vm.customization.Password()
ident.identification.domainAdminPassword.value = str(self.params['customization']['domainadminpassword'])
ident.identification.domainAdminPassword.plainText = True
elif 'joinworkgroup' in self.params['customization']:
ident.identification.joinWorkgroup = str(self.params['customization']['joinworkgroup'])
if 'runonce' in self.params['customization']:
ident.guiRunOnce = vim.vm.customization.GuiRunOnce()
ident.guiRunOnce.commandList = self.params['customization']['runonce']
else:
# FIXME: We have no clue whether this non-Windows OS is actually Linux, hence it might fail!
# For Linux guest OS, use LinuxPrep
# https://pubs.vmware.com/vi3/sdk/ReferenceGuide/vim.vm.customization.LinuxPrep.html
ident = vim.vm.customization.LinuxPrep()
# TODO: Maybe add the domain from the interface if missing?
if 'domain' in self.params['customization']:
ident.domain = str(self.params['customization']['domain'])
ident.hostName = vim.vm.customization.FixedName()
hostname = str(self.params['customization'].get('hostname', self.params['name'].split('.')[0]))
# Remove all characters except alphanumeric and minus which is allowed by RFC 952
valid_hostname = re.sub(r"[^a-zA-Z0-9\-]", "", hostname)
ident.hostName.name = valid_hostname
# List of supported time zones for different vSphere versions in Linux/Unix systems
# https://kb.vmware.com/s/article/2145518
if 'timezone' in self.params['customization']:
ident.timeZone = str(self.params['customization']['timezone'])
if 'hwclockUTC' in self.params['customization']:
ident.hwClockUTC = self.params['customization']['hwclockUTC']
self.customspec = vim.vm.customization.Specification()
self.customspec.nicSettingMap = adaptermaps
self.customspec.globalIPSettings = globalip
self.customspec.identity = ident
def get_vm_scsi_controller(self, vm_obj):
# If vm_obj doesn't exist there is no SCSI controller to find
if vm_obj is None:
return None
for device in vm_obj.config.hardware.device:
if self.device_helper.is_scsi_controller(device):
scsi_ctl = vim.vm.device.VirtualDeviceSpec()
scsi_ctl.device = device
return scsi_ctl
return None
def get_configured_disk_size(self, expected_disk_spec):
# what size is it?
if [x for x in expected_disk_spec.keys() if x.startswith('size_') or x == 'size']:
# size, size_tb, size_gb, size_mb, size_kb
if 'size' in expected_disk_spec:
size_regex = re.compile(r'(\d+(?:\.\d+)?)([tgmkTGMK][bB])')
disk_size_m = size_regex.match(expected_disk_spec['size'])
try:
if disk_size_m:
expected = disk_size_m.group(1)
unit = disk_size_m.group(2)
else:
raise ValueError
if re.match(r'\d+\.\d+', expected):
# We found float value in string, let's typecast it
expected = float(expected)
else:
# We found int value in string, let's typecast it
expected = int(expected)
if not expected or not unit:
raise ValueError
except (TypeError, ValueError, NameError):
# Common failure
self.module.fail_json(msg="Failed to parse disk size, please review the value"
" provided using the documentation.")
else:
param = [x for x in expected_disk_spec.keys() if x.startswith('size_')][0]
unit = param.split('_')[-1].lower()
expected = [x[1] for x in expected_disk_spec.items() if x[0].startswith('size_')][0]
expected = int(expected)
disk_units = dict(tb=3, gb=2, mb=1, kb=0)
# Lowercase before the membership check so units such as 'GB' are accepted
unit = unit.lower()
if unit in disk_units:
return expected * (1024 ** disk_units[unit])
else:
self.module.fail_json(msg="%s is not a supported unit for disk size."
" Supported units are ['%s']." % (unit,
"', '".join(disk_units.keys())))
# A disk was specified without any size attribute, fail
self.module.fail_json(
msg="No size, size_kb, size_mb, size_gb or size_tb attribute found into disk configuration")
def find_vmdk(self, vmdk_path):
"""
Takes a vsphere datastore path in the format
[datastore_name] path/to/file.vmdk
Returns vsphere file object or raises RuntimeError
"""
datastore_name, vmdk_fullpath, vmdk_filename, vmdk_folder = self.vmdk_disk_path_split(vmdk_path)
datastore = self.cache.find_obj(self.content, [vim.Datastore], datastore_name)
if datastore is None:
self.module.fail_json(msg="Failed to find the datastore %s" % datastore_name)
return self.find_vmdk_file(datastore, vmdk_fullpath, vmdk_filename, vmdk_folder)
def add_existing_vmdk(self, vm_obj, expected_disk_spec, diskspec, scsi_ctl):
"""
Adds vmdk file described by expected_disk_spec['filename'], retrieves the file
information and adds the correct spec to self.configspec.deviceChange.
"""
filename = expected_disk_spec['filename']
# if this is a new disk, or the disk file names are different
if (vm_obj and diskspec.device.backing.fileName != filename) or vm_obj is None:
vmdk_file = self.find_vmdk(expected_disk_spec['filename'])
diskspec.device.backing.fileName = expected_disk_spec['filename']
diskspec.device.capacityInKB = VmomiSupport.vmodlTypes['long'](vmdk_file.fileSize / 1024)
diskspec.device.key = -1
self.change_detected = True
self.configspec.deviceChange.append(diskspec)
# Create, resize or reconfigure the VM's virtual disks and SCSI controller
def configure_disks(self, vm_obj):
# Ignore an empty disk list, this permits keeping disks when deploying from a template or cloning a VM
if len(self.params['disk']) == 0:
return
scsi_ctl = self.get_vm_scsi_controller(vm_obj)
# Create scsi controller only if we are deploying a new VM, not a template or reconfiguring
if vm_obj is None or scsi_ctl is None:
scsi_ctl = self.device_helper.create_scsi_controller(self.get_scsi_type())
self.change_detected = True
self.configspec.deviceChange.append(scsi_ctl)
disks = [x for x in vm_obj.config.hardware.device if isinstance(x, vim.vm.device.VirtualDisk)] \
if vm_obj is not None else None
if disks is not None and self.params.get('disk') and len(self.params.get('disk')) < len(disks):
self.module.fail_json(msg="Provided disks configuration has fewer disks than "
"the target object (%d vs %d)" % (len(self.params.get('disk')), len(disks)))
disk_index = 0
for expected_disk_spec in self.params.get('disk'):
disk_modified = False
# If we are manipulating an existing object which has disks and disk_index is within range
if vm_obj is not None and disks is not None and disk_index < len(disks):
diskspec = vim.vm.device.VirtualDeviceSpec()
# set the operation to edit so that it knows to keep other settings
diskspec.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
diskspec.device = disks[disk_index]
else:
diskspec = self.device_helper.create_scsi_disk(scsi_ctl, disk_index)
disk_modified = True
# increment index for next disk search
disk_index += 1
# index 7 is reserved for the SCSI controller
if disk_index == 7:
disk_index += 1
if 'disk_mode' in expected_disk_spec:
disk_mode = expected_disk_spec.get('disk_mode', 'persistent').lower()
valid_disk_mode = ['persistent', 'independent_persistent', 'independent_nonpersistent']
if disk_mode not in valid_disk_mode:
self.module.fail_json(msg="disk_mode specified is not valid."
" Should be one of ['%s']" % "', '".join(valid_disk_mode))
if (vm_obj and diskspec.device.backing.diskMode != disk_mode) or (vm_obj is None):
diskspec.device.backing.diskMode = disk_mode
disk_modified = True
else:
diskspec.device.backing.diskMode = "persistent"
# is it thin?
if 'type' in expected_disk_spec:
disk_type = expected_disk_spec.get('type', '').lower()
if disk_type == 'thin':
diskspec.device.backing.thinProvisioned = True
elif disk_type == 'eagerzeroedthick':
diskspec.device.backing.eagerlyScrub = True
if 'filename' in expected_disk_spec and expected_disk_spec['filename'] is not None:
self.add_existing_vmdk(vm_obj, expected_disk_spec, diskspec, scsi_ctl)
continue
elif vm_obj is None or self.params['template']:
# We are creating new VM or from Template
# Only create virtual device if not backed by vmdk in original template
if diskspec.device.backing.fileName == '':
diskspec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
# which datastore?
if expected_disk_spec.get('datastore'):
# TODO: This is already handled by the relocation spec,
# but it needs to eventually be handled for all the
# other disks defined
pass
kb = self.get_configured_disk_size(expected_disk_spec)
# VMware doesn't allow reducing disk sizes
if kb < diskspec.device.capacityInKB:
self.module.fail_json(
msg="Given disk size is smaller than found (%d < %d). Reducing disks is not allowed." %
(kb, diskspec.device.capacityInKB))
if kb != diskspec.device.capacityInKB or disk_modified:
diskspec.device.capacityInKB = kb
self.configspec.deviceChange.append(diskspec)
self.change_detected = True
def select_host(self):
hostsystem = self.cache.get_esx_host(self.params['esxi_hostname'])
if not hostsystem:
self.module.fail_json(msg='Failed to find ESX host "%(esxi_hostname)s"' % self.params)
if hostsystem.runtime.connectionState != 'connected' or hostsystem.runtime.inMaintenanceMode:
self.module.fail_json(msg='ESXi "%(esxi_hostname)s" is in invalid state or in maintenance mode.' % self.params)
return hostsystem
def autoselect_datastore(self):
datastore = None
datastores = self.cache.get_all_objs(self.content, [vim.Datastore])
if datastores is None or len(datastores) == 0:
self.module.fail_json(msg="Unable to find a datastore list when autoselecting")
datastore_freespace = 0
for ds in datastores:
if ds.summary.freeSpace > datastore_freespace:
datastore = ds
datastore_freespace = ds.summary.freeSpace
return datastore
def get_recommended_datastore(self, datastore_cluster_obj=None):
"""
Function to return Storage DRS recommended datastore from datastore cluster
Args:
datastore_cluster_obj: datastore cluster managed object
Returns: Name of recommended datastore from the given datastore cluster
"""
if datastore_cluster_obj is None:
return None
# Check if Datastore Cluster provided by user is SDRS ready
sdrs_status = datastore_cluster_obj.podStorageDrsEntry.storageDrsConfig.podConfig.enabled
if sdrs_status:
# We can get a storage recommendation only if SDRS is enabled on the given datastore cluster
pod_sel_spec = vim.storageDrs.PodSelectionSpec()
pod_sel_spec.storagePod = datastore_cluster_obj
storage_spec = vim.storageDrs.StoragePlacementSpec()
storage_spec.podSelectionSpec = pod_sel_spec
storage_spec.type = 'create'
try:
rec = self.content.storageResourceManager.RecommendDatastores(storageSpec=storage_spec)
rec_action = rec.recommendations[0].action[0]
return rec_action.destination.name
except Exception:
# There was an error, so fall back to the general workflow
pass
datastore = None
datastore_freespace = 0
for ds in datastore_cluster_obj.childEntity:
if isinstance(ds, vim.Datastore) and ds.summary.freeSpace > datastore_freespace:
# If datastore field is provided, filter destination datastores
datastore = ds
datastore_freespace = ds.summary.freeSpace
if datastore:
return datastore.name
return None
# Choose the destination datastore based on disk parameters, template or autoselection
def select_datastore(self, vm_obj=None):
datastore = None
datastore_name = None
if len(self.params['disk']) != 0:
# TODO: really use the datastore for newly created disks
if 'autoselect_datastore' in self.params['disk'][0] and self.params['disk'][0]['autoselect_datastore']:
datastores = self.cache.get_all_objs(self.content, [vim.Datastore])
datastores = [x for x in datastores if self.cache.get_parent_datacenter(x).name == self.params['datacenter']]
if datastores is None or len(datastores) == 0:
self.module.fail_json(msg="Unable to find a datastore list when autoselecting")
datastore_freespace = 0
for ds in datastores:
if (ds.summary.freeSpace > datastore_freespace) or (ds.summary.freeSpace == datastore_freespace and not datastore):
# If datastore field is provided, filter destination datastores
if 'datastore' in self.params['disk'][0] and \
isinstance(self.params['disk'][0]['datastore'], str) and \
ds.name.find(self.params['disk'][0]['datastore']) < 0:
continue
datastore = ds
datastore_name = datastore.name
datastore_freespace = ds.summary.freeSpace
elif 'datastore' in self.params['disk'][0]:
datastore_name = self.params['disk'][0]['datastore']
# Check if user has provided datastore cluster first
datastore_cluster = self.cache.find_obj(self.content, [vim.StoragePod], datastore_name)
if datastore_cluster:
# User specified a datastore cluster, so get the recommended datastore
datastore_name = self.get_recommended_datastore(datastore_cluster_obj=datastore_cluster)
# Check whether the recommended or user-specified datastore exists
datastore = self.cache.find_obj(self.content, [vim.Datastore], datastore_name)
else:
self.module.fail_json(msg="Either datastore or autoselect_datastore should be provided to select datastore")
if not datastore and self.params['template']:
# use the template's existing DS
disks = [x for x in vm_obj.config.hardware.device if isinstance(x, vim.vm.device.VirtualDisk)]
if disks:
datastore = disks[0].backing.datastore
datastore_name = datastore.name
# validation
if datastore:
dc = self.cache.get_parent_datacenter(datastore)
if dc.name != self.params['datacenter']:
datastore = self.autoselect_datastore()
datastore_name = datastore.name
if not datastore:
if len(self.params['disk']) != 0 or self.params['template'] is None:
self.module.fail_json(msg="Unable to find the datastore with given parameters."
" This could mean that %s is a non-existent virtual machine and the module tried to"
" deploy it as a new virtual machine with no disk. Please specify the disks parameter"
" or specify a template to clone from." % self.params['name'])
self.module.fail_json(msg="Failed to find a matching datastore")
return datastore, datastore_name
def obj_has_parent(self, obj, parent):
if obj is None and parent is None:
raise AssertionError()
current_parent = obj
while True:
if current_parent.name == parent.name:
return True
# Check if we have reached the root folder
moid = current_parent._moId
if moid in ['group-d1', 'ha-folder-root']:
return False
current_parent = current_parent.parent
if current_parent is None:
return False
def get_scsi_type(self):
disk_controller_type = "paravirtual"
# set cpu/memory/etc
if 'hardware' in self.params:
if 'scsi' in self.params['hardware']:
if self.params['hardware']['scsi'] in ['buslogic', 'paravirtual', 'lsilogic', 'lsilogicsas']:
disk_controller_type = self.params['hardware']['scsi']
else:
self.module.fail_json(msg="hardware.scsi attribute should be one of 'buslogic', 'paravirtual', 'lsilogic' or 'lsilogicsas'")
return disk_controller_type
def find_folder(self, searchpath):
""" Walk inventory objects one position of the searchpath at a time """
# split the searchpath so we can iterate through it
paths = [x.replace('/', '') for x in searchpath.split('/')]
paths_total = len(paths) - 1
position = 0
# recursive walk while looking for next element in searchpath
root = self.content.rootFolder
while root and position <= paths_total:
change = False
if hasattr(root, 'childEntity'):
for child in root.childEntity:
if child.name == paths[position]:
root = child
position += 1
change = True
break
elif isinstance(root, vim.Datacenter):
if hasattr(root, 'vmFolder'):
if root.vmFolder.name == paths[position]:
root = root.vmFolder
position += 1
change = True
else:
root = None
if not change:
root = None
return root
def get_resource_pool(self, cluster=None, host=None, resource_pool=None):
""" Get a resource pool, filter on cluster, esxi_hostname or resource_pool if given """
cluster_name = cluster or self.params.get('cluster', None)
host_name = host or self.params.get('esxi_hostname', None)
resource_pool_name = resource_pool or self.params.get('resource_pool', None)
# get the datacenter object
datacenter = find_obj(self.content, [vim.Datacenter], self.params['datacenter'])
if not datacenter:
self.module.fail_json(msg='Unable to find datacenter "%s"' % self.params['datacenter'])
# if cluster is given, get the cluster object
if cluster_name:
cluster = find_obj(self.content, [vim.ComputeResource], cluster_name, folder=datacenter)
if not cluster:
self.module.fail_json(msg='Unable to find cluster "%s"' % cluster_name)
# if host is given, get the cluster object using the host
elif host_name:
host = find_obj(self.content, [vim.HostSystem], host_name, folder=datacenter)
if not host:
self.module.fail_json(msg='Unable to find host "%s"' % host_name)
cluster = host.parent
else:
cluster = None
# get resource pools limiting search to cluster or datacenter
resource_pool = find_obj(self.content, [vim.ResourcePool], resource_pool_name, folder=cluster or datacenter)
if not resource_pool:
if resource_pool_name:
self.module.fail_json(msg='Unable to find resource_pool "%s"' % resource_pool_name)
else:
self.module.fail_json(msg='Unable to find resource pool, need esxi_hostname, resource_pool, or cluster')
return resource_pool
# Create a new VM, either from scratch or by cloning a template
def deploy_vm(self):
# https://github.com/vmware/pyvmomi-community-samples/blob/master/samples/clone_vm.py
# https://www.vmware.com/support/developer/vc-sdk/visdk25pubs/ReferenceGuide/vim.vm.CloneSpec.html
# https://www.vmware.com/support/developer/vc-sdk/visdk25pubs/ReferenceGuide/vim.vm.ConfigSpec.html
# https://www.vmware.com/support/developer/vc-sdk/visdk41pubs/ApiReference/vim.vm.RelocateSpec.html
# FIXME:
# - static IPs
self.folder = self.params.get('folder', None)
if self.folder is None:
self.module.fail_json(msg="Folder is required parameter while deploying new virtual machine")
# Prepend / if it was missing from the folder path, also strip trailing slashes
if not self.folder.startswith('/'):
self.folder = '/%(folder)s' % self.params
self.folder = self.folder.rstrip('/')
datacenter = self.cache.find_obj(self.content, [vim.Datacenter], self.params['datacenter'])
if datacenter is None:
self.module.fail_json(msg='No datacenter named %(datacenter)s was found' % self.params)
dcpath = compile_folder_path_for_object(datacenter)
# Nested folder does not have trailing /
if not dcpath.endswith('/'):
dcpath += '/'
# Check for full path first in case it was already supplied
if (self.folder.startswith(dcpath + self.params['datacenter'] + '/vm') or
self.folder.startswith(dcpath + '/' + self.params['datacenter'] + '/vm')):
fullpath = self.folder
elif self.folder.startswith('/vm/') or self.folder == '/vm':
fullpath = "%s%s%s" % (dcpath, self.params['datacenter'], self.folder)
elif self.folder.startswith('/'):
fullpath = "%s%s/vm%s" % (dcpath, self.params['datacenter'], self.folder)
else:
fullpath = "%s%s/vm/%s" % (dcpath, self.params['datacenter'], self.folder)
f_obj = self.content.searchIndex.FindByInventoryPath(fullpath)
# abort if no strategy was successful
if f_obj is None:
# Add some debugging values in failure.
details = {
'datacenter': datacenter.name,
'datacenter_path': dcpath,
'folder': self.folder,
'full_search_path': fullpath,
}
self.module.fail_json(msg='No folder %s matched in the search path : %s' % (self.folder, fullpath),
details=details)
destfolder = f_obj
if self.params['template']:
vm_obj = self.get_vm_or_template(template_name=self.params['template'])
if vm_obj is None:
self.module.fail_json(msg="Could not find a template named %(template)s" % self.params)
else:
vm_obj = None
# always get a resource_pool
resource_pool = self.get_resource_pool()
# set the destination datastore for VM & disks
if self.params['datastore']:
# Give precedence to datastore value provided by user
# User may want to deploy VM to specific datastore.
datastore_name = self.params['datastore']
# Check if user has provided datastore cluster first
datastore_cluster = self.cache.find_obj(self.content, [vim.StoragePod], datastore_name)
if datastore_cluster:
# User specified a datastore cluster, so get the recommended datastore
datastore_name = self.get_recommended_datastore(datastore_cluster_obj=datastore_cluster)
# Check whether the recommended or user-specified datastore exists
datastore = self.cache.find_obj(self.content, [vim.Datastore], datastore_name)
else:
(datastore, datastore_name) = self.select_datastore(vm_obj)
self.configspec = vim.vm.ConfigSpec()
self.configspec.deviceChange = []
# create the relocation spec
self.relospec = vim.vm.RelocateSpec()
self.relospec.deviceChange = []
self.configure_guestid(vm_obj=vm_obj, vm_creation=True)
self.configure_cpu_and_memory(vm_obj=vm_obj, vm_creation=True)
self.configure_hardware_params(vm_obj=vm_obj)
self.configure_resource_alloc_info(vm_obj=vm_obj)
self.configure_vapp_properties(vm_obj=vm_obj)
self.configure_disks(vm_obj=vm_obj)
self.configure_network(vm_obj=vm_obj)
self.configure_cdrom(vm_obj=vm_obj)
# Find if we need network customizations (find keys in the dictionary that require customization)
network_changes = False
for nw in self.params['networks']:
for key in nw:
# We don't need customizations for these keys
if key not in ('device_type', 'mac', 'name', 'vlan', 'type', 'start_connected'):
network_changes = True
break
if len(self.params['customization']) > 0 or network_changes or self.params.get('customization_spec') is not None:
self.customize_vm(vm_obj=vm_obj)
clonespec = None
clone_method = None
try:
if self.params['template']:
# Only select specific host when ESXi hostname is provided
if self.params['esxi_hostname']:
self.relospec.host = self.select_host()
self.relospec.datastore = datastore
# Convert disks present in the template if 'convert' is set
if self.params['convert']:
for device in vm_obj.config.hardware.device:
if isinstance(device, vim.vm.device.VirtualDisk):
disk_locator = vim.vm.RelocateSpec.DiskLocator()
disk_locator.diskBackingInfo = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
if self.params['convert'] in ['thin']:
disk_locator.diskBackingInfo.thinProvisioned = True
if self.params['convert'] in ['eagerzeroedthick']:
disk_locator.diskBackingInfo.eagerlyScrub = True
if self.params['convert'] in ['thick']:
disk_locator.diskBackingInfo.diskMode = "persistent"
disk_locator.diskId = device.key
disk_locator.datastore = datastore
self.relospec.disk.append(disk_locator)
# https://www.vmware.com/support/developer/vc-sdk/visdk41pubs/ApiReference/vim.vm.RelocateSpec.html
# > pool: For a clone operation from a template to a virtual machine, this argument is required.
self.relospec.pool = resource_pool
linked_clone = self.params.get('linked_clone')
snapshot_src = self.params.get('snapshot_src', None)
if linked_clone:
if snapshot_src is not None:
self.relospec.diskMoveType = vim.vm.RelocateSpec.DiskMoveOptions.createNewChildDiskBacking
else:
self.module.fail_json(msg="Parameters 'linked_clone' and 'snapshot_src' are"
" required together for linked clone operation.")
clonespec = vim.vm.CloneSpec(template=self.params['is_template'], location=self.relospec)
if self.customspec:
clonespec.customization = self.customspec
if snapshot_src is not None:
if vm_obj.snapshot is None:
self.module.fail_json(msg="No snapshots present for virtual machine or template [%(template)s]" % self.params)
snapshot = self.get_snapshots_by_name_recursively(snapshots=vm_obj.snapshot.rootSnapshotList,
snapname=snapshot_src)
if len(snapshot) != 1:
self.module.fail_json(msg='virtual machine "%(template)s" does not contain'
' snapshot named "%(snapshot_src)s"' % self.params)
clonespec.snapshot = snapshot[0].snapshot
clonespec.config = self.configspec
clone_method = 'Clone'
try:
task = vm_obj.Clone(folder=destfolder, name=self.params['name'], spec=clonespec)
except vim.fault.NoPermission as e:
self.module.fail_json(msg="Failed to clone virtual machine %s to folder %s "
"due to permission issue: %s" % (self.params['name'],
destfolder,
to_native(e.msg)))
self.change_detected = True
else:
# A ConfigSpec requires a name for VM creation
self.configspec.name = self.params['name']
self.configspec.files = vim.vm.FileInfo(logDirectory=None,
snapshotDirectory=None,
suspendDirectory=None,
vmPathName="[" + datastore_name + "]")
clone_method = 'CreateVM_Task'
try:
task = destfolder.CreateVM_Task(config=self.configspec, pool=resource_pool)
except vmodl.fault.InvalidRequest as e:
self.module.fail_json(msg="Failed to create virtual machine due to invalid configuration "
"parameter %s" % to_native(e.msg))
except vim.fault.RestrictedVersion as e:
self.module.fail_json(msg="Failed to create virtual machine due to "
"product versioning restrictions: %s" % to_native(e.msg))
self.change_detected = True
self.wait_for_task(task)
except TypeError as e:
self.module.fail_json(msg="TypeError was returned, please ensure to give correct inputs. %s" % to_text(e))
if task.info.state == 'error':
# https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2021361
# https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2173
# provide these to the user for debugging
clonespec_json = serialize_spec(clonespec)
configspec_json = serialize_spec(self.configspec)
kwargs = {
'changed': self.change_applied,
'failed': True,
'msg': task.info.error.msg,
'clonespec': clonespec_json,
'configspec': configspec_json,
'clone_method': clone_method
}
return kwargs
else:
# set annotation
vm = task.info.result
if self.params['annotation']:
annotation_spec = vim.vm.ConfigSpec()
annotation_spec.annotation = str(self.params['annotation'])
task = vm.ReconfigVM_Task(annotation_spec)
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'annotation'}
if self.params['customvalues']:
self.customize_customvalues(vm_obj=vm)
if self.params['wait_for_ip_address'] or self.params['wait_for_customization'] or self.params['state'] in ['poweredon', 'restarted']:
set_vm_power_state(self.content, vm, 'poweredon', force=False)
if self.params['wait_for_ip_address']:
self.wait_for_vm_ip(vm)
if self.params['wait_for_customization']:
is_customization_ok = self.wait_for_customization(vm)
if not is_customization_ok:
vm_facts = self.gather_facts(vm)
return {'changed': self.change_applied, 'failed': True, 'instance': vm_facts, 'op': 'customization'}
vm_facts = self.gather_facts(vm)
return {'changed': self.change_applied, 'failed': False, 'instance': vm_facts}
def get_snapshots_by_name_recursively(self, snapshots, snapname):
snap_obj = []
for snapshot in snapshots:
if snapshot.name == snapname:
snap_obj.append(snapshot)
else:
snap_obj = snap_obj + self.get_snapshots_by_name_recursively(snapshot.childSnapshotList, snapname)
return snap_obj
# Reconfigure an existing VM in place and relocate it if needed
def reconfigure_vm(self):
self.configspec = vim.vm.ConfigSpec()
self.configspec.deviceChange = []
# create the relocation spec
self.relospec = vim.vm.RelocateSpec()
self.relospec.deviceChange = []
self.configure_guestid(vm_obj=self.current_vm_obj)
self.configure_cpu_and_memory(vm_obj=self.current_vm_obj)
self.configure_hardware_params(vm_obj=self.current_vm_obj)
self.configure_disks(vm_obj=self.current_vm_obj)
self.configure_network(vm_obj=self.current_vm_obj)
self.configure_cdrom(vm_obj=self.current_vm_obj)
self.customize_customvalues(vm_obj=self.current_vm_obj)
self.configure_resource_alloc_info(vm_obj=self.current_vm_obj)
self.configure_vapp_properties(vm_obj=self.current_vm_obj)
if self.params['annotation'] and self.current_vm_obj.config.annotation != self.params['annotation']:
self.configspec.annotation = str(self.params['annotation'])
self.change_detected = True
if self.params['resource_pool']:
self.relospec.pool = self.get_resource_pool()
if self.relospec.pool != self.current_vm_obj.resourcePool:
task = self.current_vm_obj.RelocateVM_Task(spec=self.relospec)
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'relocate'}
# Only send VMware task if we see a modification
if self.change_detected:
task = None
try:
task = self.current_vm_obj.ReconfigVM_Task(spec=self.configspec)
except vim.fault.RestrictedVersion as e:
self.module.fail_json(msg="Failed to reconfigure virtual machine due to"
" product versioning restrictions: %s" % to_native(e.msg))
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'reconfig'}
# Rename VM
if self.params['uuid'] and self.params['name'] and self.params['name'] != self.current_vm_obj.config.name:
task = self.current_vm_obj.Rename_Task(self.params['name'])
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'rename'}
# Mark VM as Template
if self.params['is_template'] and not self.current_vm_obj.config.template:
try:
self.current_vm_obj.MarkAsTemplate()
self.change_applied = True
except vmodl.fault.NotSupported as e:
self.module.fail_json(msg="Failed to mark virtual machine [%s] "
"as template: %s" % (self.params['name'], e.msg))
# Mark Template as VM
elif not self.params['is_template'] and self.current_vm_obj.config.template:
resource_pool = self.get_resource_pool()
kwargs = dict(pool=resource_pool)
if self.params.get('esxi_hostname', None):
host_system_obj = self.select_host()
kwargs.update(host=host_system_obj)
try:
self.current_vm_obj.MarkAsVirtualMachine(**kwargs)
self.change_applied = True
except vim.fault.InvalidState as invalid_state:
self.module.fail_json(msg="Virtual machine is not marked"
" as template : %s" % to_native(invalid_state.msg))
except vim.fault.InvalidDatastore as invalid_ds:
self.module.fail_json(msg="Converting template to virtual machine"
" operation cannot be performed on the"
" target datastores: %s" % to_native(invalid_ds.msg))
except vim.fault.CannotAccessVmComponent as cannot_access:
self.module.fail_json(msg="Failed to convert template to virtual machine"
" as operation unable access virtual machine"
" component: %s" % to_native(cannot_access.msg))
except vmodl.fault.InvalidArgument as invalid_argument:
self.module.fail_json(msg="Failed to convert template to virtual machine"
" due to : %s" % to_native(invalid_argument.msg))
except Exception as generic_exc:
self.module.fail_json(msg="Failed to convert template to virtual machine"
" due to generic error : %s" % to_native(generic_exc))
# Automatically update VMware UUID when converting template to VM.
# This avoids an interactive prompt during VM startup.
uuid_action = [x for x in self.current_vm_obj.config.extraConfig if x.key == "uuid.action"]
if not uuid_action:
uuid_action_opt = vim.option.OptionValue()
uuid_action_opt.key = "uuid.action"
uuid_action_opt.value = "create"
self.configspec.extraConfig.append(uuid_action_opt)
self.change_detected = True
# add customize existing VM after VM re-configure
if 'existing_vm' in self.params['customization'] and self.params['customization']['existing_vm']:
if self.current_vm_obj.config.template:
self.module.fail_json(msg="VM is template, not support guest OS customization.")
if self.current_vm_obj.runtime.powerState != vim.VirtualMachinePowerState.poweredOff:
self.module.fail_json(msg="VM is not in poweroff state, can not do guest OS customization.")
cus_result = self.customize_exist_vm()
if cus_result['failed']:
return cus_result
vm_facts = self.gather_facts(self.current_vm_obj)
return {'changed': self.change_applied, 'failed': False, 'instance': vm_facts}
def customize_exist_vm(self):
task = None
# Find out if we need network customizations (keys in the dictionary that require customization)
network_changes = False
for nw in self.params['networks']:
for key in nw:
# We don't need customizations for these keys
if key not in ('device_type', 'mac', 'name', 'vlan', 'type', 'start_connected'):
network_changes = True
break
if len(self.params['customization']) > 1 or network_changes or self.params.get('customization_spec'):
self.customize_vm(vm_obj=self.current_vm_obj)
try:
task = self.current_vm_obj.CustomizeVM_Task(self.customspec)
except vim.fault.CustomizationFault as e:
self.module.fail_json(msg="Failed to customization virtual machine due to CustomizationFault: %s" % to_native(e.msg))
except vim.fault.RuntimeFault as e:
self.module.fail_json(msg="failed to customization virtual machine due to RuntimeFault: %s" % to_native(e.msg))
except Exception as e:
self.module.fail_json(msg="failed to customization virtual machine due to fault: %s" % to_native(e.msg))
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'customize_exist'}
if self.params['wait_for_customization']:
set_vm_power_state(self.content, self.current_vm_obj, 'poweredon', force=False)
is_customization_ok = self.wait_for_customization(self.current_vm_obj)
if not is_customization_ok:
return {'changed': self.change_applied, 'failed': True, 'op': 'wait_for_customize_exist'}
return {'changed': self.change_applied, 'failed': False}
def wait_for_task(self, task, poll_interval=1):
"""
Wait for a VMware task to complete. Terminal states are 'error' and 'success'.
Inputs:
- task: the task to wait for
- poll_interval: polling interval to check the task, in seconds
Modifies:
- self.change_applied
"""
# https://www.vmware.com/support/developer/vc-sdk/visdk25pubs/ReferenceGuide/vim.Task.html
# https://www.vmware.com/support/developer/vc-sdk/visdk25pubs/ReferenceGuide/vim.TaskInfo.html
# https://github.com/virtdevninja/pyvmomi-community-samples/blob/master/samples/tools/tasks.py
while task.info.state not in ['error', 'success']:
time.sleep(poll_interval)
self.change_applied = self.change_applied or task.info.state == 'success'
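The polling contract in the docstring can be exercised in isolation with a fake task object. `FakeTask`/`FakeInfo` below are hypothetical stand-ins for `vim.Task`/`vim.TaskInfo`, not VMware types; each read of `state` consumes one queued transition until the terminal state is reached.

```python
# Hypothetical stand-ins for vim.Task / vim.TaskInfo to demonstrate the
# poll-until-terminal-state loop used by wait_for_task().
class FakeInfo:
    def __init__(self, states):
        self._states = states
    @property
    def state(self):
        # consume one queued state per read, then stick on the last one
        return self._states.pop(0) if len(self._states) > 1 else self._states[0]

class FakeTask:
    def __init__(self, states):
        self.info = FakeInfo(states)

def wait(task):
    # terminal states are 'error' and 'success'; real code sleeps between polls
    while task.info.state not in ('error', 'success'):
        pass
    return task.info.state == 'success'

print(wait(FakeTask(['running', 'running', 'success'])))  # True
```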
def wait_for_vm_ip(self, vm, poll=100, sleep=5):
ips = None
facts = {}
thispoll = 0
while not ips and thispoll <= poll:
newvm = self.get_vm()
facts = self.gather_facts(newvm)
if facts['ipv4'] or facts['ipv6']:
ips = True
else:
time.sleep(sleep)
thispoll += 1
return facts
def get_vm_events(self, vm, eventTypeIdList):
byEntity = vim.event.EventFilterSpec.ByEntity(entity=vm, recursion="self")
filterSpec = vim.event.EventFilterSpec(entity=byEntity, eventTypeId=eventTypeIdList)
eventManager = self.content.eventManager
return eventManager.QueryEvent(filterSpec)
def wait_for_customization(self, vm, poll=10000, sleep=10):
thispoll = 0
while thispoll <= poll:
eventStarted = self.get_vm_events(vm, ['CustomizationStartedEvent'])
if len(eventStarted):
thispoll = 0
while thispoll <= poll:
eventsFinishedResult = self.get_vm_events(vm, ['CustomizationSucceeded', 'CustomizationFailed'])
if len(eventsFinishedResult):
if not isinstance(eventsFinishedResult[0], vim.event.CustomizationSucceeded):
self.module.fail_json(msg='Customization failed with error {0}:\n{1}'.format(
eventsFinishedResult[0]._wsdlName, eventsFinishedResult[0].fullFormattedMessage))
return False
break
else:
time.sleep(sleep)
thispoll += 1
return True
else:
time.sleep(sleep)
thispoll += 1
self.module.fail_json(msg='Waiting for customization to complete timed out.')
return False
def main():
argument_spec = vmware_argument_spec()
argument_spec.update(
state=dict(type='str', default='present',
choices=['absent', 'poweredoff', 'poweredon', 'present', 'rebootguest', 'restarted', 'shutdownguest', 'suspended']),
template=dict(type='str', aliases=['template_src']),
is_template=dict(type='bool', default=False),
annotation=dict(type='str', aliases=['notes']),
customvalues=dict(type='list', default=[]),
name=dict(type='str'),
name_match=dict(type='str', choices=['first', 'last'], default='first'),
uuid=dict(type='str'),
use_instance_uuid=dict(type='bool', default=False),
folder=dict(type='str'),
guest_id=dict(type='str'),
disk=dict(type='list', default=[]),
cdrom=dict(type='dict', default={}),
hardware=dict(type='dict', default={}),
force=dict(type='bool', default=False),
datacenter=dict(type='str', default='ha-datacenter'),
esxi_hostname=dict(type='str'),
cluster=dict(type='str'),
wait_for_ip_address=dict(type='bool', default=False),
state_change_timeout=dict(type='int', default=0),
snapshot_src=dict(type='str'),
linked_clone=dict(type='bool', default=False),
networks=dict(type='list', default=[]),
resource_pool=dict(type='str'),
customization=dict(type='dict', default={}, no_log=True),
customization_spec=dict(type='str', default=None),
wait_for_customization=dict(type='bool', default=False),
vapp_properties=dict(type='list', default=[]),
datastore=dict(type='str'),
convert=dict(type='str', choices=['thin', 'thick', 'eagerzeroedthick']),
)
module = AnsibleModule(argument_spec=argument_spec,
supports_check_mode=True,
mutually_exclusive=[
['cluster', 'esxi_hostname'],
],
required_one_of=[
['name', 'uuid'],
],
)
result = {'failed': False, 'changed': False}
pyv = PyVmomiHelper(module)
# Check if the VM exists before continuing
vm = pyv.get_vm()
# VM already exists
if vm:
if module.params['state'] == 'absent':
# destroy it
if module.check_mode:
result.update(
vm_name=vm.name,
changed=True,
current_powerstate=vm.summary.runtime.powerState.lower(),
desired_operation='remove_vm',
)
module.exit_json(**result)
if module.params['force']:
# has to be poweredoff first
set_vm_power_state(pyv.content, vm, 'poweredoff', module.params['force'])
result = pyv.remove_vm(vm)
elif module.params['state'] == 'present':
if module.check_mode:
result.update(
vm_name=vm.name,
changed=True,
desired_operation='reconfigure_vm',
)
module.exit_json(**result)
result = pyv.reconfigure_vm()
elif module.params['state'] in ['poweredon', 'poweredoff', 'restarted', 'suspended', 'shutdownguest', 'rebootguest']:
if module.check_mode:
result.update(
vm_name=vm.name,
changed=True,
current_powerstate=vm.summary.runtime.powerState.lower(),
desired_operation='set_vm_power_state',
)
module.exit_json(**result)
# set powerstate
tmp_result = set_vm_power_state(pyv.content, vm, module.params['state'], module.params['force'], module.params['state_change_timeout'])
if tmp_result['changed']:
result["changed"] = True
if module.params['state'] in ['poweredon', 'restarted', 'rebootguest'] and module.params['wait_for_ip_address']:
wait_result = wait_for_vm_ip(pyv.content, vm)
if not wait_result:
module.fail_json(msg='Waiting for IP address timed out')
tmp_result['instance'] = wait_result
if not tmp_result["failed"]:
result["failed"] = False
result['instance'] = tmp_result['instance']
if tmp_result["failed"]:
result["failed"] = True
result["msg"] = tmp_result["msg"]
else:
# This should not happen
raise AssertionError()
# VM doesn't exist
else:
if module.params['state'] in ['poweredon', 'poweredoff', 'present', 'restarted', 'suspended']:
if module.check_mode:
result.update(
changed=True,
desired_operation='deploy_vm',
)
module.exit_json(**result)
result = pyv.deploy_vm()
if result['failed']:
module.fail_json(msg='Failed to create a virtual machine: %s' % result['msg'])
if result['failed']:
module.fail_json(**result)
else:
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,541 |
vmware_guest: autoselect_datastore=True should select a datastore reachable on the ESXi
|
##### SUMMARY
The following snippet from https://github.com/ansible/ansible/blob/devel/test/integration/targets/vmware_guest/tasks/create_d1_c1_f0.yml uses `autoselect_datastore: True`.
```yaml
- name: create new VMs
vmware_guest:
validate_certs: False
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: newvm_1
#template: "{{ item|basename }}"
guest_id: centos64Guest
datacenter: "{{ dc1 }}"
hardware:
num_cpus: 1
num_cpu_cores_per_socket: 1
memory_mb: 256
hotadd_memory: true
hotadd_cpu: false
max_connections: 10
disk:
- size: 1gb
type: thin
autoselect_datastore: True
state: poweredoff
folder: F0
```
One of my two datastores is dedicated to ISO images and is therefore readOnly, but it still gets selected consistently. I use the following hack to avoid the problem:
```diff
diff --git a/lib/ansible/modules/cloud/vmware/vmware_guest.py b/lib/ansible/modules/cloud/vmware/vmware_guest.py
index 6a63e97798..3648e3e87f 100644
--- a/lib/ansible/modules/cloud/vmware/vmware_guest.py
+++ b/lib/ansible/modules/cloud/vmware/vmware_guest.py
@@ -1925,6 +1925,15 @@ class PyVmomiHelper(PyVmomi):
datastore_freespace = 0
for ds in datastores:
+ is_readonly = False
+ for h in ds.host:
+ if h.mountInfo.accessMode == 'readOnly':
+ is_readonly = True
+ break
+
+ if is_readonly:
+ continue
+
if (ds.summary.freeSpace > datastore_freespace) or (ds.summary.freeSpace == datastore_freespace and not datastore):
# If datastore field is provided, filter destination datastores
if 'datastore' in self.params['disk'][0] and \
```
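The fix amounts to skipping any datastore that at least one host mounts read-only before comparing free space. A self-contained sketch of that selection follows; the `DS`, `Host`, `Mount`, and `Summary` classes are hypothetical stand-ins for the pyVmomi objects, not real API types.

```python
# Hypothetical stand-ins for the pyVmomi datastore / host-mount objects.
class Mount:
    def __init__(self, access_mode):
        self.accessMode = access_mode

class Host:
    def __init__(self, access_mode):
        self.mountInfo = Mount(access_mode)

class Summary:
    def __init__(self, free):
        self.freeSpace = free

class DS:
    def __init__(self, name, free, modes):
        self.name = name
        self.summary = Summary(free)
        self.host = [Host(m) for m in modes]

def pick_datastore(datastores):
    """Return the writable datastore with the most free space, or None."""
    best = None
    for ds in datastores:
        if any(h.mountInfo.accessMode == 'readOnly' for h in ds.host):
            continue  # skip datastores that any host mounts read-only
        if best is None or ds.summary.freeSpace > best.summary.freeSpace:
            best = ds
    return best

stores = [DS('iso', 500, ['readOnly']), DS('vmfs1', 200, ['readWrite'])]
print(pick_datastore(stores).name)  # vmfs1
```

Even though the ISO datastore has more free space, it is excluded because one of its host mounts is read-only.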
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware_guest
|
https://github.com/ansible/ansible/issues/58541
|
https://github.com/ansible/ansible/pull/58872
|
57dc7ec265bbc741126fa46e44ff3bb6adae5624
|
647b78a09cc45df0c25eaf794c06915f3e2ee9c5
| 2019-06-29T02:40:20Z |
python
| 2019-08-09T02:26:52Z |
test/lib/ansible_test/_internal/cloud/vcenter.py
|
"""VMware vCenter plugin for integration tests."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import time
from . import (
CloudProvider,
CloudEnvironment,
CloudEnvironmentConfig,
)
from ..util import (
find_executable,
display,
ApplicationError,
is_shippable,
ConfigParser,
SubprocessError,
)
from ..docker_util import (
docker_run,
docker_rm,
docker_inspect,
docker_pull,
get_docker_container_id,
)
from ..core_ci import (
AnsibleCoreCI,
)
from ..http import (
HttpClient,
)
class VcenterProvider(CloudProvider):
"""VMware vcenter/esx plugin. Sets up cloud resources for tests."""
DOCKER_SIMULATOR_NAME = 'vcenter-simulator'
def __init__(self, args):
"""
:type args: TestConfig
"""
super(VcenterProvider, self).__init__(args)
# The simulator must be pinned to a specific version to guarantee CI passes with the version used.
if os.environ.get('ANSIBLE_VCSIM_CONTAINER'):
self.image = os.environ.get('ANSIBLE_VCSIM_CONTAINER')
else:
self.image = 'quay.io/ansible/vcenter-test-container:1.5.0'
self.container_name = ''
# VMware tests can be run on govcsim or baremetal, either BYO with a static config
# file or hosted in worldstream. Using an env var value of 'worldstream' with appropriate
# CI credentials will deploy a dynamic baremetal environment. The simulator is the default
# if no other config if provided.
self.vmware_test_platform = os.environ.get('VMWARE_TEST_PLATFORM', '')
self.aci = None
self.insecure = False
self.endpoint = ''
self.hostname = ''
self.port = 443
self.proxy = None
def filter(self, targets, exclude):
"""Filter out the cloud tests when the necessary config and resources are not available.
:type targets: tuple[TestTarget]
:type exclude: list[str]
"""
if self.vmware_test_platform in (None, '', 'govcsim'):
docker = find_executable('docker', required=False)
if docker:
return
skip = 'cloud/%s/' % self.platform
skipped = [target.name for target in targets if skip in target.aliases]
if skipped:
exclude.append(skip)
display.warning('Excluding tests marked "%s" which require the "docker" command: %s'
% (skip.rstrip('/'), ', '.join(skipped)))
else:
if os.path.isfile(self.config_static_path):
return
aci = self._create_ansible_core_ci()
if os.path.isfile(aci.ci_key):
return
if is_shippable():
return
super(VcenterProvider, self).filter(targets, exclude)
def setup(self):
"""Setup the cloud resource before delegation and register a cleanup callback."""
super(VcenterProvider, self).setup()
self._set_cloud_config('vmware_test_platform', self.vmware_test_platform)
if self._use_static_config():
self._set_cloud_config('vmware_test_platform', 'static')
self._setup_static()
elif self.vmware_test_platform == 'worldstream':
self._setup_dynamic_baremetal()
else:
self._setup_dynamic_simulator()
def get_docker_run_options(self):
"""Get any additional options needed when delegating tests to a docker container.
:rtype: list[str]
"""
if self.managed and self.vmware_test_platform != 'worldstream':
return ['--link', self.DOCKER_SIMULATOR_NAME]
return []
def cleanup(self):
"""Clean up the cloud resource and any temporary configuration files after tests complete."""
if self.vmware_test_platform == 'worldstream':
if self.aci:
self.aci.stop()
if self.container_name:
docker_rm(self.args, self.container_name)
super(VcenterProvider, self).cleanup()
def _setup_dynamic_simulator(self):
"""Create a vcenter simulator using docker."""
container_id = get_docker_container_id()
if container_id:
display.info('Running in docker container: %s' % container_id, verbosity=1)
self.container_name = self.DOCKER_SIMULATOR_NAME
results = docker_inspect(self.args, self.container_name)
if results and not results[0].get('State', {}).get('Running'):
docker_rm(self.args, self.container_name)
results = []
if results:
display.info('Using the existing vCenter simulator docker container.', verbosity=1)
else:
display.info('Starting a new vCenter simulator docker container.', verbosity=1)
if not self.args.docker and not container_id:
# publish the simulator ports when not running inside docker
publish_ports = [
'-p', '80:80',
'-p', '443:443',
'-p', '8080:8080',
'-p', '8989:8989',
'-p', '5000:5000', # control port for flask app in simulator
]
else:
publish_ports = []
if not os.environ.get('ANSIBLE_VCSIM_CONTAINER'):
docker_pull(self.args, self.image)
docker_run(
self.args,
self.image,
['-d', '--name', self.container_name] + publish_ports,
)
if self.args.docker:
vcenter_host = self.DOCKER_SIMULATOR_NAME
elif container_id:
vcenter_host = self._get_simulator_address()
display.info('Found vCenter simulator container address: %s' % vcenter_host, verbosity=1)
else:
vcenter_host = 'localhost'
self._set_cloud_config('vcenter_host', vcenter_host)
def _get_simulator_address(self):
results = docker_inspect(self.args, self.container_name)
ipaddress = results[0]['NetworkSettings']['IPAddress']
return ipaddress
def _setup_dynamic_baremetal(self):
"""Request Esxi credentials through the Ansible Core CI service."""
display.info('Provisioning %s cloud environment.' % self.platform,
verbosity=1)
config = self._read_config_template()
aci = self._create_ansible_core_ci()
if not self.args.explain:
response = aci.start()
self.aci = aci
config = self._populate_config_template(config, response)
self._write_config(config)
def _create_ansible_core_ci(self):
"""
:rtype: AnsibleCoreCI
"""
return AnsibleCoreCI(self.args, 'vmware', 'vmware',
persist=False, stage=self.args.remote_stage,
provider='vmware')
def _setup_static(self):
parser = ConfigParser({
'vcenter_port': '443',
'vmware_proxy_host': '',
'vmware_proxy_port': ''})
parser.read(self.config_static_path)
self.endpoint = parser.get('DEFAULT', 'vcenter_hostname')
self.port = parser.get('DEFAULT', 'vcenter_port')
if parser.get('DEFAULT', 'vmware_validate_certs').lower() in ('no', 'false'):
self.insecure = True
proxy_host = parser.get('DEFAULT', 'vmware_proxy_host')
proxy_port = int(parser.get('DEFAULT', 'vmware_proxy_port'))
if proxy_host and proxy_port:
self.proxy = 'http://%s:%d' % (proxy_host, proxy_port)
self._wait_for_service()
def _wait_for_service(self):
"""Wait for the vCenter service endpoint to accept connections."""
if self.args.explain:
return
client = HttpClient(self.args, always=True, insecure=self.insecure, proxy=self.proxy)
endpoint = 'https://%s:%s' % (self.endpoint, self.port)
for i in range(1, 30):
display.info('Waiting for vCenter service: %s' % endpoint, verbosity=1)
try:
client.get(endpoint)
return
except SubprocessError:
pass
time.sleep(10)
raise ApplicationError('Timeout waiting for vCenter service.')
class VcenterEnvironment(CloudEnvironment):
"""VMware vcenter/esx environment plugin. Updates integration test environment after delegation."""
def get_environment_config(self):
"""
:rtype: CloudEnvironmentConfig
"""
vmware_test_platform = self._get_cloud_config('vmware_test_platform')
if vmware_test_platform in ('worldstream', 'static'):
parser = ConfigParser()
parser.read(self.config_path)
# Most of the test cases use ansible_vars, but we plan to refactor these
# to use env_vars; output both for now
env_vars = dict(
(key.upper(), value) for key, value in parser.items('DEFAULT', raw=True))
ansible_vars = dict(
resource_prefix=self.resource_prefix,
)
ansible_vars.update(dict(parser.items('DEFAULT', raw=True)))
else:
env_vars = dict(
VCENTER_HOST=self._get_cloud_config('vcenter_host'),
)
ansible_vars = dict(
vcsim=self._get_cloud_config('vcenter_host'),
)
return CloudEnvironmentConfig(
env_vars=env_vars,
ansible_vars=ansible_vars,
)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 60,227 |
Move remaining ansible-test files into test/lib/
|
##### SUMMARY
Move the remaining files needed by `ansible-test` into the `test/lib/ansible_test/` directory. This is required to implement: https://github.com/ansible/ansible/issues/59884
##### ISSUE TYPE
Feature Idea
##### COMPONENT NAME
ansible-test
|
https://github.com/ansible/ansible/issues/60227
|
https://github.com/ansible/ansible/pull/60297
|
07051473f82faea238084fe653264fd24a24ff46
|
39b3fc0926645b26d60588af6a068c7570b778cf
| 2019-08-07T17:58:13Z |
python
| 2019-08-09T04:34:38Z |
docs/docsite/rst/dev_guide/testing/sanity/bin-symlinks.rst
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 60,227 |
Move remaining ansible-test files into test/lib/
|
##### SUMMARY
Move the remaining files needed by `ansible-test` into the `test/lib/ansible_test/` directory. This is required to implement: https://github.com/ansible/ansible/issues/59884
##### ISSUE TYPE
Feature Idea
##### COMPONENT NAME
ansible-test
|
https://github.com/ansible/ansible/issues/60227
|
https://github.com/ansible/ansible/pull/60297
|
07051473f82faea238084fe653264fd24a24ff46
|
39b3fc0926645b26d60588af6a068c7570b778cf
| 2019-08-07T17:58:13Z |
python
| 2019-08-09T04:34:38Z |
test/lib/ansible_test/_internal/ansible_util.py
|
"""Miscellaneous utility functions and classes specific to ansible cli tools."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import json
import os
from .constants import (
SOFT_RLIMIT_NOFILE,
)
from .util import (
common_environment,
display,
find_python,
ApplicationError,
ANSIBLE_ROOT,
ANSIBLE_LIB_ROOT,
ANSIBLE_TEST_DATA_ROOT,
)
from .util_common import (
run_command,
)
from .config import (
IntegrationConfig,
EnvironmentConfig,
)
from .data import (
data_context,
)
CHECK_YAML_VERSIONS = {}
def ansible_environment(args, color=True, ansible_config=None):
"""
:type args: CommonConfig
:type color: bool
:type ansible_config: str | None
:rtype: dict[str, str]
"""
env = common_environment()
path = env['PATH']
ansible_path = os.path.join(ANSIBLE_ROOT, 'bin')
if not path.startswith(ansible_path + os.path.pathsep):
path = ansible_path + os.path.pathsep + path
if ansible_config:
pass
elif isinstance(args, IntegrationConfig):
ansible_config = os.path.join(ANSIBLE_ROOT, 'test/integration/%s.cfg' % args.command)
else:
ansible_config = os.path.join(ANSIBLE_TEST_DATA_ROOT, '%s/ansible.cfg' % args.command)
if not args.explain and not os.path.exists(ansible_config):
raise ApplicationError('Configuration not found: %s' % ansible_config)
ansible = dict(
ANSIBLE_PYTHON_MODULE_RLIMIT_NOFILE=str(SOFT_RLIMIT_NOFILE),
ANSIBLE_FORCE_COLOR='%s' % 'true' if args.color and color else 'false',
ANSIBLE_DEPRECATION_WARNINGS='false',
ANSIBLE_HOST_KEY_CHECKING='false',
ANSIBLE_RETRY_FILES_ENABLED='false',
ANSIBLE_CONFIG=os.path.abspath(ansible_config),
ANSIBLE_LIBRARY='/dev/null',
PYTHONPATH=os.path.dirname(ANSIBLE_LIB_ROOT),
PAGER='/bin/cat',
PATH=path,
)
env.update(ansible)
if args.debug:
env.update(dict(
ANSIBLE_DEBUG='true',
ANSIBLE_LOG_PATH=os.path.abspath('test/results/logs/debug.log'),
))
if data_context().content.collection:
env.update(dict(
ANSIBLE_COLLECTIONS_PATHS=data_context().content.collection.root,
))
return env
def check_pyyaml(args, version):
"""
:type args: EnvironmentConfig
:type version: str
"""
if version in CHECK_YAML_VERSIONS:
return
python = find_python(version)
stdout, _dummy = run_command(args, [python, os.path.join(ANSIBLE_TEST_DATA_ROOT, 'yamlcheck.py')], capture=True)
if args.explain:
return
CHECK_YAML_VERSIONS[version] = result = json.loads(stdout)
yaml = result['yaml']
cloader = result['cloader']
if not yaml:
display.warning('PyYAML is not installed for interpreter: %s' % python)
elif not cloader:
display.warning('PyYAML will be slow due to installation without libyaml support for interpreter: %s' % python)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 60,227 |
Move remaining ansible-test files into test/lib/
|
##### SUMMARY
Move the remaining files needed by `ansible-test` into the `test/lib/ansible_test/` directory. This is required to implement: https://github.com/ansible/ansible/issues/59884
##### ISSUE TYPE
Feature Idea
##### COMPONENT NAME
ansible-test
|
https://github.com/ansible/ansible/issues/60227
|
https://github.com/ansible/ansible/pull/60297
|
07051473f82faea238084fe653264fd24a24ff46
|
39b3fc0926645b26d60588af6a068c7570b778cf
| 2019-08-07T17:58:13Z |
python
| 2019-08-09T04:34:38Z |
test/lib/ansible_test/_internal/data.py
|
"""Context information for the current invocation of ansible-test."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
from . import types as t
from .util import (
ApplicationError,
import_plugins,
ANSIBLE_ROOT,
is_subdir,
ANSIBLE_IS_INSTALLED,
)
from .provider import (
find_path_provider,
get_path_provider_classes,
ProviderNotFoundForPath,
)
from .provider.source import (
SourceProvider,
)
from .provider.source.unversioned import (
UnversionedSource,
)
from .provider.layout import (
ContentLayout,
InstallLayout,
LayoutProvider,
)
class UnexpectedSourceRoot(ApplicationError):
"""Exception generated when a source root is found below a layout root."""
def __init__(self, source_root, layout_root): # type: (str, str) -> None
super(UnexpectedSourceRoot, self).__init__('Source root "%s" cannot be below layout root "%s".' % (source_root, layout_root))
self.source_root = source_root
self.layout_root = layout_root
class DataContext:
"""Data context providing details about the current execution environment for ansible-test."""
def __init__(self):
content_path = os.environ.get('ANSIBLE_TEST_CONTENT_ROOT')
current_path = os.getcwd()
self.__layout_providers = get_path_provider_classes(LayoutProvider)
self.__source_providers = get_path_provider_classes(SourceProvider)
self.payload_callbacks = [] # type: t.List[t.Callable[t.List[t.Tuple[str, str]], None]]
if content_path:
content = self.create_content_layout(self.__layout_providers, self.__source_providers, content_path, False)
if content.is_ansible:
install = InstallLayout(ANSIBLE_ROOT, content.all_files())
else:
install = None
elif is_subdir(current_path, ANSIBLE_ROOT):
content = self.create_content_layout(self.__layout_providers, self.__source_providers, ANSIBLE_ROOT, False)
install = InstallLayout(ANSIBLE_ROOT, content.all_files())
else:
content = self.create_content_layout(self.__layout_providers, self.__source_providers, current_path, True)
install = None
self.__install = install # type: t.Optional[InstallLayout]
self.content = content # type: ContentLayout
@staticmethod
def create_content_layout(layout_providers, # type: t.List[t.Type[LayoutProvider]]
source_providers, # type: t.List[t.Type[SourceProvider]]
root, # type: str
walk, # type: bool
): # type: (...) -> ContentLayout
"""Create a content layout using the given providers and root path."""
layout_provider = find_path_provider(LayoutProvider, layout_providers, root, walk)
try:
source_provider = find_path_provider(SourceProvider, source_providers, root, walk)
except ProviderNotFoundForPath:
source_provider = UnversionedSource(layout_provider.root)
if source_provider.root != layout_provider.root and is_subdir(source_provider.root, layout_provider.root):
raise UnexpectedSourceRoot(source_provider.root, layout_provider.root)
layout = layout_provider.create(layout_provider.root, source_provider.get_paths(layout_provider.root))
return layout
@staticmethod
def create_install_layout(source_providers): # type: (t.List[t.Type[SourceProvider]]) -> InstallLayout
"""Create an install layout using the given source provider."""
try:
source_provider = find_path_provider(SourceProvider, source_providers, ANSIBLE_ROOT, False)
except ProviderNotFoundForPath:
source_provider = UnversionedSource(ANSIBLE_ROOT)
paths = source_provider.get_paths(ANSIBLE_ROOT)
return InstallLayout(ANSIBLE_ROOT, paths)
@property
def install(self): # type: () -> InstallLayout
"""Return the install context, loaded on demand."""
if not self.__install:
self.__install = self.create_install_layout(self.__source_providers)
return self.__install
def register_payload_callback(self, callback): # type: (t.Callable[t.List[t.Tuple[str, str]], None]) -> None
"""Register the given payload callback."""
self.payload_callbacks.append(callback)
def data_init(): # type: () -> DataContext
"""Initialize provider plugins."""
provider_types = (
'layout',
'source',
)
for provider_type in provider_types:
import_plugins('provider/%s' % provider_type)
try:
context = DataContext()
except ProviderNotFoundForPath:
options = [
' - an Ansible collection: {...}/ansible_collections/{namespace}/{collection}/',
]
if not ANSIBLE_IS_INSTALLED:
options.insert(0, ' - the Ansible source: %s/' % ANSIBLE_ROOT)
raise ApplicationError('''The current working directory must be at or below:
%s
Current working directory: %s''' % ('\n'.join(options), os.getcwd()))
return context
def data_context(): # type: () -> DataContext
"""Return the current data context."""
try:
return data_context.instance
except AttributeError:
data_context.instance = data_init()
return data_context.instance
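`data_context` memoizes its result as an attribute on the function object itself, so the potentially expensive `data_init()` runs at most once per process. The pattern in isolation (the names below are illustrative, not part of ansible-test):

```python
# Lazy singleton via a function attribute, mirroring the data_context() pattern:
# the first call builds the instance, later calls return the cached one.
def get_context():
    try:
        return get_context.instance
    except AttributeError:
        get_context.instance = {'initialized': True}  # stand-in for DataContext()
        return get_context.instance

a = get_context()
b = get_context()
print(a is b)  # True
```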
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 60,227 |
Move remaining ansible-test files into test/lib/
|
##### SUMMARY
Move the remaining files needed by `ansible-test` into the `test/lib/ansible_test/` directory. This is required to implement: https://github.com/ansible/ansible/issues/59884
##### ISSUE TYPE
Feature Idea
##### COMPONENT NAME
ansible-test
|
https://github.com/ansible/ansible/issues/60227
|
https://github.com/ansible/ansible/pull/60297
|
07051473f82faea238084fe653264fd24a24ff46
|
39b3fc0926645b26d60588af6a068c7570b778cf
| 2019-08-07T17:58:13Z |
python
| 2019-08-09T04:34:38Z |
test/lib/ansible_test/_internal/delegation.py
|
"""Delegate test execution to another environment."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import re
import sys
import tempfile
from .executor import (
SUPPORTED_PYTHON_VERSIONS,
HTTPTESTER_HOSTS,
create_shell_command,
run_httptester,
start_httptester,
get_python_interpreter,
get_python_version,
get_docker_completion,
get_remote_completion,
)
from .config import (
TestConfig,
EnvironmentConfig,
IntegrationConfig,
ShellConfig,
SanityConfig,
UnitsConfig,
)
from .core_ci import (
AnsibleCoreCI,
)
from .manage_ci import (
ManagePosixCI,
ManageWindowsCI,
)
from .util import (
ApplicationError,
common_environment,
pass_vars,
display,
ANSIBLE_ROOT,
ANSIBLE_TEST_DATA_ROOT,
)
from .util_common import (
run_command,
)
from .docker_util import (
docker_exec,
docker_get,
docker_pull,
docker_put,
docker_rm,
docker_run,
docker_available,
docker_network_disconnect,
get_docker_networks,
)
from .cloud import (
get_cloud_providers,
)
from .target import (
IntegrationTarget,
)
from .data import (
data_context,
)
from .payload import (
create_payload,
)
def check_delegation_args(args):
"""
:type args: CommonConfig
"""
if not isinstance(args, EnvironmentConfig):
return
if args.docker:
get_python_version(args, get_docker_completion(), args.docker_raw)
elif args.remote:
get_python_version(args, get_remote_completion(), args.remote)
def delegate(args, exclude, require, integration_targets):
"""
:type args: EnvironmentConfig
:type exclude: list[str]
:type require: list[str]
:type integration_targets: tuple[IntegrationTarget]
:rtype: bool
"""
if isinstance(args, TestConfig):
with tempfile.NamedTemporaryFile(prefix='metadata-', suffix='.json', dir=data_context().content.root) as metadata_fd:
args.metadata_path = os.path.basename(metadata_fd.name)
args.metadata.to_file(args.metadata_path)
try:
return delegate_command(args, exclude, require, integration_targets)
finally:
args.metadata_path = None
else:
return delegate_command(args, exclude, require, integration_targets)
def delegate_command(args, exclude, require, integration_targets):
"""
:type args: EnvironmentConfig
:type exclude: list[str]
:type require: list[str]
:type integration_targets: tuple[IntegrationTarget]
:rtype: bool
"""
if args.tox:
delegate_tox(args, exclude, require, integration_targets)
return True
if args.docker:
delegate_docker(args, exclude, require, integration_targets)
return True
if args.remote:
delegate_remote(args, exclude, require, integration_targets)
return True
return False
def delegate_tox(args, exclude, require, integration_targets):
"""
:type args: EnvironmentConfig
:type exclude: list[str]
:type require: list[str]
:type integration_targets: tuple[IntegrationTarget]
"""
if args.python:
versions = (args.python_version,)
if args.python_version not in SUPPORTED_PYTHON_VERSIONS:
raise ApplicationError('tox does not support Python version %s' % args.python_version)
else:
versions = SUPPORTED_PYTHON_VERSIONS
if args.httptester:
needs_httptester = sorted(target.name for target in integration_targets if 'needs/httptester/' in target.aliases)
if needs_httptester:
display.warning('Use --docker or --remote to enable httptester for tests marked "needs/httptester": %s' % ', '.join(needs_httptester))
options = {
'--tox': args.tox_args,
'--tox-sitepackages': 0,
}
for version in versions:
tox = ['tox', '-c', os.path.join(ANSIBLE_TEST_DATA_ROOT, 'tox.ini'), '-e', 'py' + version.replace('.', '')]
if args.tox_sitepackages:
tox.append('--sitepackages')
tox.append('--')
cmd = generate_command(args, None, ANSIBLE_ROOT, data_context().content.root, options, exclude, require)
if not args.python:
cmd += ['--python', version]
# newer versions of tox do not support older python versions and will silently fall back to a different version
# passing this option will allow the delegated ansible-test to verify it is running under the expected python version
# tox 3.0.0 dropped official python 2.6 support: https://tox.readthedocs.io/en/latest/changelog.html#v3-0-0-2018-04-02
# tox 3.1.3 is the first version to support python 3.8 and later: https://tox.readthedocs.io/en/latest/changelog.html#v3-1-3-2018-08-03
# tox 3.1.3 appears to still work with python 2.6, making it a good version to use when supporting all python versions we use
# virtualenv 16.0.0 dropped python 2.6 support: https://virtualenv.pypa.io/en/latest/changes/#v16-0-0-2018-05-16
cmd += ['--check-python', version]
if isinstance(args, TestConfig):
if args.coverage and not args.coverage_label:
cmd += ['--coverage-label', 'tox-%s' % version]
env = common_environment()
# temporary solution to permit ansible-test delegated to tox to provision remote resources
optional = (
'SHIPPABLE',
'SHIPPABLE_BUILD_ID',
'SHIPPABLE_JOB_NUMBER',
)
env.update(pass_vars(required=[], optional=optional))
run_command(args, tox + cmd, env=env)
def delegate_docker(args, exclude, require, integration_targets):
"""
:type args: EnvironmentConfig
:type exclude: list[str]
:type require: list[str]
:type integration_targets: tuple[IntegrationTarget]
"""
test_image = args.docker
privileged = args.docker_privileged
if isinstance(args, ShellConfig):
use_httptester = args.httptester
else:
use_httptester = args.httptester and any('needs/httptester/' in target.aliases for target in integration_targets)
if use_httptester:
docker_pull(args, args.httptester)
docker_pull(args, test_image)
httptester_id = None
test_id = None
options = {
'--docker': 1,
'--docker-privileged': 0,
'--docker-util': 1,
}
python_interpreter = get_python_interpreter(args, get_docker_completion(), args.docker_raw)
install_root = '/root/ansible'
if data_context().content.collection:
content_root = os.path.join(install_root, data_context().content.collection.directory)
else:
content_root = install_root
cmd = generate_command(args, python_interpreter, install_root, content_root, options, exclude, require)
if isinstance(args, TestConfig):
if args.coverage and not args.coverage_label:
image_label = args.docker_raw
image_label = re.sub('[^a-zA-Z0-9]+', '-', image_label)
cmd += ['--coverage-label', 'docker-%s' % image_label]
if isinstance(args, IntegrationConfig):
if not args.allow_destructive:
cmd.append('--allow-destructive')
cmd_options = []
if isinstance(args, ShellConfig) or (isinstance(args, IntegrationConfig) and args.debug_strategy):
cmd_options.append('-it')
with tempfile.NamedTemporaryFile(prefix='ansible-source-', suffix='.tgz') as local_source_fd:
try:
create_payload(args, local_source_fd.name)
if use_httptester:
httptester_id = run_httptester(args)
else:
httptester_id = None
test_options = [
'--detach',
'--volume', '/sys/fs/cgroup:/sys/fs/cgroup:ro',
'--privileged=%s' % str(privileged).lower(),
]
if args.docker_memory:
test_options.extend([
'--memory=%d' % args.docker_memory,
'--memory-swap=%d' % args.docker_memory,
])
docker_socket = '/var/run/docker.sock'
if args.docker_seccomp != 'default':
test_options += ['--security-opt', 'seccomp=%s' % args.docker_seccomp]
if os.path.exists(docker_socket):
test_options += ['--volume', '%s:%s' % (docker_socket, docker_socket)]
if httptester_id:
test_options += ['--env', 'HTTPTESTER=1']
for host in HTTPTESTER_HOSTS:
test_options += ['--link', '%s:%s' % (httptester_id, host)]
if isinstance(args, IntegrationConfig):
cloud_platforms = get_cloud_providers(args)
for cloud_platform in cloud_platforms:
test_options += cloud_platform.get_docker_run_options()
test_id = docker_run(args, test_image, options=test_options)[0]
if args.explain:
test_id = 'test_id'
else:
test_id = test_id.strip()
# write temporary files to /root since /tmp isn't ready immediately on container start
docker_put(args, test_id, os.path.join(ANSIBLE_TEST_DATA_ROOT, 'setup', 'docker.sh'), '/root/docker.sh')
docker_exec(args, test_id, ['/bin/bash', '/root/docker.sh'])
docker_put(args, test_id, local_source_fd.name, '/root/ansible.tgz')
docker_exec(args, test_id, ['mkdir', '/root/ansible'])
docker_exec(args, test_id, ['tar', 'oxzf', '/root/ansible.tgz', '-C', '/root/ansible'])
# docker images are only expected to have a single python version available
if isinstance(args, UnitsConfig) and not args.python:
cmd += ['--python', 'default']
# run unit tests unprivileged to prevent stray writes to the source tree
# also disconnect from the network once requirements have been installed
if isinstance(args, UnitsConfig):
writable_dirs = [
os.path.join(install_root, '.pytest_cache'),
]
if content_root != install_root:
writable_dirs.append(os.path.join(content_root, 'test/results/junit'))
writable_dirs.append(os.path.join(content_root, 'test/results/coverage'))
docker_exec(args, test_id, ['mkdir', '-p'] + writable_dirs)
docker_exec(args, test_id, ['chmod', '777'] + writable_dirs)
if content_root == install_root:
docker_exec(args, test_id, ['find', os.path.join(content_root, 'test/results/'), '-type', 'd', '-exec', 'chmod', '777', '{}', '+'])
docker_exec(args, test_id, ['chmod', '755', '/root'])
docker_exec(args, test_id, ['chmod', '644', os.path.join(content_root, args.metadata_path)])
docker_exec(args, test_id, ['useradd', 'pytest', '--create-home'])
docker_exec(args, test_id, cmd + ['--requirements-mode', 'only'], options=cmd_options)
networks = get_docker_networks(args, test_id)
for network in networks:
docker_network_disconnect(args, test_id, network)
cmd += ['--requirements-mode', 'skip']
cmd_options += ['--user', 'pytest']
try:
docker_exec(args, test_id, cmd, options=cmd_options)
finally:
with tempfile.NamedTemporaryFile(prefix='ansible-result-', suffix='.tgz') as local_result_fd:
docker_exec(args, test_id, ['tar', 'czf', '/root/results.tgz', '-C', os.path.join(content_root, 'test'), 'results'])
docker_get(args, test_id, '/root/results.tgz', local_result_fd.name)
run_command(args, ['tar', 'oxzf', local_result_fd.name, '-C', 'test'])
finally:
if httptester_id:
docker_rm(args, httptester_id)
if test_id:
docker_rm(args, test_id)
def delegate_remote(args, exclude, require, integration_targets):
"""
:type args: EnvironmentConfig
:type exclude: list[str]
:type require: list[str]
:type integration_targets: tuple[IntegrationTarget]
"""
parts = args.remote.split('/', 1)
platform = parts[0]
version = parts[1]
core_ci = AnsibleCoreCI(args, platform, version, stage=args.remote_stage, provider=args.remote_provider)
success = False
raw = False
if isinstance(args, ShellConfig):
use_httptester = args.httptester
raw = args.raw
else:
use_httptester = args.httptester and any('needs/httptester/' in target.aliases for target in integration_targets)
if use_httptester and not docker_available():
display.warning('Assuming --disable-httptester since `docker` is not available.')
use_httptester = False
httptester_id = None
ssh_options = []
content_root = None
try:
core_ci.start()
if use_httptester:
httptester_id, ssh_options = start_httptester(args)
core_ci.wait()
python_version = get_python_version(args, get_remote_completion(), args.remote)
if platform == 'windows':
# Windows doesn't need the ansible-test fluff, just run the SSH command
manage = ManageWindowsCI(core_ci)
manage.setup(python_version)
cmd = ['powershell.exe']
elif raw:
manage = ManagePosixCI(core_ci)
manage.setup(python_version)
cmd = create_shell_command(['bash'])
else:
manage = ManagePosixCI(core_ci)
pwd = manage.setup(python_version)
options = {
'--remote': 1,
}
python_interpreter = get_python_interpreter(args, get_remote_completion(), args.remote)
install_root = os.path.join(pwd, 'ansible')
if data_context().content.collection:
content_root = os.path.join(install_root, data_context().content.collection.directory)
else:
content_root = install_root
cmd = generate_command(args, python_interpreter, install_root, content_root, options, exclude, require)
if httptester_id:
cmd += ['--inject-httptester']
if isinstance(args, TestConfig):
if args.coverage and not args.coverage_label:
cmd += ['--coverage-label', 'remote-%s-%s' % (platform, version)]
if isinstance(args, IntegrationConfig):
if not args.allow_destructive:
cmd.append('--allow-destructive')
# remote instances are only expected to have a single python version available
if isinstance(args, UnitsConfig) and not args.python:
cmd += ['--python', 'default']
if isinstance(args, IntegrationConfig):
cloud_platforms = get_cloud_providers(args)
for cloud_platform in cloud_platforms:
ssh_options += cloud_platform.get_remote_ssh_options()
try:
manage.ssh(cmd, ssh_options)
success = True
finally:
download = False
if platform != 'windows':
download = True
if isinstance(args, ShellConfig):
if args.raw:
download = False
if download and content_root:
manage.ssh('rm -rf /tmp/results && cp -a %s/test/results /tmp/results && chmod -R a+r /tmp/results' % content_root)
manage.download('/tmp/results', 'test')
finally:
if args.remote_terminate == 'always' or (args.remote_terminate == 'success' and success):
core_ci.stop()
if httptester_id:
docker_rm(args, httptester_id)
def generate_command(args, python_interpreter, install_root, content_root, options, exclude, require):
"""
:type args: EnvironmentConfig
:type python_interpreter: str | None
:type install_root: str
:type content_root: str
:type options: dict[str, int]
:type exclude: list[str]
:type require: list[str]
:rtype: list[str]
"""
options['--color'] = 1
cmd = [os.path.join(install_root, 'bin/ansible-test')]
if python_interpreter:
cmd = [python_interpreter] + cmd
# Force the encoding used during delegation.
# This is only needed because ansible-test relies on Python's file system encoding.
# Environments that do not have the locale configured are thus unable to work with unicode file paths.
# Examples include FreeBSD and some Linux containers.
env_vars = dict(
LC_ALL='en_US.UTF-8',
ANSIBLE_TEST_CONTENT_ROOT=content_root,
)
env_args = ['%s=%s' % (key, env_vars[key]) for key in sorted(env_vars)]
cmd = ['/usr/bin/env'] + env_args + cmd
cmd += list(filter_options(args, sys.argv[1:], options, exclude, require))
cmd += ['--color', 'yes' if args.color else 'no']
if args.requirements:
cmd += ['--requirements']
if isinstance(args, ShellConfig):
cmd = create_shell_command(cmd)
elif isinstance(args, SanityConfig):
if args.base_branch:
cmd += ['--base-branch', args.base_branch]
return cmd
def filter_options(args, argv, options, exclude, require):
"""
:type args: EnvironmentConfig
:type argv: list[str]
:type options: dict[str, int]
:type exclude: list[str]
:type require: list[str]
:rtype: collections.Iterable[str]
"""
options = options.copy()
options['--requirements'] = 0
options['--truncate'] = 1
options['--redact'] = 0
if isinstance(args, TestConfig):
options.update({
'--changed': 0,
'--tracked': 0,
'--untracked': 0,
'--ignore-committed': 0,
'--ignore-staged': 0,
'--ignore-unstaged': 0,
'--changed-from': 1,
'--changed-path': 1,
'--metadata': 1,
'--exclude': 1,
'--require': 1,
})
elif isinstance(args, SanityConfig):
options.update({
'--base-branch': 1,
})
remaining = 0
for arg in argv:
if not arg.startswith('-') and remaining:
remaining -= 1
continue
remaining = 0
parts = arg.split('=', 1)
key = parts[0]
if key in options:
remaining = options[key] - len(parts) + 1
continue
yield arg
for arg in args.delegate_args:
yield arg
for target in exclude:
yield '--exclude'
yield target
for target in require:
yield '--require'
yield target
if isinstance(args, TestConfig):
if args.metadata_path:
yield '--metadata'
yield args.metadata_path
yield '--truncate'
yield '%d' % args.truncate
if args.redact:
yield '--redact'
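The loop at the heart of `filter_options` tracks how many positional values each recognized option consumes (`options[key]`), handling both `--opt value` and `--opt=value` forms. A stripped-down, self-contained sketch of that loop; the sample argv and option table are illustrative, not taken from a real invocation:

```python
def filter_args(argv, options):
    """Yield argv with the given options (and the values they consume) removed.

    options maps an option name to the number of values it consumes.
    """
    remaining = 0

    for arg in argv:
        if not arg.startswith('-') and remaining:
            remaining -= 1  # this token is a value of a filtered option
            continue

        remaining = 0
        parts = arg.split('=', 1)
        key = parts[0]

        if key in options:
            # values supplied inline via '=' reduce how many tokens to skip
            remaining = options[key] - len(parts) + 1
            continue

        yield arg


kept = list(filter_args(
    ['integration', '--docker', 'centos7', '--color', '-v', '--exclude=foo'],
    {'--docker': 1, '--color': 0, '--exclude': 1},
))
```

Here `--docker centos7`, `--color` and `--exclude=foo` are all dropped, leaving only the positional argument and the unrecognized `-v` flag.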
|
test/lib/ansible_test/_internal/payload.py
|
"""Payload management for sending Ansible files and test content to other systems (VMs, containers)."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import tarfile
import time
from .config import (
IntegrationConfig,
ShellConfig,
)
from .util import (
display,
ANSIBLE_ROOT,
)
from .data import (
data_context,
)
from .util_common import (
CommonConfig,
)
# improve performance by disabling uid/gid lookups
tarfile.pwd = None
tarfile.grp = None
def create_payload(args, dst_path): # type: (CommonConfig, str) -> None
"""Create a payload for delegation."""
if args.explain:
return
files = [(os.path.join(ANSIBLE_ROOT, path), path) for path in data_context().install.all_files()]
if not data_context().content.is_ansible:
files = [f for f in files if
f[1].startswith('bin/') or
f[1].startswith('lib/') or
f[1].startswith('test/lib/') or
f[1] in (
'test/integration/integration.cfg',
'test/integration/integration_config.yml',
'test/integration/inventory',
'test/integration/network-integration.cfg',
'test/integration/target-prefixes.network',
'test/integration/windows-integration.cfg',
)]
if not isinstance(args, (ShellConfig, IntegrationConfig)):
files = [f for f in files if not f[1].startswith('lib/ansible/modules/') or f[1] == 'lib/ansible/modules/__init__.py']
if data_context().content.collection:
files.extend((os.path.join(data_context().content.root, path), os.path.join(data_context().content.collection.directory, path))
for path in data_context().content.all_files())
for callback in data_context().payload_callbacks:
callback(files)
display.info('Creating a payload archive containing %d files...' % len(files), verbosity=1)
start = time.time()
with tarfile.TarFile.gzopen(dst_path, mode='w', compresslevel=4) as tar:
for src, dst in files:
display.info('%s -> %s' % (src, dst), verbosity=4)
tar.add(src, dst)
duration = time.time() - start
payload_size_bytes = os.path.getsize(dst_path)
display.info('Created a %d byte payload archive containing %d files in %d seconds.' % (payload_size_bytes, len(files), duration), verbosity=1)
|
test/lib/ansible_test/_internal/provider/layout/__init__.py
|
"""Code for finding content."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import abc
import collections
import os
from ... import types as t
from ...util import (
ANSIBLE_ROOT,
)
from .. import (
PathProvider,
)
class Layout:
"""Description of content locations and helper methods to access content."""
def __init__(self,
root, # type: str
paths, # type: t.List[str]
): # type: (...) -> None
self.root = root
self.__paths = paths
self.__tree = paths_to_tree(paths)
def all_files(self): # type: () -> t.List[str]
"""Return a list of all file paths."""
return self.__paths
def walk_files(self, directory): # type: (str) -> t.List[str]
"""Return a list of file paths found recursively under the given directory."""
parts = directory.rstrip(os.sep).split(os.sep)
item = get_tree_item(self.__tree, parts)
if not item:
return []
directories = collections.deque(item[0].values())
files = list(item[1])
while directories:
item = directories.pop()
directories.extend(item[0].values())
files.extend(item[1])
return files
def get_dirs(self, directory): # type: (str) -> t.List[str]
"""Return a list directory paths found directly under the given directory."""
parts = directory.rstrip(os.sep).split(os.sep)
item = get_tree_item(self.__tree, parts)
return [os.path.join(directory, key) for key in item[0].keys()] if item else []
def get_files(self, directory): # type: (str) -> t.List[str]
"""Return a list of file paths found directly under the given directory."""
parts = directory.rstrip(os.sep).split(os.sep)
item = get_tree_item(self.__tree, parts)
return item[1] if item else []
class InstallLayout(Layout):
"""Information about the current Ansible install."""
class ContentLayout(Layout):
"""Information about the current Ansible content being tested."""
def __init__(self,
root, # type: str
paths, # type: t.List[str]
plugin_paths, # type: t.Dict[str, str]
collection=None, # type: t.Optional[CollectionDetail]
unit_path=None, # type: t.Optional[str]
unit_module_path=None, # type: t.Optional[str]
unit_module_utils_path=None, # type: t.Optional[str]
): # type: (...) -> None
super(ContentLayout, self).__init__(root, paths)
self.plugin_paths = plugin_paths
self.collection = collection
self.unit_path = unit_path
self.unit_module_path = unit_module_path
self.unit_module_utils_path = unit_module_utils_path
self.is_ansible = root == ANSIBLE_ROOT
@property
def prefix(self): # type: () -> str
"""Return the collection prefix or an empty string if not a collection."""
if self.collection:
return self.collection.prefix
return ''
@property
def module_path(self): # type: () -> t.Optional[str]
"""Return the path where modules are found, if any."""
return self.plugin_paths.get('modules')
@property
def module_utils_path(self): # type: () -> t.Optional[str]
"""Return the path where module_utils are found, if any."""
return self.plugin_paths.get('module_utils')
@property
def module_utils_powershell_path(self): # type: () -> t.Optional[str]
"""Return the path where powershell module_utils are found, if any."""
if self.is_ansible:
return os.path.join(self.plugin_paths['module_utils'], 'powershell')
return self.plugin_paths.get('module_utils')
@property
def module_utils_csharp_path(self): # type: () -> t.Optional[str]
"""Return the path where csharp module_utils are found, if any."""
if self.is_ansible:
return os.path.join(self.plugin_paths['module_utils'], 'csharp')
return self.plugin_paths.get('module_utils')
class CollectionDetail:
"""Details about the layout of the current collection."""
def __init__(self,
name, # type: str
namespace, # type: str
root, # type: str
prefix, # type: str
): # type: (...) -> None
self.name = name
self.namespace = namespace
self.root = root
self.prefix = prefix
self.directory = os.path.join('ansible_collections', namespace, name)
class LayoutProvider(PathProvider):
"""Base class for layout providers."""
PLUGIN_TYPES = (
'action',
'become',
'cache',
'callback',
'cliconf',
'connection',
'doc_fragments',
'filter',
'httpapi',
'inventory',
'lookup',
'module_utils',
'modules',
'netconf',
'shell',
'strategy',
'terminal',
'test',
'vars',
)
@abc.abstractmethod
def create(self, root, paths): # type: (str, t.List[str]) -> ContentLayout
"""Create a layout using the given root and paths."""
def paths_to_tree(paths):  # type: (t.List[str]) -> t.Tuple[t.Dict[str, t.Any], t.List[str]]
"""Return a filesystem tree from the given list of paths."""
tree = {}, []
for path in paths:
parts = path.split(os.sep)
root = tree
for part in parts[:-1]:
if part not in root[0]:
root[0][part] = {}, []
root = root[0][part]
root[1].append(path)
return tree
def get_tree_item(tree, parts):  # type: (t.Tuple[t.Dict[str, t.Any], t.List[str]], t.List[str]) -> t.Optional[t.Tuple[t.Dict[str, t.Any], t.List[str]]]
"""Return the portion of the tree found under the path given by parts, or None if it does not exist."""
root = tree
for part in parts:
root = root[0].get(part)
if not root:
return None
return root
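`paths_to_tree` and `get_tree_item` above encode a directory tree as nested `(subdirectories, files)` tuples: element 0 maps a directory name to its own subtree, element 1 lists the full paths of files directly in that directory. A self-contained sketch of the same pair, exercised on a few sample paths:

```python
import os


def paths_to_tree(paths):
    """Build a (subdirs, files) tree from path strings; mirrors the helper above."""
    tree = {}, []

    for path in paths:
        parts = path.split(os.sep)
        root = tree

        for part in parts[:-1]:
            if part not in root[0]:
                root[0][part] = {}, []

            root = root[0][part]

        root[1].append(path)  # files keep their full path, not just the basename

    return tree


def get_tree_item(tree, parts):
    """Return the subtree at the given path parts, or None if it does not exist."""
    root = tree

    for part in parts:
        root = root[0].get(part)

        if not root:
            return None

    return root


tree = paths_to_tree(['lib/ansible/cli.py', 'lib/ansible/config.py', 'bin/ansible'])
item = get_tree_item(tree, ['lib', 'ansible'])
```

Building the tree once makes `walk_files`/`get_files` lookups cheap dictionary walks instead of repeated scans of the full path list.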
|
test/lib/ansible_test/_internal/provider/source/installed.py
| |
test/lib/ansible_test/_internal/provider/source/unversioned.py
|
"""Fallback source provider when no other provider matches the content root."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
from ... import types as t
from ...constants import (
TIMEOUT_PATH,
)
from . import (
SourceProvider,
)
class UnversionedSource(SourceProvider):
"""Fallback source provider when no other provider matches the content root."""
sequence = 0 # disable automatic detection
@staticmethod
def is_content_root(path): # type: (str) -> bool
"""Return True if the given path is a content root for this provider."""
return False
def get_paths(self, path): # type: (str) -> t.List[str]
"""Return the list of available content paths under the given path."""
paths = []
kill_any_dir = (
'.idea',
'.pytest_cache',
'__pycache__',
'ansible.egg-info',
)
kill_sub_dir = {
'test/runner': (
'.tox',
),
'test': (
'results',
'cache',
),
'docs/docsite': (
'_build',
),
}
kill_sub_file = {
'': (
TIMEOUT_PATH,
),
}
kill_extensions = (
'.pyc',
'.retry',
)
for root, dir_names, file_names in os.walk(path):
rel_root = os.path.relpath(root, path)
if rel_root == '.':
rel_root = ''
for kill in kill_any_dir + kill_sub_dir.get(rel_root, ()):
if kill in dir_names:
dir_names.remove(kill)
kill_files = kill_sub_file.get(rel_root, ())
paths.extend([os.path.join(rel_root, file_name) for file_name in file_names
if not os.path.splitext(file_name)[1] in kill_extensions and file_name not in kill_files])
return paths
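`get_paths` above prunes unwanted directories by mutating `dir_names` in place, which stops `os.walk` from descending into them at all, rather than filtering results afterwards. A self-contained sketch of that pruning, exercised on a throwaway temp tree (directory and file names here are illustrative):

```python
import os
import shutil
import tempfile


def collect_paths(path, kill_dirs=('__pycache__',), kill_extensions=('.pyc',)):
    """Walk a tree, pruning unwanted directories in place and skipping
    unwanted file extensions; mirrors the structure of get_paths above."""
    paths = []

    for root, dir_names, file_names in os.walk(path):
        rel_root = os.path.relpath(root, path)

        if rel_root == '.':
            rel_root = ''

        for kill in kill_dirs:
            if kill in dir_names:
                dir_names.remove(kill)  # in-place removal stops os.walk descending

        paths.extend(os.path.join(rel_root, name) for name in file_names
                     if os.path.splitext(name)[1] not in kill_extensions)

    return paths


base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, '__pycache__'))

for name in ('keep.py', 'skip.pyc', os.path.join('__pycache__', 'hidden.py')):
    open(os.path.join(base, name), 'w').close()

found = collect_paths(base)
shutil.rmtree(base)
```

Only `keep.py` survives: `skip.pyc` is filtered by extension and `hidden.py` is never visited because its parent directory was pruned.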
|
test/lib/ansible_test/_internal/sanity/bin_symlinks.py
| |
test/lib/ansible_test/_internal/util.py
|
"""Miscellaneous utility functions and classes."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import contextlib
import errno
import fcntl
import inspect
import os
import pkgutil
import random
import re
import shutil
import socket
import stat
import string
import subprocess
import sys
import time
from struct import unpack, pack
from termios import TIOCGWINSZ
try:
from abc import ABC
except ImportError:
from abc import ABCMeta
ABC = ABCMeta('ABC', (), {})
try:
# noinspection PyCompatibility
from configparser import ConfigParser
except ImportError:
# noinspection PyCompatibility,PyUnresolvedReferences
from ConfigParser import SafeConfigParser as ConfigParser
try:
# noinspection PyProtectedMember
from shlex import quote as cmd_quote
except ImportError:
# noinspection PyProtectedMember
from pipes import quote as cmd_quote
from . import types as t
try:
C = t.TypeVar('C')
except AttributeError:
C = None
DOCKER_COMPLETION = {} # type: t.Dict[str, t.Dict[str, str]]
REMOTE_COMPLETION = {} # type: t.Dict[str, t.Dict[str, str]]
PYTHON_PATHS = {} # type: t.Dict[str, str]
try:
# noinspection PyUnresolvedReferences
MAXFD = subprocess.MAXFD
except AttributeError:
MAXFD = -1
COVERAGE_CONFIG_NAME = 'coveragerc'
COVERAGE_OUTPUT_NAME = 'coverage'
ANSIBLE_TEST_ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# assume running from install
ANSIBLE_ROOT = os.path.dirname(ANSIBLE_TEST_ROOT)
ANSIBLE_LIB_ROOT = os.path.join(ANSIBLE_ROOT, 'ansible')
ANSIBLE_IS_INSTALLED = True
if not os.path.exists(ANSIBLE_LIB_ROOT):
# running from source
ANSIBLE_ROOT = os.path.dirname(os.path.dirname(os.path.dirname(ANSIBLE_TEST_ROOT)))
ANSIBLE_LIB_ROOT = os.path.join(ANSIBLE_ROOT, 'lib', 'ansible')
ANSIBLE_IS_INSTALLED = False
ANSIBLE_TEST_DATA_ROOT = os.path.join(ANSIBLE_TEST_ROOT, '_data')
ANSIBLE_TEST_CONFIG_ROOT = os.path.join(ANSIBLE_TEST_ROOT, 'config')
# Modes are set to allow all users the same level of access.
# This permits files to be used in tests that change users.
# The only exception is write access to directories for the user creating them.
# This avoids having to modify the directory permissions a second time.
MODE_READ = stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH
MODE_FILE = MODE_READ
MODE_FILE_EXECUTE = MODE_FILE | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH
MODE_FILE_WRITE = MODE_FILE | stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH
MODE_DIRECTORY = MODE_READ | stat.S_IWUSR | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH
MODE_DIRECTORY_WRITE = MODE_DIRECTORY | stat.S_IWGRP | stat.S_IWOTH
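Expressed in octal, the bitwise definitions above work out to familiar permission modes: read-only files are `0o444`, executables `0o555`, writable files `0o666`, directories `0o755`, and world-writable directories `0o777`. Recomputing them makes that concrete:

```python
import stat

# Recompute the mode constants defined above from the stat bits.
MODE_READ = stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH
MODE_FILE = MODE_READ
MODE_FILE_EXECUTE = MODE_FILE | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH
MODE_FILE_WRITE = MODE_FILE | stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH
MODE_DIRECTORY = MODE_READ | stat.S_IWUSR | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH
MODE_DIRECTORY_WRITE = MODE_DIRECTORY | stat.S_IWGRP | stat.S_IWOTH

octal_modes = {
    'file': oct(MODE_FILE),                        # 0o444
    'file_execute': oct(MODE_FILE_EXECUTE),        # 0o555
    'file_write': oct(MODE_FILE_WRITE),            # 0o666
    'directory': oct(MODE_DIRECTORY),              # 0o755
    'directory_write': oct(MODE_DIRECTORY_WRITE),  # 0o777
}
```

Note that only the creating user gets write access on plain directories (`0o755`), matching the comment above about avoiding a second permission change.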
ENCODING = 'utf-8'
Text = type(u'')
def to_optional_bytes(value, errors='strict'): # type: (t.Optional[t.AnyStr], str) -> t.Optional[bytes]
"""Return the given value as bytes encoded using UTF-8 if not already bytes, or None if the value is None."""
return None if value is None else to_bytes(value, errors)
def to_optional_text(value, errors='strict'): # type: (t.Optional[t.AnyStr], str) -> t.Optional[t.Text]
"""Return the given value as text decoded using UTF-8 if not already text, or None if the value is None."""
return None if value is None else to_text(value, errors)
def to_bytes(value, errors='strict'): # type: (t.AnyStr, str) -> bytes
"""Return the given value as bytes encoded using UTF-8 if not already bytes."""
if isinstance(value, bytes):
return value
if isinstance(value, Text):
return value.encode(ENCODING, errors)
raise Exception('value is not bytes or text: %s' % type(value))
def to_text(value, errors='strict'): # type: (t.AnyStr, str) -> t.Text
"""Return the given value as text decoded using UTF-8 if not already text."""
if isinstance(value, bytes):
return value.decode(ENCODING, errors)
if isinstance(value, Text):
return value
raise Exception('value is not bytes or text: %s' % type(value))
def get_docker_completion():
"""
:rtype: dict[str, dict[str, str]]
"""
return get_parameterized_completion(DOCKER_COMPLETION, 'docker')
def get_remote_completion():
"""
:rtype: dict[str, dict[str, str]]
"""
return get_parameterized_completion(REMOTE_COMPLETION, 'remote')
def get_parameterized_completion(cache, name):
"""
:type cache: dict[str, dict[str, str]]
:type name: str
:rtype: dict[str, dict[str, str]]
"""
if not cache:
images = read_lines_without_comments(os.path.join(ANSIBLE_TEST_DATA_ROOT, 'completion', '%s.txt' % name), remove_blank_lines=True)
cache.update(dict(kvp for kvp in [parse_parameterized_completion(i) for i in images] if kvp))
return cache
def parse_parameterized_completion(value):
"""
:type value: str
:rtype: tuple[str, dict[str, str]]
"""
values = value.split()
if not values:
return None
name = values[0]
data = dict((kvp[0], kvp[1] if len(kvp) > 1 else '') for kvp in [item.split('=', 1) for item in values[1:]])
return name, data
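The parsing logic above can be sketched standalone: the first whitespace-separated token is the entry name, and each remaining token is split once on `=` into a key/value pair (a bare token becomes a key with an empty value). The image names below are made-up examples, not actual completion entries:

```python
def parse_parameterized_completion(value):
    """Mirror of the parser above: first token is the name, the rest are key=value pairs."""
    values = value.split()
    if not values:
        return None
    name = values[0]
    data = dict((kvp[0], kvp[1] if len(kvp) > 1 else '') for kvp in [item.split('=', 1) for item in values[1:]])
    return name, data

assert parse_parameterized_completion('') is None
assert parse_parameterized_completion('centos7 python=2.7 seccomp=unconfined') == ('centos7', {'python': '2.7', 'seccomp': 'unconfined'})
assert parse_parameterized_completion('fedora30 flag') == ('fedora30', {'flag': ''})
```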
def is_shippable():
"""
:rtype: bool
"""
return os.environ.get('SHIPPABLE') == 'true'
def remove_file(path):
"""
:type path: str
"""
if os.path.isfile(path):
os.remove(path)
def read_lines_without_comments(path, remove_blank_lines=False, optional=False): # type: (str, bool, bool) -> t.List[str]
"""
Returns lines from the specified text file with comments removed.
Comments are any content from a hash symbol to the end of a line.
Any spaces immediately before a comment are also removed.
"""
if optional and not os.path.exists(path):
return []
with open(path, 'r') as path_fd:
lines = path_fd.read().splitlines()
lines = [re.sub(r' *#.*$', '', line) for line in lines]
if remove_blank_lines:
lines = [line for line in lines if line]
return lines
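The comment handling above hinges on the regex `' *#.*$'`, which removes everything from a hash to end-of-line along with any spaces immediately before it. A sketch of just that transformation (file I/O omitted):

```python
import re

def strip_comments(lines, remove_blank_lines=False):
    """Mirror of the comment handling in read_lines_without_comments, minus the file I/O."""
    lines = [re.sub(r' *#.*$', '', line) for line in lines]
    if remove_blank_lines:
        lines = [line for line in lines if line]
    return lines

raw = ['centos7  # EOL soon', '# full-line comment', 'fedora30', '']
assert strip_comments(raw) == ['centos7', '', 'fedora30', '']
assert strip_comments(raw, remove_blank_lines=True) == ['centos7', 'fedora30']
```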
def find_executable(executable, cwd=None, path=None, required=True):
"""
:type executable: str
:type cwd: str
:type path: str
:type required: bool | str
:rtype: str | None
"""
match = None
real_cwd = os.getcwd()
if not cwd:
cwd = real_cwd
if os.path.dirname(executable):
target = os.path.join(cwd, executable)
if os.path.exists(target) and os.access(target, os.F_OK | os.X_OK):
match = executable
else:
if path is None:
path = os.environ.get('PATH', os.path.defpath)
if path:
path_dirs = path.split(os.path.pathsep)
seen_dirs = set()
for path_dir in path_dirs:
if path_dir in seen_dirs:
continue
seen_dirs.add(path_dir)
if os.path.abspath(path_dir) == real_cwd:
path_dir = cwd
candidate = os.path.join(path_dir, executable)
if os.path.exists(candidate) and os.access(candidate, os.F_OK | os.X_OK):
match = candidate
break
if not match and required:
message = 'Required program "%s" not found.' % executable
if required != 'warning':
raise ApplicationError(message)
display.warning(message)
return match
def find_python(version, path=None, required=True):
"""
:type version: str
:type path: str | None
:type required: bool
:rtype: str
"""
version_info = tuple(int(n) for n in version.split('.'))
if not path and version_info == sys.version_info[:len(version_info)]:
python_bin = sys.executable
else:
python_bin = find_executable('python%s' % version, path=path, required=required)
return python_bin
def get_available_python_versions(versions): # type: (t.List[str]) -> t.Tuple[str, ...]
"""Return a tuple indicating which of the requested Python versions are available."""
return tuple(python_version for python_version in versions if find_python(python_version, required=False))
def generate_pip_command(python):
"""
:type python: str
:rtype: list[str]
"""
return [python, '-m', 'pip.__main__']
def raw_command(cmd, capture=False, env=None, data=None, cwd=None, explain=False, stdin=None, stdout=None,
cmd_verbosity=1, str_errors='strict'):
"""
:type cmd: collections.Iterable[str]
:type capture: bool
:type env: dict[str, str] | None
:type data: str | None
:type cwd: str | None
:type explain: bool
:type stdin: file | None
:type stdout: file | None
:type cmd_verbosity: int
:type str_errors: str
:rtype: str | None, str | None
"""
if not cwd:
cwd = os.getcwd()
if not env:
env = common_environment()
cmd = list(cmd)
escaped_cmd = ' '.join(cmd_quote(c) for c in cmd)
display.info('Run command: %s' % escaped_cmd, verbosity=cmd_verbosity, truncate=True)
display.info('Working directory: %s' % cwd, verbosity=2)
program = find_executable(cmd[0], cwd=cwd, path=env['PATH'], required='warning')
if program:
display.info('Program found: %s' % program, verbosity=2)
for key in sorted(env.keys()):
display.info('%s=%s' % (key, env[key]), verbosity=2)
if explain:
return None, None
communicate = False
if stdin is not None:
data = None
communicate = True
elif data is not None:
stdin = subprocess.PIPE
communicate = True
if stdout:
communicate = True
if capture:
stdout = stdout or subprocess.PIPE
stderr = subprocess.PIPE
communicate = True
else:
stderr = None
start = time.time()
process = None
try:
try:
cmd_bytes = [to_bytes(c) for c in cmd]
env_bytes = dict((to_bytes(k), to_bytes(v)) for k, v in env.items())
process = subprocess.Popen(cmd_bytes, env=env_bytes, stdin=stdin, stdout=stdout, stderr=stderr, cwd=cwd)
except OSError as ex:
if ex.errno == errno.ENOENT:
raise ApplicationError('Required program "%s" not found.' % cmd[0])
raise
if communicate:
data_bytes = to_optional_bytes(data)
stdout_bytes, stderr_bytes = process.communicate(data_bytes)
stdout_text = to_optional_text(stdout_bytes, str_errors) or u''
stderr_text = to_optional_text(stderr_bytes, str_errors) or u''
else:
process.wait()
stdout_text, stderr_text = None, None
finally:
if process and process.returncode is None:
process.kill()
display.info('') # the process we're interrupting may have completed a partial line of output
display.notice('Killed command to avoid an orphaned child process during handling of an unexpected exception.')
status = process.returncode
runtime = time.time() - start
display.info('Command exited with status %s after %s seconds.' % (status, runtime), verbosity=4)
if status == 0:
return stdout_text, stderr_text
raise SubprocessError(cmd, status, stdout_text, stderr_text, runtime)
def common_environment():
"""Common environment used for executing all programs."""
env = dict(
LC_ALL='en_US.UTF-8',
PATH=os.environ.get('PATH', os.path.defpath),
)
required = (
'HOME',
)
optional = (
'HTTPTESTER',
'LD_LIBRARY_PATH',
'SSH_AUTH_SOCK',
# MacOS High Sierra Compatibility
# http://sealiesoftware.com/blog/archive/2017/6/5/Objective-C_and_fork_in_macOS_1013.html
# Example configuration for macOS:
# export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES
'OBJC_DISABLE_INITIALIZE_FORK_SAFETY',
'ANSIBLE_KEEP_REMOTE_FILES',
# MacOS Homebrew Compatibility
# https://cryptography.io/en/latest/installation/#building-cryptography-on-macos
# This may also be required to install pyyaml with libyaml support when installed in non-standard locations.
# Example configuration for brew on macOS:
# export LDFLAGS="-L$(brew --prefix openssl)/lib/ -L$(brew --prefix libyaml)/lib/"
# export CFLAGS="-I$(brew --prefix openssl)/include/ -I$(brew --prefix libyaml)/include/"
# However, this is not adequate for PyYAML 3.13, which is the latest version supported on Python 2.6.
# For that version the standard location must be used, or `pip install` must be invoked with additional options:
# --global-option=build_ext --global-option=-L{path_to_lib_dir}
'LDFLAGS',
'CFLAGS',
)
env.update(pass_vars(required=required, optional=optional))
return env
def pass_vars(required, optional):
"""
:type required: collections.Iterable[str]
:type optional: collections.Iterable[str]
:rtype: dict[str, str]
"""
env = {}
for name in required:
if name not in os.environ:
raise MissingEnvironmentVariable(name)
env[name] = os.environ[name]
for name in optional:
if name not in os.environ:
continue
env[name] = os.environ[name]
return env
def deepest_path(path_a, path_b):
"""Return the deepest of two paths, or None if the paths are unrelated.
:type path_a: str
:type path_b: str
:rtype: str | None
"""
if path_a == '.':
path_a = ''
if path_b == '.':
path_b = ''
if path_a.startswith(path_b):
return path_a or '.'
if path_b.startswith(path_a):
return path_b or '.'
return None
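The prefix-based comparison above can be exercised standalone. Note that because it uses plain `str.startswith` without appending a path separator, sibling names that share a string prefix (e.g. `lib` and `lib2`) would be treated as related; the sketch only demonstrates the documented behavior:

```python
def deepest_path(path_a, path_b):
    """Mirror of deepest_path above: return the deeper of two related paths, else None."""
    if path_a == '.':
        path_a = ''
    if path_b == '.':
        path_b = ''
    if path_a.startswith(path_b):
        return path_a or '.'
    if path_b.startswith(path_a):
        return path_b or '.'
    return None

assert deepest_path('lib/ansible', 'lib') == 'lib/ansible'
assert deepest_path('.', 'lib') == 'lib'
assert deepest_path('lib', 'test') is None
```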
def remove_tree(path):
"""
:type path: str
"""
try:
shutil.rmtree(to_bytes(path))
except OSError as ex:
if ex.errno != errno.ENOENT:
raise
def make_dirs(path):
"""
:type path: str
"""
try:
os.makedirs(to_bytes(path))
except OSError as ex:
if ex.errno != errno.EEXIST:
raise
def is_binary_file(path):
"""
:type path: str
:rtype: bool
"""
assume_text = set([
'.cfg',
'.conf',
'.crt',
'.cs',
'.css',
'.html',
'.ini',
'.j2',
'.js',
'.json',
'.md',
'.pem',
'.ps1',
'.psm1',
'.py',
'.rst',
'.sh',
'.txt',
'.xml',
'.yaml',
'.yml',
])
assume_binary = set([
'.bin',
'.eot',
'.gz',
'.ico',
'.iso',
'.jpg',
'.otf',
'.p12',
'.png',
'.pyc',
'.rpm',
'.ttf',
'.woff',
'.woff2',
'.zip',
])
ext = os.path.splitext(path)[1]
if ext in assume_text:
return False
if ext in assume_binary:
return True
with open(path, 'rb') as path_fd:
return b'\0' in path_fd.read(1024)
def generate_password():
"""Generate a random password.
:rtype: str
"""
chars = [
string.ascii_letters,
string.digits,
string.ascii_letters,
string.digits,
'-',
] * 4
password = ''.join([random.choice(char) for char in chars[:-1]])
display.sensitive.add(password)
return password
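The password shape above is worth spelling out: the pool list has 20 entries (`[letters, digits, letters, digits, '-'] * 4`), and slicing off the last entry yields a 19-character password with `-` fixed at positions 4, 9, and 14. A mirror of the logic (without the `display.sensitive` bookkeeping):

```python
import random
import string

def generate_password():
    """Mirror of generate_password above: 19 characters drawn from alternating pools."""
    chars = [
        string.ascii_letters,
        string.digits,
        string.ascii_letters,
        string.digits,
        '-',
    ] * 4
    # chars has 20 pools; chars[:-1] drops the final '-' pool, giving 19 characters.
    return ''.join([random.choice(char) for char in chars[:-1]])

password = generate_password()
assert len(password) == 19
assert password[4] == '-' and password[9] == '-' and password[14] == '-'
```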
class Display:
"""Manages color console output."""
clear = '\033[0m'
red = '\033[31m'
green = '\033[32m'
yellow = '\033[33m'
blue = '\033[34m'
purple = '\033[35m'
cyan = '\033[36m'
verbosity_colors = {
0: None,
1: green,
2: blue,
3: cyan,
}
def __init__(self):
self.verbosity = 0
self.color = sys.stdout.isatty()
self.warnings = []
self.warnings_unique = set()
self.info_stderr = False
self.rows = 0
self.columns = 0
self.truncate = 0
self.redact = False
self.sensitive = set()
if os.isatty(0):
self.rows, self.columns = unpack('HHHH', fcntl.ioctl(0, TIOCGWINSZ, pack('HHHH', 0, 0, 0, 0)))[:2]
def __warning(self, message):
"""
:type message: str
"""
self.print_message('WARNING: %s' % message, color=self.purple, fd=sys.stderr)
def review_warnings(self):
"""Review all warnings which previously occurred."""
if not self.warnings:
return
self.__warning('Reviewing previous %d warning(s):' % len(self.warnings))
for warning in self.warnings:
self.__warning(warning)
def warning(self, message, unique=False):
"""
:type message: str
:type unique: bool
"""
if unique:
if message in self.warnings_unique:
return
self.warnings_unique.add(message)
self.__warning(message)
self.warnings.append(message)
def notice(self, message):
"""
:type message: str
"""
self.print_message('NOTICE: %s' % message, color=self.purple, fd=sys.stderr)
def error(self, message):
"""
:type message: str
"""
self.print_message('ERROR: %s' % message, color=self.red, fd=sys.stderr)
def info(self, message, verbosity=0, truncate=False):
"""
:type message: str
:type verbosity: int
:type truncate: bool
"""
if self.verbosity >= verbosity:
color = self.verbosity_colors.get(verbosity, self.yellow)
self.print_message(message, color=color, fd=sys.stderr if self.info_stderr else sys.stdout, truncate=truncate)
def print_message(self, message, color=None, fd=sys.stdout, truncate=False): # pylint: disable=locally-disabled, invalid-name
"""
:type message: str
:type color: str | None
:type fd: file
:type truncate: bool
"""
if self.redact and self.sensitive:
for item in self.sensitive:
message = message.replace(item, '*' * len(item))
if truncate:
if len(message) > self.truncate > 5:
message = message[:self.truncate - 5] + ' ...'
if color and self.color:
# convert color resets in message to desired color
message = message.replace(self.clear, color)
message = '%s%s%s' % (color, message, self.clear)
if sys.version_info[0] == 2:
message = to_bytes(message)
print(message, file=fd)
fd.flush()
class ApplicationError(Exception):
"""General application error."""
class ApplicationWarning(Exception):
"""General application warning which interrupts normal program flow."""
class SubprocessError(ApplicationError):
"""Error resulting from failed subprocess execution."""
def __init__(self, cmd, status=0, stdout=None, stderr=None, runtime=None):
"""
:type cmd: list[str]
:type status: int
:type stdout: str | None
:type stderr: str | None
:type runtime: float | None
"""
message = 'Command "%s" returned exit status %s.\n' % (' '.join(cmd_quote(c) for c in cmd), status)
if stderr:
message += '>>> Standard Error\n'
message += '%s%s\n' % (stderr.strip(), Display.clear)
if stdout:
message += '>>> Standard Output\n'
message += '%s%s\n' % (stdout.strip(), Display.clear)
message = message.strip()
super(SubprocessError, self).__init__(message)
self.cmd = cmd
self.status = status
self.stdout = stdout
self.stderr = stderr
self.runtime = runtime
class MissingEnvironmentVariable(ApplicationError):
"""Error caused by missing environment variable."""
def __init__(self, name):
"""
:type name: str
"""
super(MissingEnvironmentVariable, self).__init__('Missing environment variable: %s' % name)
self.name = name
def docker_qualify_image(name):
"""
:type name: str
:rtype: str
"""
config = get_docker_completion().get(name, {})
return config.get('name', name)
def parse_to_list_of_dict(pattern, value):
"""
:type pattern: str
:type value: str
:return: list[dict[str, str]]
"""
matched = []
unmatched = []
for line in value.splitlines():
match = re.search(pattern, line)
if match:
matched.append(match.groupdict())
else:
unmatched.append(line)
if unmatched:
raise Exception('Pattern "%s" did not match values:\n%s' % (pattern, '\n'.join(unmatched)))
return matched
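The line-by-line named-group matching above can be demonstrated standalone; the two-column sample output and the `device`/`state` group names below are invented for illustration:

```python
import re

def parse_to_list_of_dict(pattern, value):
    """Mirror of the parser above: apply a named-group regex to each line."""
    matched = []
    unmatched = []
    for line in value.splitlines():
        match = re.search(pattern, line)
        if match:
            matched.append(match.groupdict())
        else:
            unmatched.append(line)
    if unmatched:
        raise Exception('Pattern "%s" did not match values:\n%s' % (pattern, '\n'.join(unmatched)))
    return matched

output = 'eth0 UP\nlo UNKNOWN'
result = parse_to_list_of_dict(r'^(?P<device>\S+) (?P<state>\S+)$', output)
assert result == [{'device': 'eth0', 'state': 'UP'}, {'device': 'lo', 'state': 'UNKNOWN'}]
```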
def get_available_port():
"""
:rtype: int
"""
# this relies on the kernel not reusing previously assigned ports immediately
socket_fd = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
with contextlib.closing(socket_fd):
socket_fd.bind(('', 0))
return socket_fd.getsockname()[1]
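Binding to port 0 asks the kernel to assign a free ephemeral port, which is then read back via `getsockname()`. A mirror of the function (same caveat as the comment above: the port is only free until something else claims it):

```python
import contextlib
import socket

def get_available_port():
    """Mirror of get_available_port above; binding to port 0 lets the kernel pick a free port."""
    socket_fd = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    with contextlib.closing(socket_fd):
        socket_fd.bind(('', 0))
        return socket_fd.getsockname()[1]

port = get_available_port()
assert 0 < port <= 65535
```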
def get_subclasses(class_type): # type: (t.Type[C]) -> t.Set[t.Type[C]]
"""Returns the set of types that are concrete subclasses of the given type."""
subclasses = set()
queue = [class_type]
while queue:
parent = queue.pop()
for child in parent.__subclasses__():
if child not in subclasses:
if not inspect.isabstract(child):
subclasses.add(child)
queue.append(child)
return subclasses
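The traversal above walks `__subclasses__()` breadth-first, queueing every subclass but collecting only concrete ones. A standalone sketch with a small hypothetical hierarchy (Python 3 `abc` syntax, which is an assumption since parts of this codebase also target Python 2):

```python
import abc
import inspect

def get_subclasses(class_type):
    """Mirror of get_subclasses above: walk __subclasses__, keeping concrete types."""
    subclasses = set()
    queue = [class_type]
    while queue:
        parent = queue.pop()
        for child in parent.__subclasses__():
            if child not in subclasses:
                if not inspect.isabstract(child):
                    subclasses.add(child)
                queue.append(child)
    return subclasses

class Base(metaclass=abc.ABCMeta):
    @abc.abstractmethod
    def run(self):
        ...

class Middle(Base):  # still abstract: run() is not implemented
    ...

class Leaf(Middle):
    def run(self):
        return 'ok'

# Middle is skipped (abstract) but still traversed, so Leaf is found.
assert get_subclasses(Base) == {Leaf}
```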
def is_subdir(candidate_path, path): # type: (str, str) -> bool
"""Returns true if candidate_path is path or a subdirectory of path."""
if not path.endswith(os.sep):
path += os.sep
if not candidate_path.endswith(os.sep):
candidate_path += os.sep
return candidate_path.startswith(path)
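Appending a trailing separator to both sides before the prefix check is what prevents `lib2` from being treated as a subdirectory of `lib`. A standalone mirror (POSIX `os.sep` assumed in the examples):

```python
import os

def is_subdir(candidate_path, path):
    """Mirror of is_subdir above: trailing separators prevent 'lib2' matching 'lib'."""
    if not path.endswith(os.sep):
        path += os.sep
    if not candidate_path.endswith(os.sep):
        candidate_path += os.sep
    return candidate_path.startswith(path)

assert is_subdir('lib/ansible', 'lib')
assert is_subdir('lib', 'lib')  # a path counts as a subdirectory of itself
assert not is_subdir('lib2', 'lib')
```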
def paths_to_dirs(paths): # type: (t.List[str]) -> t.List[str]
"""Returns a list of directories extracted from the given list of paths."""
dir_names = set()
for path in paths:
while True:
path = os.path.dirname(path)
if not path or path == os.path.sep:
break
dir_names.add(path + os.path.sep)
return sorted(dir_names)
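The loop above repeatedly applies `os.path.dirname` until the path is exhausted, so every ancestor directory of every input path ends up in the result. A standalone mirror:

```python
import os

def paths_to_dirs(paths):
    """Mirror of paths_to_dirs above: collect every ancestor directory of each path."""
    dir_names = set()
    for path in paths:
        while True:
            path = os.path.dirname(path)
            if not path or path == os.path.sep:
                break
            dir_names.add(path + os.path.sep)
    return sorted(dir_names)

assert paths_to_dirs(['lib/ansible/plugins/loader.py']) == ['lib/', 'lib/ansible/', 'lib/ansible/plugins/']
```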
def import_plugins(directory, root=None): # type: (str, t.Optional[str]) -> None
"""
Import plugins from the given directory relative to the given root.
If the root is not provided, the 'lib' directory for the test runner will be used.
"""
if root is None:
root = os.path.dirname(__file__)
path = os.path.join(root, directory)
package = __name__.rsplit('.', 1)[0]
prefix = '%s.%s.' % (package, directory.replace(os.sep, '.'))
for (_module_loader, name, _ispkg) in pkgutil.iter_modules([path], prefix=prefix):
module_path = os.path.join(root, name[len(package) + 1:].replace('.', os.sep) + '.py')
load_module(module_path, name)
def load_plugins(base_type, database): # type: (t.Type[C], t.Dict[str, t.Type[C]]) -> None
"""
Load plugins of the specified type and track them in the specified database.
Only plugins which have already been imported will be loaded.
"""
plugins = dict((sc.__module__.rsplit('.', 1)[1], sc) for sc in get_subclasses(base_type)) # type: t.Dict[str, t.Type[C]]
for plugin in plugins:
database[plugin] = plugins[plugin]
def load_module(path, name): # type: (str, str) -> None
"""Load a Python module using the given name and path."""
if name in sys.modules:
return
if sys.version_info >= (3, 4):
# noinspection PyUnresolvedReferences
import importlib.util
# noinspection PyUnresolvedReferences
spec = importlib.util.spec_from_file_location(name, path)
# noinspection PyUnresolvedReferences
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
sys.modules[name] = module
else:
# noinspection PyDeprecation
import imp
with open(path, 'r') as module_file:
# noinspection PyDeprecation
imp.load_module(name, module_file, path, ('.py', 'r', imp.PY_SOURCE))
display = Display() # pylint: disable=locally-disabled, invalid-name
|
status: closed
repo_name: ansible/ansible
repo_url: https://github.com/ansible/ansible
issue_id: 59465
title: Relative Python import support in collections
body:
##### SUMMARY
2.9/devel currently supports relative Python imports inside a collection for all plugin types except modules/module_utils. Extend the AnsiballZ analysis/bundling to support relative imports for those as well.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
module_common.py
##### ADDITIONAL INFORMATION
This is needed to minimize the number of places a collection must refer to itself by name. It is also a convenience for shorter intra-collection Python imports (e.g. `from ..module_utils.blah import AThing` vs. `from ansible_collections.myns.mycoll.plugins.module_utils.blah import AThing`).
issue_url: https://github.com/ansible/ansible/issues/59465
pull_url: https://github.com/ansible/ansible/pull/59950
before_fix_sha: d3624cf4a41591d546da1d4340123e70bf5984e6
after_fix_sha: 3777c2e93df23ac6bf59de6196c28a756aed7c50
report_datetime: 2019-07-23T18:12:01Z
language: python
commit_datetime: 2019-08-09T06:23:12Z
updated_file: lib/ansible/plugins/loader.py
file_content:
# (c) 2012, Daniel Hokka Zakrisson <[email protected]>
# (c) 2012-2014, Michael DeHaan <[email protected]> and others
# (c) 2017, Toshio Kuratomi <[email protected]>
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import glob
import os
import os.path
import sys
import warnings
from collections import defaultdict
from ansible import constants as C
from ansible.errors import AnsibleError
from ansible.module_utils._text import to_bytes, to_text, to_native
from ansible.module_utils.six import string_types
from ansible.parsing.utils.yaml import from_yaml
from ansible.parsing.yaml.loader import AnsibleLoader
from ansible.plugins import get_plugin_class, MODULE_CACHE, PATH_CACHE, PLUGIN_PATH_CACHE
from ansible.utils.collection_loader import AnsibleCollectionLoader, AnsibleFlatMapLoader, is_collection_ref
from ansible.utils.display import Display
from ansible.utils.plugin_docs import add_fragments
try:
import importlib.util
imp = None
except ImportError:
import imp
# HACK: keep Python 2.6 controller tests happy in CI until they're properly split
try:
from importlib import import_module
except ImportError:
import_module = __import__
display = Display()
def get_all_plugin_loaders():
return [(name, obj) for (name, obj) in globals().items() if isinstance(obj, PluginLoader)]
def add_all_plugin_dirs(path):
''' add any existing plugin dirs in the path provided '''
b_path = to_bytes(path, errors='surrogate_or_strict')
if os.path.isdir(b_path):
for name, obj in get_all_plugin_loaders():
if obj.subdir:
plugin_path = os.path.join(b_path, to_bytes(obj.subdir))
if os.path.isdir(plugin_path):
obj.add_directory(to_text(plugin_path))
else:
display.warning("Ignoring invalid path provided to plugin path: '%s' is not a directory" % to_text(path))
def get_shell_plugin(shell_type=None, executable=None):
if not shell_type:
# default to sh
shell_type = 'sh'
# mostly for backwards compat
if executable:
if isinstance(executable, string_types):
shell_filename = os.path.basename(executable)
try:
shell = shell_loader.get(shell_filename)
except Exception:
shell = None
if shell is None:
for shell in shell_loader.all():
if shell_filename in shell.COMPATIBLE_SHELLS:
shell_type = shell.SHELL_FAMILY
break
else:
raise AnsibleError("Either a shell type or a shell executable must be provided")
shell = shell_loader.get(shell_type)
if not shell:
raise AnsibleError("Could not find the shell plugin required (%s)." % shell_type)
if executable:
setattr(shell, 'executable', executable)
return shell
def add_dirs_to_loader(which_loader, paths):
loader = getattr(sys.modules[__name__], '%s_loader' % which_loader)
for path in paths:
loader.add_directory(path, with_subdir=True)
class PluginLoader:
'''
PluginLoader loads plugins from the configured plugin directories.
It searches for plugins by iterating through the combined list of play basedirs, configured
paths, and the python path. The first match is used.
'''
def __init__(self, class_name, package, config, subdir, aliases=None, required_base_class=None):
aliases = {} if aliases is None else aliases
self.class_name = class_name
self.base_class = required_base_class
self.package = package
self.subdir = subdir
# FIXME: remove alias dict in favor of alias by symlink?
self.aliases = aliases
if config and not isinstance(config, list):
config = [config]
elif not config:
config = []
self.config = config
if class_name not in MODULE_CACHE:
MODULE_CACHE[class_name] = {}
if class_name not in PATH_CACHE:
PATH_CACHE[class_name] = None
if class_name not in PLUGIN_PATH_CACHE:
PLUGIN_PATH_CACHE[class_name] = defaultdict(dict)
# hold dirs added at runtime outside of config
self._extra_dirs = []
# caches
self._module_cache = MODULE_CACHE[class_name]
self._paths = PATH_CACHE[class_name]
self._plugin_path_cache = PLUGIN_PATH_CACHE[class_name]
self._searched_paths = set()
def _clear_caches(self):
if C.OLD_PLUGIN_CACHE_CLEARING:
self._paths = None
else:
# reset global caches
MODULE_CACHE[self.class_name] = {}
PATH_CACHE[self.class_name] = None
PLUGIN_PATH_CACHE[self.class_name] = defaultdict(dict)
# reset internal caches
self._module_cache = MODULE_CACHE[self.class_name]
self._paths = PATH_CACHE[self.class_name]
self._plugin_path_cache = PLUGIN_PATH_CACHE[self.class_name]
self._searched_paths = set()
def __setstate__(self, data):
'''
Deserializer.
'''
class_name = data.get('class_name')
package = data.get('package')
config = data.get('config')
subdir = data.get('subdir')
aliases = data.get('aliases')
base_class = data.get('base_class')
PATH_CACHE[class_name] = data.get('PATH_CACHE')
PLUGIN_PATH_CACHE[class_name] = data.get('PLUGIN_PATH_CACHE')
self.__init__(class_name, package, config, subdir, aliases, base_class)
self._extra_dirs = data.get('_extra_dirs', [])
self._searched_paths = data.get('_searched_paths', set())
def __getstate__(self):
'''
Serializer.
'''
return dict(
class_name=self.class_name,
base_class=self.base_class,
package=self.package,
config=self.config,
subdir=self.subdir,
aliases=self.aliases,
_extra_dirs=self._extra_dirs,
_searched_paths=self._searched_paths,
PATH_CACHE=PATH_CACHE[self.class_name],
PLUGIN_PATH_CACHE=PLUGIN_PATH_CACHE[self.class_name],
)
def format_paths(self, paths):
''' Returns a string suitable for printing of the search path '''
# Uses a list to get the order right
ret = []
for i in paths:
if i not in ret:
ret.append(i)
return os.pathsep.join(ret)
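As the comment notes, a list (rather than a set) is used so that the first occurrence of each path wins and ordering is preserved. A standalone sketch of the same order-preserving de-duplication:

```python
import os

def format_paths(paths):
    """Mirror of PluginLoader.format_paths above: de-duplicate while preserving order."""
    ret = []
    for i in paths:
        if i not in ret:
            ret.append(i)
    return os.pathsep.join(ret)

assert format_paths(['/a', '/b', '/a', '/c']) == os.pathsep.join(['/a', '/b', '/c'])
```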
def print_paths(self):
return self.format_paths(self._get_paths(subdirs=False))
def _all_directories(self, dir):
results = []
results.append(dir)
for root, subdirs, files in os.walk(dir, followlinks=True):
if '__init__.py' in files:
for x in subdirs:
results.append(os.path.join(root, x))
return results
def _get_package_paths(self, subdirs=True):
''' Gets the path of a Python package '''
if not self.package:
return []
if not hasattr(self, 'package_path'):
m = __import__(self.package)
parts = self.package.split('.')[1:]
for parent_mod in parts:
m = getattr(m, parent_mod)
self.package_path = os.path.dirname(m.__file__)
if subdirs:
return self._all_directories(self.package_path)
return [self.package_path]
def _get_paths(self, subdirs=True):
''' Return a list of paths to search for plugins in '''
# FIXME: This is potentially buggy if subdirs is sometimes True and sometimes False.
# In current usage, everything calls this with subdirs=True except for module_utils_loader and ansible-doc
# which always calls it with subdirs=False. So there currently isn't a problem with this caching.
if self._paths is not None:
return self._paths
ret = self._extra_dirs[:]
# look in any configured plugin paths, allow one level deep for subcategories
if self.config is not None:
for path in self.config:
path = os.path.realpath(os.path.expanduser(path))
if subdirs:
contents = glob.glob("%s/*" % path) + glob.glob("%s/*/*" % path)
for c in contents:
if os.path.isdir(c) and c not in ret:
ret.append(c)
if path not in ret:
ret.append(path)
# look for any plugins installed in the package subtree
# Note package path always gets added last so that every other type of
# path is searched before it.
ret.extend(self._get_package_paths(subdirs=subdirs))
# HACK: because powershell modules are in the same directory
# hierarchy as other modules we have to process them last. This is
# because powershell only works on windows but the other modules work
# anywhere (possibly including windows if the correct language
# interpreter is installed). Since non-powershell modules can have any
# file extension, powershell modules would otherwise be picked up by the generic search.
# The non-hack way to fix this is to have powershell modules be
# a different PluginLoader/ModuleLoader. But that requires changing
# other things too (known thing to change would be PATHS_CACHE,
# PLUGIN_PATHS_CACHE, and MODULE_CACHE. Since those three dicts key
# on the class_name and neither regular modules nor powershell modules
# would have class_names, they would not work as written.
reordered_paths = []
win_dirs = []
for path in ret:
if path.endswith('windows'):
win_dirs.append(path)
else:
reordered_paths.append(path)
reordered_paths.extend(win_dirs)
# cache and return the result
self._paths = reordered_paths
return reordered_paths
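The powershell workaround at the end of `_get_paths` is a stable partition: every path ending in `windows` is moved after the rest, with relative order otherwise preserved. A standalone sketch of just that reordering (the directory names are illustrative):

```python
def reorder_windows_paths(paths):
    """Mirror of the reordering at the end of _get_paths: 'windows' module dirs are searched last."""
    reordered_paths = []
    win_dirs = []
    for path in paths:
        if path.endswith('windows'):
            win_dirs.append(path)
        else:
            reordered_paths.append(path)
    reordered_paths.extend(win_dirs)
    return reordered_paths

paths = ['modules/windows', 'modules/system', 'modules/files']
assert reorder_windows_paths(paths) == ['modules/system', 'modules/files', 'modules/windows']
```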
def _load_config_defs(self, name, module, path):
''' Reads plugin docs to find configuration setting definitions, to push to config manager for later use '''
# plugins w/o class name don't support config
if self.class_name:
type_name = get_plugin_class(self.class_name)
# if type name != 'module_doc_fragment':
if type_name in C.CONFIGURABLE_PLUGINS:
dstring = AnsibleLoader(getattr(module, 'DOCUMENTATION', ''), file_name=path).get_single_data()
if dstring:
add_fragments(dstring, path, fragment_loader=fragment_loader)
if dstring and 'options' in dstring and isinstance(dstring['options'], dict):
C.config.initialize_plugin_configuration_definitions(type_name, name, dstring['options'])
display.debug('Loaded config def from plugin (%s/%s)' % (type_name, name))
def add_directory(self, directory, with_subdir=False):
''' Adds an additional directory to the search path '''
directory = os.path.realpath(directory)
if directory is not None:
if with_subdir:
directory = os.path.join(directory, self.subdir)
if directory not in self._extra_dirs:
# append the directory and invalidate the path cache
self._extra_dirs.append(directory)
self._clear_caches()
display.debug('Added %s to loader search path' % (directory))
def _find_fq_plugin(self, fq_name, extension):
fq_name = to_native(fq_name)
# prefix our extension Python namespace if it isn't already there
if not fq_name.startswith('ansible_collections.'):
fq_name = 'ansible_collections.' + fq_name
splitname = fq_name.rsplit('.', 1)
if len(splitname) != 2:
raise ValueError('{0} is not a valid namespace-qualified plugin name'.format(to_native(fq_name)))
package = splitname[0]
resource = splitname[1]
append_plugin_type = self.subdir.replace('_plugins', '')
if append_plugin_type == 'library':
append_plugin_type = 'modules'
package += '.plugins.{0}'.format(append_plugin_type)
if extension:
resource += extension
pkg = sys.modules.get(package)
if not pkg:
# FIXME: there must be cheaper/safer way to do this
pkg = import_module(package)
# if the package is one of our flatmaps, we need to consult its loader to find the path, since the file could be
# anywhere in the tree
if hasattr(pkg, '__loader__') and isinstance(pkg.__loader__, AnsibleFlatMapLoader):
try:
file_path = pkg.__loader__.find_file(resource)
return to_text(file_path)
except IOError:
# this loader already takes care of extensionless files, so if we didn't find it, just bail
return None
pkg_path = os.path.dirname(pkg.__file__)
resource_path = os.path.join(pkg_path, resource)
# FIXME: and is file or file link or ...
if os.path.exists(resource_path):
return to_text(resource_path)
# look for any matching extension in the package location (sans filter)
ext_blacklist = ['.pyc', '.pyo']
found_files = [f for f in glob.iglob(os.path.join(pkg_path, resource) + '.*') if os.path.isfile(f) and os.path.splitext(f)[1] not in ext_blacklist]
if not found_files:
return None
if len(found_files) > 1:
# TODO: warn?
pass
return to_text(found_files[0])
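The name handling at the top of `_find_fq_plugin` can be sketched in isolation: the `ansible_collections` prefix is added if missing, the name is split into package and resource, and the loader's `subdir` is mapped to the collection's `plugins/<type>` package (with `library` aliased to `modules`). The collection name `myns.mycoll` below is hypothetical:

```python
def split_fq_plugin_name(fq_name, subdir):
    """Mirror of the name handling at the top of _find_fq_plugin."""
    if not fq_name.startswith('ansible_collections.'):
        fq_name = 'ansible_collections.' + fq_name
    splitname = fq_name.rsplit('.', 1)
    if len(splitname) != 2:
        raise ValueError('{0} is not a valid namespace-qualified plugin name'.format(fq_name))
    package, resource = splitname
    append_plugin_type = subdir.replace('_plugins', '')
    if append_plugin_type == 'library':
        append_plugin_type = 'modules'
    package += '.plugins.{0}'.format(append_plugin_type)
    return package, resource

package, resource = split_fq_plugin_name('myns.mycoll.ping', 'library')
assert package == 'ansible_collections.myns.mycoll.plugins.modules'
assert resource == 'ping'
```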
def _find_plugin(self, name, mod_type='', ignore_deprecated=False, check_aliases=False, collection_list=None):
''' Find a plugin named name '''
global _PLUGIN_FILTERS
if name in _PLUGIN_FILTERS[self.package]:
return None
if mod_type:
suffix = mod_type
elif self.class_name:
# Ansible plugins that run in the controller process (most plugins)
suffix = '.py'
else:
# Only Ansible Modules. Ansible modules can be any executable so
# they can have any suffix
suffix = ''
# HACK: need this right now so we can still load shipped PS module_utils
if (is_collection_ref(name) or collection_list) and not name.startswith('Ansible'):
if '.' in name or not collection_list:
candidates = [name]
else:
candidates = ['{0}.{1}'.format(c, name) for c in collection_list]
# TODO: keep actual errors, not just assembled messages
errors = []
for candidate_name in candidates:
try:
# HACK: refactor this properly
if candidate_name.startswith('ansible.legacy'):
# just pass the raw name to the old lookup function to check in all the usual locations
p = self._find_plugin_legacy(name.replace('ansible.legacy.', '', 1), ignore_deprecated, check_aliases, suffix)
else:
p = self._find_fq_plugin(candidate_name, suffix)
if p:
return p
except Exception as ex:
errors.append(to_native(ex))
if errors:
display.debug(msg='plugin lookup for {0} failed; errors: {1}'.format(name, '; '.join(errors)))
return None
# if we got here, there's no collection list and it's not an FQ name, so do legacy lookup
return self._find_plugin_legacy(name, ignore_deprecated, check_aliases, suffix)
def _find_plugin_legacy(self, name, ignore_deprecated=False, check_aliases=False, suffix=None):
if check_aliases:
name = self.aliases.get(name, name)
# The particular cache to look for modules within. This matches the
# requested mod_type
pull_cache = self._plugin_path_cache[suffix]
try:
return pull_cache[name]
except KeyError:
# Cache miss. Now let's find the plugin
pass
# TODO: Instead of using the self._paths cache (PATH_CACHE) and
# self._searched_paths we could use an iterator. Before enabling that
# we need to make sure we don't want to add additional directories
# (add_directory()) once we start using the iterator. Currently, it
# looks like _get_paths() never forces a cache refresh so if we expect
# additional directories to be added later, it is buggy.
for path in (p for p in self._get_paths() if p not in self._searched_paths and os.path.isdir(p)):
display.debug('trying %s' % path)
try:
full_paths = (os.path.join(path, f) for f in os.listdir(path))
except OSError as e:
display.warning("Error accessing plugin paths: %s" % to_text(e))
continue  # full_paths is undefined when listdir fails, so skip this directory
for full_path in (f for f in full_paths if os.path.isfile(f) and not f.endswith('__init__.py')):
full_name = os.path.basename(full_path)
# HACK: We have no way of executing python byte compiled files as ansible modules so specifically exclude them
# FIXME: I believe this is only correct for modules and module_utils.
# For all other plugins we want .pyc and .pyo should be valid
if any(full_path.endswith(x) for x in C.BLACKLIST_EXTS):
continue
splitname = os.path.splitext(full_name)
base_name = splitname[0]
try:
extension = splitname[1]
except IndexError:
extension = ''
# Module found, now enter it into the caches that match this file
if base_name not in self._plugin_path_cache['']:
self._plugin_path_cache[''][base_name] = full_path
if full_name not in self._plugin_path_cache['']:
self._plugin_path_cache[''][full_name] = full_path
if base_name not in self._plugin_path_cache[extension]:
self._plugin_path_cache[extension][base_name] = full_path
if full_name not in self._plugin_path_cache[extension]:
self._plugin_path_cache[extension][full_name] = full_path
self._searched_paths.add(path)
try:
return pull_cache[name]
except KeyError:
# Didn't find the plugin in this directory. Load modules from the next one
pass
# if nothing is found, try finding alias/deprecated
if not name.startswith('_'):
alias_name = '_' + name
# We've already cached all the paths at this point
if alias_name in pull_cache:
if not ignore_deprecated and not os.path.islink(pull_cache[alias_name]):
# FIXME: this is not always the case, some are just aliases
display.deprecated('%s is kept for backwards compatibility but usage is discouraged. ' # pylint: disable=ansible-deprecated-no-version
'The module documentation details page may explain more about this rationale.' % name.lstrip('_'))
return pull_cache[alias_name]
return None
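The two-level cache populated above can be illustrated in isolation. This is a hedged sketch with made-up paths, not the loader's real API: each discovered file is indexed under both its base name and its full file name, once in the generic `''` bucket and once in its own extension bucket.

```python
from collections import defaultdict
import os

# Hypothetical cache mirroring _plugin_path_cache's shape
plugin_path_cache = defaultdict(dict)

full_path = '/fake/plugins/ping.py'           # illustrative path only
full_name = os.path.basename(full_path)       # 'ping.py'
base_name, extension = os.path.splitext(full_name)

# Index by both keys, in both the '' bucket and the extension bucket,
# never overwriting an earlier (higher-precedence) entry
for bucket in ('', extension):
    plugin_path_cache[bucket].setdefault(base_name, full_path)
    plugin_path_cache[bucket].setdefault(full_name, full_path)

assert plugin_path_cache['']['ping'] == full_path
assert plugin_path_cache['.py']['ping.py'] == full_path
```

Using `setdefault` captures the "first path found wins" precedence the loader's `if ... not in` checks implement.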
def find_plugin(self, name, mod_type='', ignore_deprecated=False, check_aliases=False, collection_list=None):
''' Find a plugin named name '''
# Import here to avoid circular import
from ansible.vars.reserved import is_reserved_name
plugin = self._find_plugin(name, mod_type=mod_type, ignore_deprecated=ignore_deprecated, check_aliases=check_aliases, collection_list=collection_list)
if plugin and self.package == 'ansible.modules' and name not in ('gather_facts',) and is_reserved_name(name):
raise AnsibleError(
'Module "%s" shadows the name of a reserved keyword. Please rename or remove this module. Found at %s' % (name, plugin)
)
return plugin
def has_plugin(self, name, collection_list=None):
''' Checks if a plugin named name exists '''
try:
return self.find_plugin(name, collection_list=collection_list) is not None
except Exception as ex:
if isinstance(ex, AnsibleError):
raise
# log and continue, likely an innocuous type/package loading failure in collections import
display.debug('has_plugin error: {0}'.format(to_text(ex)))
__contains__ = has_plugin
def _load_module_source(self, name, path):
# avoid collisions across plugins
full_name = '.'.join([self.package, name])
if full_name in sys.modules:
# Avoids double loading, See https://github.com/ansible/ansible/issues/13110
return sys.modules[full_name]
with warnings.catch_warnings():
warnings.simplefilter("ignore", RuntimeWarning)
if imp is None:
spec = importlib.util.spec_from_file_location(to_native(full_name), to_native(path))
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
sys.modules[full_name] = module
else:
with open(to_bytes(path), 'rb') as module_file:
# to_native is used here because imp.load_source's path is for tracebacks and python's traceback formatting uses native strings
module = imp.load_source(to_native(full_name), to_native(path), module_file)
return module
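The importlib branch of `_load_module_source` boils down to: load a file as a module under a namespaced name and register it in `sys.modules`, so repeated loads return the same object. A self-contained sketch (hypothetical `load_source` helper, Python 3 only):

```python
import importlib.util
import os
import sys
import tempfile

def load_source(full_name, path):
    """Load the Python file at `path` as module `full_name`, caching it."""
    if full_name in sys.modules:
        # avoid double loading, same intent as the sys.modules check above
        return sys.modules[full_name]
    spec = importlib.util.spec_from_file_location(full_name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    sys.modules[full_name] = module
    return module

# usage: write a tiny module to disk and load it twice
with tempfile.TemporaryDirectory() as d:
    p = os.path.join(d, 'demo.py')
    with open(p, 'w') as f:
        f.write('ANSWER = 42\n')
    m1 = load_source('demo_ns.demo', p)
    m2 = load_source('demo_ns.demo', p)
    assert m1.ANSWER == 42 and m1 is m2
```

The namespaced `full_name` (package prefix plus plugin name) is what prevents collisions between plugins of different types that share a file name.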
def _update_object(self, obj, name, path):
# set extra info on the module, in case we want it later
setattr(obj, '_original_path', path)
setattr(obj, '_load_name', name)
def get(self, name, *args, **kwargs):
''' instantiates a plugin of the given name using arguments '''
found_in_cache = True
class_only = kwargs.pop('class_only', False)
collection_list = kwargs.pop('collection_list', None)
if name in self.aliases:
name = self.aliases[name]
path = self.find_plugin(name, collection_list=collection_list)
if path is None:
return None
if path not in self._module_cache:
self._module_cache[path] = self._load_module_source(name, path)
self._load_config_defs(name, self._module_cache[path], path)
found_in_cache = False
obj = getattr(self._module_cache[path], self.class_name)
if self.base_class:
# The import path is hardcoded and should be the right place,
# so we are not expecting an ImportError.
module = __import__(self.package, fromlist=[self.base_class])
# Check whether this obj has the required base class.
try:
plugin_class = getattr(module, self.base_class)
except AttributeError:
return None
if not issubclass(obj, plugin_class):
return None
self._display_plugin_load(self.class_name, name, self._searched_paths, path, found_in_cache=found_in_cache, class_only=class_only)
if not class_only:
try:
obj = obj(*args, **kwargs)
except TypeError as e:
if "abstract" in e.args[0]:
# Abstract Base Class. The found plugin file does not
# fully implement the defined interface.
return None
raise
self._update_object(obj, name, path)
return obj
def _display_plugin_load(self, class_name, name, searched_paths, path, found_in_cache=None, class_only=None):
''' formats data to display debug info for plugin loading, also avoids processing unless really needed '''
if C.DEFAULT_DEBUG:
msg = 'Loading %s \'%s\' from %s' % (class_name, os.path.basename(name), path)
if len(searched_paths) > 1:
msg = '%s (searched paths: %s)' % (msg, self.format_paths(searched_paths))
if found_in_cache or class_only:
msg = '%s (found_in_cache=%s, class_only=%s)' % (msg, found_in_cache, class_only)
display.debug(msg)
def all(self, *args, **kwargs):
'''
Iterate through all plugins of this type
A plugin loader is initialized with a specific type. This function is an iterator returning
all of the plugins of that type to the caller.
:kwarg path_only: If this is set to True, then we return the paths to where the plugins reside
instead of an instance of the plugin. This conflicts with class_only and both should
not be set.
:kwarg class_only: If this is set to True then we return the python class which implements
a plugin rather than an instance of the plugin. This conflicts with path_only and both
should not be set.
:kwarg _dedupe: By default, we only return one plugin per plugin name. Deduplication happens
in the same way as the :meth:`get` and :meth:`find_plugin` methods resolve which plugin
should take precedence. If this is set to False, then we return all of the plugins
found, including those with duplicate names. In the case of duplicates, the order in
which they are returned is the one that would take precedence first, followed by the
others in decreasing precedence order. This should only be used by subclasses which
want to manage their own deduplication of the plugins.
:*args: Any extra arguments are passed to each plugin when it is instantiated.
:**kwargs: Any extra keyword arguments are passed to each plugin when it is instantiated.
'''
# TODO: Change the signature of this method to:
# def all(return_type='instance', args=None, kwargs=None):
# if args is None: args = []
# if kwargs is None: kwargs = {}
# return_type can be instance, class, or path.
# These changes will mean that plugin parameters won't conflict with our params and
# will also make it impossible to request both a path and a class at the same time.
#
# Move _dedupe to be a class attribute, CUSTOM_DEDUPE, with subclasses for filters and
# tests setting it to True
global _PLUGIN_FILTERS
dedupe = kwargs.pop('_dedupe', True)
path_only = kwargs.pop('path_only', False)
class_only = kwargs.pop('class_only', False)
# Having both path_only and class_only is a coding bug
if path_only and class_only:
raise AnsibleError('Do not set both path_only and class_only when calling PluginLoader.all()')
all_matches = []
found_in_cache = True
for i in self._get_paths():
all_matches.extend(glob.glob(os.path.join(i, "*.py")))
loaded_modules = set()
for path in sorted(all_matches, key=os.path.basename):
name = os.path.splitext(path)[0]
basename = os.path.basename(name)
if basename == '__init__' or basename in _PLUGIN_FILTERS[self.package]:
continue
if dedupe and basename in loaded_modules:
continue
loaded_modules.add(basename)
if path_only:
yield path
continue
if path not in self._module_cache:
try:
module = self._load_module_source(name, path)
self._load_config_defs(basename, module, path)
except Exception as e:
display.warning("Skipping plugin (%s) as it seems to be invalid: %s" % (path, to_text(e)))
continue
self._module_cache[path] = module
found_in_cache = False
try:
obj = getattr(self._module_cache[path], self.class_name)
except AttributeError as e:
display.warning("Skipping plugin (%s) as it seems to be invalid: %s" % (path, to_text(e)))
continue
if self.base_class:
# The import path is hardcoded and should be the right place,
# so we are not expecting an ImportError.
module = __import__(self.package, fromlist=[self.base_class])
# Check whether this obj has the required base class.
try:
plugin_class = getattr(module, self.base_class)
except AttributeError:
continue
if not issubclass(obj, plugin_class):
continue
self._display_plugin_load(self.class_name, basename, self._searched_paths, path, found_in_cache=found_in_cache, class_only=class_only)
if not class_only:
try:
obj = obj(*args, **kwargs)
except TypeError as e:
display.warning("Skipping plugin (%s) as it seems to be incomplete: %s" % (path, to_text(e)))
continue
self._update_object(obj, basename, path)
yield obj
class Jinja2Loader(PluginLoader):
"""
PluginLoader optimized for Jinja2 plugins
The filter and test plugins are Jinja2 plugins encapsulated inside of our plugin format.
The way the calling code is setup, we need to do a few things differently in the all() method
"""
def find_plugin(self, name, collection_list=None):
# Nothing using Jinja2Loader uses this method. We can't use the base class version
# because we deduplicate differently than the base class.
if '.' in name:
return super(Jinja2Loader, self).find_plugin(name, collection_list=collection_list)
raise AnsibleError('No code should call find_plugin for Jinja2Loaders (Not implemented)')
def get(self, name, *args, **kwargs):
# Nothing using Jinja2Loader uses this method. We can't use the base class version
# because we deduplicate differently than the base class.
if '.' in name:
return super(Jinja2Loader, self).get(name, *args, **kwargs)
raise AnsibleError('No code should call get for Jinja2Loaders (Not implemented)')
def all(self, *args, **kwargs):
"""
Differences with :meth:`PluginLoader.all`:
* We do not deduplicate ansible plugin names. This is because we don't care about our
plugin names, here. We care about the names of the actual jinja2 plugins which are inside
of our plugins.
* We reverse the order of the list of plugins compared to other PluginLoaders. This is
because of how calling code chooses to sync the plugins from the list. It adds all the
Jinja2 plugins from one of our Ansible plugins into a dict. Then it adds the Jinja2
plugins from the next Ansible plugin, overwriting any Jinja2 plugins that had the same
name. This is an encapsulation violation (the PluginLoader should not know about what
calling code does with the data) but we're pushing the common code here. We'll fix
this in the future by moving more of the common code into this PluginLoader.
* We return a list. We could iterate the list instead but that's extra work for no gain because
the API receiving this doesn't care. It just needs an iterable
"""
# We don't deduplicate ansible plugin names. Instead, calling code deduplicates jinja2
# plugin names.
kwargs['_dedupe'] = False
# We have to instantiate a list of all plugins so that we can reverse it. We reverse it so
# that calling code will deduplicate this correctly.
plugins = list(super(Jinja2Loader, self).all(*args, **kwargs))
plugins.reverse()
return plugins
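The reversal above only makes sense together with how the calling code merges plugins: each Ansible plugin's Jinja2 plugins are folded into one dict, and later entries overwrite earlier ones. A minimal sketch of that dict-merge deduplication (illustrative plugin names, not real data):

```python
# Plugins listed in decreasing precedence order, as all() yields them
plugins_by_precedence = [
    {'to_json': 'builtin_to_json'},      # highest precedence
    {'to_json': 'third_party_to_json'},  # lower precedence, same jinja2 name
]

# Calling code merges in reversed order, so the highest-precedence plugin
# is written last and therefore wins the dict-update "deduplication".
merged = {}
for jinja2_plugins in reversed(plugins_by_precedence):
    merged.update(jinja2_plugins)

assert merged == {'to_json': 'builtin_to_json'}
```

This is why `Jinja2Loader.all()` hands back a reversed list: last-write-wins in the consumer's dict re-establishes the loader's precedence order.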
def _load_plugin_filter():
filters = defaultdict(frozenset)
user_set = False
if C.PLUGIN_FILTERS_CFG is None:
filter_cfg = '/etc/ansible/plugin_filters.yml'
else:
filter_cfg = C.PLUGIN_FILTERS_CFG
user_set = True
if os.path.exists(filter_cfg):
with open(filter_cfg, 'rb') as f:
try:
filter_data = from_yaml(f.read())
except Exception as e:
display.warning(u'The plugin filter file, {0} was not parsable.'
u' Skipping: {1}'.format(filter_cfg, to_text(e)))
return filters
try:
version = filter_data['filter_version']
except KeyError:
display.warning(u'The plugin filter file, {0} was invalid.'
u' Skipping.'.format(filter_cfg))
return filters
# Try to convert for people specifying version as a float instead of string
version = to_text(version)
version = version.strip()
if version == u'1.0':
# Modules and action plugins share the same blacklist since the difference between the
# two isn't visible to the users
try:
filters['ansible.modules'] = frozenset(filter_data['module_blacklist'])
except TypeError:
display.warning(u'Unable to parse the plugin filter file {0} as'
u' module_blacklist is not a list.'
u' Skipping.'.format(filter_cfg))
return filters
filters['ansible.plugins.action'] = filters['ansible.modules']
else:
display.warning(u'The plugin filter file, {0} was a version not recognized by this'
u' version of Ansible. Skipping.'.format(filter_cfg))
else:
if user_set:
display.warning(u'The plugin filter file, {0} does not exist.'
u' Skipping.'.format(filter_cfg))
# Special-case the stat module, as Ansible can run very few things if stat is blacklisted.
if 'stat' in filters['ansible.modules']:
raise AnsibleError('The stat module was specified in the module blacklist file, {0}, but'
' Ansible will not function without the stat module. Please remove stat'
' from the blacklist.'.format(to_native(filter_cfg)))
return filters
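The version-gated parsing in `_load_plugin_filter` reduces to a few lines once the YAML is already a dict. A hedged sketch (hypothetical `load_filter` helper operating on parsed data, warnings and the stat guard omitted):

```python
from collections import defaultdict

def load_filter(filter_data):
    """Sketch of _load_plugin_filter()'s version-1.0 handling."""
    filters = defaultdict(frozenset)
    # tolerate a float filter_version (e.g. YAML 1.0 instead of '1.0')
    version = str(filter_data.get('filter_version', '')).strip()
    if version == '1.0':
        filters['ansible.modules'] = frozenset(filter_data['module_blacklist'])
        # modules and action plugins share the same blacklist
        filters['ansible.plugins.action'] = filters['ansible.modules']
    return filters

f = load_filter({'filter_version': 1.0, 'module_blacklist': ['ping']})
assert 'ping' in f['ansible.modules']
assert f['ansible.plugins.action'] == f['ansible.modules']
```

The `defaultdict(frozenset)` means lookups for packages with no filter configured harmlessly return an empty set.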
def _configure_collection_loader():
if not any((isinstance(l, AnsibleCollectionLoader) for l in sys.meta_path)):
sys.meta_path.insert(0, AnsibleCollectionLoader())
# TODO: All of the following is initialization code. It should be moved inside of an
# initialization function which is called at some point early in the ansible and
# ansible-playbook CLI startup.
_PLUGIN_FILTERS = _load_plugin_filter()
_configure_collection_loader()
# doc fragments first
fragment_loader = PluginLoader(
'ModuleDocFragment',
'ansible.plugins.doc_fragments',
C.DOC_FRAGMENT_PLUGIN_PATH,
'doc_fragments',
)
action_loader = PluginLoader(
'ActionModule',
'ansible.plugins.action',
C.DEFAULT_ACTION_PLUGIN_PATH,
'action_plugins',
required_base_class='ActionBase',
)
cache_loader = PluginLoader(
'CacheModule',
'ansible.plugins.cache',
C.DEFAULT_CACHE_PLUGIN_PATH,
'cache_plugins',
)
callback_loader = PluginLoader(
'CallbackModule',
'ansible.plugins.callback',
C.DEFAULT_CALLBACK_PLUGIN_PATH,
'callback_plugins',
)
connection_loader = PluginLoader(
'Connection',
'ansible.plugins.connection',
C.DEFAULT_CONNECTION_PLUGIN_PATH,
'connection_plugins',
aliases={'paramiko': 'paramiko_ssh'},
required_base_class='ConnectionBase',
)
shell_loader = PluginLoader(
'ShellModule',
'ansible.plugins.shell',
'shell_plugins',
'shell_plugins',
)
module_loader = PluginLoader(
'',
'ansible.modules',
C.DEFAULT_MODULE_PATH,
'library',
)
module_utils_loader = PluginLoader(
'',
'ansible.module_utils',
C.DEFAULT_MODULE_UTILS_PATH,
'module_utils',
)
# NB: dedicated loader is currently necessary because PS module_utils expects "with subdir" lookup where
# regular module_utils doesn't. This can be revisited once we have more granular loaders.
ps_module_utils_loader = PluginLoader(
'',
'ansible.module_utils',
C.DEFAULT_MODULE_UTILS_PATH,
'module_utils',
)
lookup_loader = PluginLoader(
'LookupModule',
'ansible.plugins.lookup',
C.DEFAULT_LOOKUP_PLUGIN_PATH,
'lookup_plugins',
required_base_class='LookupBase',
)
filter_loader = Jinja2Loader(
'FilterModule',
'ansible.plugins.filter',
C.DEFAULT_FILTER_PLUGIN_PATH,
'filter_plugins',
)
test_loader = Jinja2Loader(
'TestModule',
'ansible.plugins.test',
C.DEFAULT_TEST_PLUGIN_PATH,
'test_plugins'
)
strategy_loader = PluginLoader(
'StrategyModule',
'ansible.plugins.strategy',
C.DEFAULT_STRATEGY_PLUGIN_PATH,
'strategy_plugins',
required_base_class='StrategyBase',
)
terminal_loader = PluginLoader(
'TerminalModule',
'ansible.plugins.terminal',
C.DEFAULT_TERMINAL_PLUGIN_PATH,
'terminal_plugins',
required_base_class='TerminalBase'
)
vars_loader = PluginLoader(
'VarsModule',
'ansible.plugins.vars',
C.DEFAULT_VARS_PLUGIN_PATH,
'vars_plugins',
)
cliconf_loader = PluginLoader(
'Cliconf',
'ansible.plugins.cliconf',
C.DEFAULT_CLICONF_PLUGIN_PATH,
'cliconf_plugins',
required_base_class='CliconfBase'
)
netconf_loader = PluginLoader(
'Netconf',
'ansible.plugins.netconf',
C.DEFAULT_NETCONF_PLUGIN_PATH,
'netconf_plugins',
required_base_class='NetconfBase'
)
inventory_loader = PluginLoader(
'InventoryModule',
'ansible.plugins.inventory',
C.DEFAULT_INVENTORY_PLUGIN_PATH,
'inventory_plugins'
)
httpapi_loader = PluginLoader(
'HttpApi',
'ansible.plugins.httpapi',
C.DEFAULT_HTTPAPI_PLUGIN_PATH,
'httpapi_plugins',
required_base_class='HttpApiBase',
)
become_loader = PluginLoader(
'BecomeModule',
'ansible.plugins.become',
C.BECOME_PLUGIN_PATH,
'become_plugins'
)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,465 |
Relative Python import support in collections
|
##### SUMMARY
2.9/devel currently supports relative Python imports inside a collection for all but modules/module_utils. Extend the AnsiballZ analysis/bundling to support relative imports for those as well.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
module_common.py
##### ADDITIONAL INFORMATION
This is needed to minimize the number of places a collection must refer to itself by name. Also a convenience for shorter intra-collection Python imports (eg `from ..module_utils.blah import AThing` vs. `from ansible_collections.myns.mycoll.plugins.module_utils.blah import AThing`)
|
https://github.com/ansible/ansible/issues/59465
|
https://github.com/ansible/ansible/pull/59950
|
d3624cf4a41591d546da1d4340123e70bf5984e6
|
3777c2e93df23ac6bf59de6196c28a756aed7c50
| 2019-07-23T18:12:01Z |
python
| 2019-08-09T06:23:12Z |
test/integration/targets/plugin_namespace/aliases
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,465 |
Relative Python import support in collections
|
##### SUMMARY
2.9/devel currently supports relative Python imports inside a collection for all but modules/module_utils. Extend the AnsiballZ analysis/bundling to support relative imports for those as well.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
module_common.py
##### ADDITIONAL INFORMATION
This is needed to minimize the number of places a collection must refer to itself by name. Also a convenience for shorter intra-collection Python imports (eg `from ..module_utils.blah import AThing` vs. `from ansible_collections.myns.mycoll.plugins.module_utils.blah import AThing`)
|
https://github.com/ansible/ansible/issues/59465
|
https://github.com/ansible/ansible/pull/59950
|
d3624cf4a41591d546da1d4340123e70bf5984e6
|
3777c2e93df23ac6bf59de6196c28a756aed7c50
| 2019-07-23T18:12:01Z |
python
| 2019-08-09T06:23:12Z |
test/integration/targets/plugin_namespace/filter_plugins/test_filter.py
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,465 |
Relative Python import support in collections
|
##### SUMMARY
2.9/devel currently supports relative Python imports inside a collection for all but modules/module_utils. Extend the AnsiballZ analysis/bundling to support relative imports for those as well.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
module_common.py
##### ADDITIONAL INFORMATION
This is needed to minimize the number of places a collection must refer to itself by name. Also a convenience for shorter intra-collection Python imports (eg `from ..module_utils.blah import AThing` vs. `from ansible_collections.myns.mycoll.plugins.module_utils.blah import AThing`)
|
https://github.com/ansible/ansible/issues/59465
|
https://github.com/ansible/ansible/pull/59950
|
d3624cf4a41591d546da1d4340123e70bf5984e6
|
3777c2e93df23ac6bf59de6196c28a756aed7c50
| 2019-07-23T18:12:01Z |
python
| 2019-08-09T06:23:12Z |
test/integration/targets/plugin_namespace/lookup_plugins/lookup_name.py
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,465 |
Relative Python import support in collections
|
##### SUMMARY
2.9/devel currently supports relative Python imports inside a collection for all but modules/module_utils. Extend the AnsiballZ analysis/bundling to support relative imports for those as well.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
module_common.py
##### ADDITIONAL INFORMATION
This is needed to minimize the number of places a collection must refer to itself by name. Also a convenience for shorter intra-collection Python imports (eg `from ..module_utils.blah import AThing` vs. `from ansible_collections.myns.mycoll.plugins.module_utils.blah import AThing`)
|
https://github.com/ansible/ansible/issues/59465
|
https://github.com/ansible/ansible/pull/59950
|
d3624cf4a41591d546da1d4340123e70bf5984e6
|
3777c2e93df23ac6bf59de6196c28a756aed7c50
| 2019-07-23T18:12:01Z |
python
| 2019-08-09T06:23:12Z |
test/integration/targets/plugin_namespace/tasks/main.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,465 |
Relative Python import support in collections
|
##### SUMMARY
2.9/devel currently supports relative Python imports inside a collection for all but modules/module_utils. Extend the AnsiballZ analysis/bundling to support relative imports for those as well.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
module_common.py
##### ADDITIONAL INFORMATION
This is needed to minimize the number of places a collection must refer to itself by name. Also a convenience for shorter intra-collection Python imports (eg `from ..module_utils.blah import AThing` vs. `from ansible_collections.myns.mycoll.plugins.module_utils.blah import AThing`)
|
https://github.com/ansible/ansible/issues/59465
|
https://github.com/ansible/ansible/pull/59950
|
d3624cf4a41591d546da1d4340123e70bf5984e6
|
3777c2e93df23ac6bf59de6196c28a756aed7c50
| 2019-07-23T18:12:01Z |
python
| 2019-08-09T06:23:12Z |
test/integration/targets/plugin_namespace/test_plugins/test_test.py
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 60,068 |
win_dns_record incorrectly creates PTR records
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When specifying a PTR record, the module incorrectly creates the entry.
- name: Create PTR Record
win_dns_record:
computer_name: "msdnsserver.example.com"
name: "10.202.64.62"
state: present
type: "PTR"
value: "awx_test.example.com"
zone: "10.in-addr.arpa"
Entry created:
10.62.64.202.10 Pointer (PTR) awx_test.example.com
To correctly create the record I used the following format:
- name: Create PTR Record
win_dns_record:
computer_name: "msdnsserver.example.com"
name: "62.64.202"
state: present
type: "PTR"
value: "awx_test.example.com"
zone: "10.in-addr.arpa"
which created record:
10.202.64.62 Pointer (PTR) awx_test.example.com
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
win_dns_record
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
# ansible --version
ansible 2.8.3
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /bin/ansible
python version = 2.7.5 (default, Jun 11 2019, 12:19:05) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
# ansible-config dump --only-changed
ANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True
DEFAULT_LOAD_CALLBACK_PLUGINS(/etc/ansible/ansible.cfg) = True
DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /var/log/ansible.log
SHOW_CUSTOM_STATS(/etc/ansible/ansible.cfg) = True
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Red Hat Enterprise Linux Server release 7.6 (Maipo)
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Create PTR Record
win_dns_record:
computer_name: "msdnsserver.example.com"
name: "10.202.64.62"
state: present
type: "PTR"
value: "awx_test.example.com"
zone: "10.in-addr.arpa"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
10.202.64.62 Pointer (PTR) awx_test.example.com
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
10.62.64.202.10 Pointer (PTR) awx_test.example.com
```
|
https://github.com/ansible/ansible/issues/60068
|
https://github.com/ansible/ansible/pull/60158
|
6c09b5c65989b6c637fd58a8a892619f60906d49
|
b5f42869dc20a6a5c2fafcb4b3df15028e1de7b1
| 2019-08-05T13:04:30Z |
python
| 2019-08-09T16:04:09Z |
lib/ansible/modules/windows/win_dns_record.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Hitachi ID Systems, Inc.
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# This is a windows documentation stub. The actual code lives in the .ps1
# file of the same name.
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: win_dns_record
version_added: "2.8"
short_description: Manage Windows Server DNS records
description:
- Manage DNS records within an existing Windows Server DNS zone.
author: John Nelson (@johnboy2)
requirements:
- This module requires Windows 8, Server 2012, or newer.
options:
name:
description:
- The name of the record.
required: yes
type: str
state:
description:
- Whether the record should exist or not.
choices: [ absent, present ]
default: present
type: str
ttl:
description:
- The "time to live" of the record, in seconds.
- Ignored when C(state=absent).
- Valid range is 1 - 31557600.
- Note that an Active Directory forest can specify a minimum TTL, and will
dynamically "round up" other values to that minimum.
default: 3600
type: int
type:
description:
- The type of DNS record to manage.
choices: [ A, AAAA, CNAME, PTR ]
required: yes
type: str
value:
description:
- The value(s) to specify. Required when C(state=present).
aliases: [ values ]
type: list
zone:
description:
- The name of the zone to manage (eg C(example.com)).
- The zone must already exist.
required: yes
type: str
computer_name:
description:
- Specifies a DNS server.
- You can specify an IP address or any value that resolves to an IP
address, such as a fully qualified domain name (FQDN), host name, or
NETBIOS name.
type: str
'''
EXAMPLES = r'''
- name: Create database server alias
win_dns_record:
name: "db1"
type: "CNAME"
value: "cgyl1404p.amer.example.com"
zone: "amer.example.com"
- name: Remove static record
win_dns_record:
name: "db1"
type: "A"
state: absent
zone: "amer.example.com"
'''
RETURN = r'''
'''
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 60,129 |
VMware: get_vm() fails with KeyError if uuid is not in the module param
|
##### SUMMARY
The get_vm() method in the vmware.py module utility fails with KeyError if uuid is not in the params of the module from which the method is invoked
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware.py module util
##### ANSIBLE VERSION
```
ansible 2.9.0.dev0
```
##### OS / ENVIRONMENT
Ubuntu 16
##### STEPS TO REPRODUCE
Creating new module. 'uuid' is not in my module arguments.
Called get_vm()
pyv = PyVmomi(module=module)
vm = pyv.get_vm()
##### EXPECTED RESULTS
Call should return vm_id if it exists
##### ACTUAL RESULTS
```
Get error
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: KeyError: 'uuid'
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File
\"/home/vmware/.ansible/tmp/ansible-tmp-1565095783.75439-259726275713856/AnsiballZ_vmware_content_deploy_template.py\", line 125, in <module>\n _ansiballz_main()\n File \"/home/vmware/.ansible/tmp/ansible-tmp-1565095783.75439-259726275713856/AnsiballZ_vmware_content_deploy_template.py\", line 117, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/home/vmware/.ansible/tmp/ansible-tmp-1565095783.75439-259726275713856/AnsiballZ_vmware_content_deploy_template.py\", line 51, in invoke_module\n spec.loader.exec_module(module)\n File \"<frozen importlib._bootstrap_external>\", line 665, in exec_module\n File \"<frozen importlib._bootstrap>\", line 222, in _call_with_frames_removed\n File \"/tmp/ansible_vmware_content_deploy_template_payload_69cftg21/__main__.py\", line 271, in <module>\n File \"/tmp/ansible_vmware_content_deploy_template_payload_69cftg21/__main__.py\", line 241, in main\n File \"/tmp/ansible_vmware_content_deploy_template_payload_69cftg21/ansible_vmware_content_deploy_template_payload.zip/ansible/module_utils/vmware.py\", line 891,
in get_vm\nKeyError: 'uuid'\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
```
|
https://github.com/ansible/ansible/issues/60129
|
https://github.com/ansible/ansible/pull/60204
|
8cbfa75038a7657c5c91bdf2fc59380dc463a1d3
|
0a90ec90c069fc7bbb06864711a6cafb699aef85
| 2019-08-06T12:36:30Z |
python
| 2019-08-12T08:33:13Z |
changelogs/fragments/60204-handle-KeyError-in-get_vm-API.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 60,129 |
VMware: get_vm() fails with KeyError if uuid is not in the module param
|
##### SUMMARY
The get_vm() method in the vmware.py module utility fails with KeyError if uuid is not in the params of the module from which the method is invoked
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware.py module util
##### ANSIBLE VERSION
```
ansible 2.9.0.dev0
```
##### OS / ENVIRONMENT
Ubuntu 16
##### STEPS TO REPRODUCE
Creating new module. 'uuid' is not in my module arguments.
Called get_vm()
pyv = PyVmomi(module=module)
vm = pyv.get_vm()
##### EXPECTED RESULTS
Call should return vm_id if it exists
##### ACTUAL RESULTS
```
Get error
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: KeyError: 'uuid'
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File
\"/home/vmware/.ansible/tmp/ansible-tmp-1565095783.75439-259726275713856/AnsiballZ_vmware_content_deploy_template.py\", line 125, in <module>\n _ansiballz_main()\n File \"/home/vmware/.ansible/tmp/ansible-tmp-1565095783.75439-259726275713856/AnsiballZ_vmware_content_deploy_template.py\", line 117, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/home/vmware/.ansible/tmp/ansible-tmp-1565095783.75439-259726275713856/AnsiballZ_vmware_content_deploy_template.py\", line 51, in invoke_module\n spec.loader.exec_module(module)\n File \"<frozen importlib._bootstrap_external>\", line 665, in exec_module\n File \"<frozen importlib._bootstrap>\", line 222, in _call_with_frames_removed\n File \"/tmp/ansible_vmware_content_deploy_template_payload_69cftg21/__main__.py\", line 271, in <module>\n File \"/tmp/ansible_vmware_content_deploy_template_payload_69cftg21/__main__.py\", line 241, in main\n File \"/tmp/ansible_vmware_content_deploy_template_payload_69cftg21/ansible_vmware_content_deploy_template_payload.zip/ansible/module_utils/vmware.py\", line 891,
in get_vm\nKeyError: 'uuid'\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
```
|
https://github.com/ansible/ansible/issues/60129
|
https://github.com/ansible/ansible/pull/60204
|
8cbfa75038a7657c5c91bdf2fc59380dc463a1d3
|
0a90ec90c069fc7bbb06864711a6cafb699aef85
| 2019-08-06T12:36:30Z |
python
| 2019-08-12T08:33:13Z |
lib/ansible/module_utils/vmware.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2015, Joseph Callen <jcallen () csc.com>
# Copyright: (c) 2018, Ansible Project
# Copyright: (c) 2018, James E. King III (@jeking3) <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import atexit
import ansible.module_utils.common._collections_compat as collections_compat
import json
import os
import re
import ssl
import time
import traceback
from random import randint
from distutils.version import StrictVersion
REQUESTS_IMP_ERR = None
try:
# requests is required for exception handling of the ConnectionError
import requests
HAS_REQUESTS = True
except ImportError:
REQUESTS_IMP_ERR = traceback.format_exc()
HAS_REQUESTS = False
PYVMOMI_IMP_ERR = None
try:
from pyVim import connect
from pyVmomi import vim, vmodl, VmomiSupport
HAS_PYVMOMI = True
HAS_PYVMOMIJSON = hasattr(VmomiSupport, 'VmomiJSONEncoder')
except ImportError:
PYVMOMI_IMP_ERR = traceback.format_exc()
HAS_PYVMOMI = False
HAS_PYVMOMIJSON = False
from ansible.module_utils._text import to_text, to_native
from ansible.module_utils.six import integer_types, iteritems, string_types, raise_from
from ansible.module_utils.six.moves.urllib.parse import urlparse
from ansible.module_utils.basic import env_fallback, missing_required_lib
from ansible.module_utils.urls import generic_urlparse
class TaskError(Exception):
def __init__(self, *args, **kwargs):
super(TaskError, self).__init__(*args, **kwargs)
def wait_for_task(task, max_backoff=64, timeout=3600):
"""Wait for given task using exponential back-off algorithm.
Args:
task: VMware task object
max_backoff: Maximum amount of sleep time in seconds
timeout: Timeout for the given task in seconds
Returns: Tuple with True and result for successful task
Raises: TaskError on failure
"""
failure_counter = 0
start_time = time.time()
while True:
if time.time() - start_time >= timeout:
raise TaskError("Timeout")
if task.info.state == vim.TaskInfo.State.success:
return True, task.info.result
if task.info.state == vim.TaskInfo.State.error:
error_msg = task.info.error
host_thumbprint = None
try:
error_msg = error_msg.msg
if hasattr(task.info.error, 'thumbprint'):
host_thumbprint = task.info.error.thumbprint
except AttributeError:
pass
finally:
raise_from(TaskError(error_msg, host_thumbprint), task.info.error)
if task.info.state in [vim.TaskInfo.State.running, vim.TaskInfo.State.queued]:
sleep_time = min(2 ** failure_counter + randint(1, 1000) / 1000, max_backoff)
time.sleep(sleep_time)
failure_counter += 1
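The polling loop above sleeps `min(2 ** n + jitter, max_backoff)` seconds between checks, so retries back off exponentially up to a cap. A standalone sketch of that delay schedule (illustrative only; `backoff_delays` is a hypothetical helper, not part of this module):

```python
from random import randint

def backoff_delays(polls, max_backoff=64):
    """Reproduce the sleep schedule used by wait_for_task."""
    # Up to one second of jitter avoids synchronized polling across workers.
    return [min(2 ** n + randint(1, 1000) / 1000, max_backoff)
            for n in range(polls)]

delays = backoff_delays(10)
# Delays grow as 2**n but never exceed max_backoff seconds.
```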
def wait_for_vm_ip(content, vm, timeout=300):
facts = dict()
interval = 15
while timeout > 0:
_facts = gather_vm_facts(content, vm)
if _facts['ipv4'] or _facts['ipv6']:
facts = _facts
break
time.sleep(interval)
timeout -= interval
return facts
def find_obj(content, vimtype, name, first=True, folder=None):
container = content.viewManager.CreateContainerView(folder or content.rootFolder, recursive=True, type=vimtype)
# Get all objects matching type (and name if given)
obj_list = [obj for obj in container.view if not name or to_text(obj.name) == to_text(name)]
container.Destroy()
# Return first match or None
if first:
if obj_list:
return obj_list[0]
return None
# Return all matching objects or empty list
return obj_list
def find_dvspg_by_name(dv_switch, portgroup_name):
portgroups = dv_switch.portgroup
for pg in portgroups:
if pg.name == portgroup_name:
return pg
return None
def find_object_by_name(content, name, obj_type, folder=None, recurse=True):
if not isinstance(obj_type, list):
obj_type = [obj_type]
objects = get_all_objs(content, obj_type, folder=folder, recurse=recurse)
for obj in objects:
if obj.name == name:
return obj
return None
def find_cluster_by_name(content, cluster_name, datacenter=None):
if datacenter:
folder = datacenter.hostFolder
else:
folder = content.rootFolder
return find_object_by_name(content, cluster_name, [vim.ClusterComputeResource], folder=folder)
def find_datacenter_by_name(content, datacenter_name):
return find_object_by_name(content, datacenter_name, [vim.Datacenter])
def get_parent_datacenter(obj):
""" Walk the parent tree to find the objects datacenter """
if isinstance(obj, vim.Datacenter):
return obj
datacenter = None
while True:
if not hasattr(obj, 'parent'):
break
obj = obj.parent
if isinstance(obj, vim.Datacenter):
datacenter = obj
break
return datacenter
def find_datastore_by_name(content, datastore_name):
return find_object_by_name(content, datastore_name, [vim.Datastore])
def find_dvs_by_name(content, switch_name, folder=None):
return find_object_by_name(content, switch_name, [vim.DistributedVirtualSwitch], folder=folder)
def find_hostsystem_by_name(content, hostname):
return find_object_by_name(content, hostname, [vim.HostSystem])
def find_resource_pool_by_name(content, resource_pool_name):
return find_object_by_name(content, resource_pool_name, [vim.ResourcePool])
def find_network_by_name(content, network_name):
return find_object_by_name(content, network_name, [vim.Network])
def find_vm_by_id(content, vm_id, vm_id_type="vm_name", datacenter=None,
cluster=None, folder=None, match_first=False):
""" UUID is unique to a VM, every other id returns the first match. """
si = content.searchIndex
vm = None
if vm_id_type == 'dns_name':
vm = si.FindByDnsName(datacenter=datacenter, dnsName=vm_id, vmSearch=True)
elif vm_id_type == 'uuid':
        # Search by BIOS UUID rather than instance UUID
vm = si.FindByUuid(datacenter=datacenter, instanceUuid=False, uuid=vm_id, vmSearch=True)
elif vm_id_type == 'instance_uuid':
vm = si.FindByUuid(datacenter=datacenter, instanceUuid=True, uuid=vm_id, vmSearch=True)
elif vm_id_type == 'ip':
vm = si.FindByIp(datacenter=datacenter, ip=vm_id, vmSearch=True)
elif vm_id_type == 'vm_name':
folder = None
if cluster:
folder = cluster
elif datacenter:
folder = datacenter.hostFolder
vm = find_vm_by_name(content, vm_id, folder)
elif vm_id_type == 'inventory_path':
searchpath = folder
# get all objects for this path
f_obj = si.FindByInventoryPath(searchpath)
if f_obj:
if isinstance(f_obj, vim.Datacenter):
f_obj = f_obj.vmFolder
for c_obj in f_obj.childEntity:
if not isinstance(c_obj, vim.VirtualMachine):
continue
if c_obj.name == vm_id:
vm = c_obj
if match_first:
break
return vm
def find_vm_by_name(content, vm_name, folder=None, recurse=True):
return find_object_by_name(content, vm_name, [vim.VirtualMachine], folder=folder, recurse=recurse)
def find_host_portgroup_by_name(host, portgroup_name):
for portgroup in host.config.network.portgroup:
if portgroup.spec.name == portgroup_name:
return portgroup
return None
def compile_folder_path_for_object(vobj):
""" make a /vm/foo/bar/baz like folder path for an object """
paths = []
if isinstance(vobj, vim.Folder):
paths.append(vobj.name)
thisobj = vobj
while hasattr(thisobj, 'parent'):
thisobj = thisobj.parent
try:
moid = thisobj._moId
except AttributeError:
moid = None
if moid in ['group-d1', 'ha-folder-root']:
break
if isinstance(thisobj, vim.Folder):
paths.append(thisobj.name)
paths.reverse()
return '/' + '/'.join(paths)
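`compile_folder_path_for_object` walks the `.parent` chain upward, collecting folder names until it reaches the hidden root folder (`group-d1` / `ha-folder-root`), then reverses the list. A minimal sketch of the same walk using fake objects in place of `vim.Folder` (the `FakeFolder` class and all names here are hypothetical):

```python
class FakeFolder:
    """Stand-in for vim.Folder with just the attributes the walk needs."""
    def __init__(self, name, parent=None, moid=None):
        self.name = name
        self.parent = parent
        self._moId = moid

def folder_path(obj):
    paths = [obj.name]
    this = obj
    while getattr(this, 'parent', None) is not None:
        this = this.parent
        if getattr(this, '_moId', None) in ('group-d1', 'ha-folder-root'):
            break  # stop before the hidden root folder
        paths.append(this.name)
    paths.reverse()
    return '/' + '/'.join(paths)

root = FakeFolder('Datacenters', moid='group-d1')
vm_folder = FakeFolder('vm', parent=root)
team = FakeFolder('finance', parent=vm_folder)
path = folder_path(team)  # '/vm/finance'
```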
def _get_vm_prop(vm, attributes):
"""Safely get a property or return None"""
result = vm
for attribute in attributes:
try:
result = getattr(result, attribute)
except (AttributeError, IndexError):
return None
return result
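`_get_vm_prop` is the defensive way to read nested pyVmomi properties that may be unset. The same pattern in isolation, using `SimpleNamespace` as a hypothetical stand-in for a `vim.VirtualMachine`:

```python
from types import SimpleNamespace

def get_prop(obj, attributes):
    """Mirror of _get_vm_prop: walk an attribute chain, None on any miss."""
    result = obj
    for attribute in attributes:
        try:
            result = getattr(result, attribute)
        except (AttributeError, IndexError):
            return None
    return result

vm = SimpleNamespace(guest=SimpleNamespace(toolsVersion='11333'))
found = get_prop(vm, ('guest', 'toolsVersion'))  # '11333'
missing = get_prop(vm, ('guest', 'net'))         # None instead of raising
```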
def gather_vm_facts(content, vm):
""" Gather facts from vim.VirtualMachine object. """
facts = {
'module_hw': True,
'hw_name': vm.config.name,
'hw_power_status': vm.summary.runtime.powerState,
'hw_guest_full_name': vm.summary.guest.guestFullName,
'hw_guest_id': vm.summary.guest.guestId,
'hw_product_uuid': vm.config.uuid,
'hw_processor_count': vm.config.hardware.numCPU,
'hw_cores_per_socket': vm.config.hardware.numCoresPerSocket,
'hw_memtotal_mb': vm.config.hardware.memoryMB,
'hw_interfaces': [],
'hw_datastores': [],
'hw_files': [],
'hw_esxi_host': None,
'hw_guest_ha_state': None,
'hw_is_template': vm.config.template,
'hw_folder': None,
'hw_version': vm.config.version,
'instance_uuid': vm.config.instanceUuid,
'guest_tools_status': _get_vm_prop(vm, ('guest', 'toolsRunningStatus')),
'guest_tools_version': _get_vm_prop(vm, ('guest', 'toolsVersion')),
'guest_question': vm.summary.runtime.question,
'guest_consolidation_needed': vm.summary.runtime.consolidationNeeded,
'ipv4': None,
'ipv6': None,
'annotation': vm.config.annotation,
'customvalues': {},
'snapshots': [],
'current_snapshot': None,
'vnc': {},
'moid': vm._moId,
'vimref': "vim.VirtualMachine:%s" % vm._moId,
}
# facts that may or may not exist
if vm.summary.runtime.host:
try:
host = vm.summary.runtime.host
facts['hw_esxi_host'] = host.summary.config.name
facts['hw_cluster'] = host.parent.name if host.parent and isinstance(host.parent, vim.ClusterComputeResource) else None
except vim.fault.NoPermission:
            # The user does not have read permission for the host system;
            # proceed without this value. It is informational only and does
            # not affect provisioning or power management operations.
pass
if vm.summary.runtime.dasVmProtection:
facts['hw_guest_ha_state'] = vm.summary.runtime.dasVmProtection.dasProtected
datastores = vm.datastore
for ds in datastores:
facts['hw_datastores'].append(ds.info.name)
try:
files = vm.config.files
layout = vm.layout
if files:
facts['hw_files'] = [files.vmPathName]
for item in layout.snapshot:
for snap in item.snapshotFile:
if 'vmsn' in snap:
facts['hw_files'].append(snap)
for item in layout.configFile:
facts['hw_files'].append(os.path.join(os.path.dirname(files.vmPathName), item))
for item in vm.layout.logFile:
facts['hw_files'].append(os.path.join(files.logDirectory, item))
for item in vm.layout.disk:
for disk in item.diskFile:
facts['hw_files'].append(disk)
except Exception:
pass
facts['hw_folder'] = PyVmomi.get_vm_path(content, vm)
cfm = content.customFieldsManager
# Resolve custom values
for value_obj in vm.summary.customValue:
kn = value_obj.key
if cfm is not None and cfm.field:
for f in cfm.field:
if f.key == value_obj.key:
kn = f.name
# Exit the loop immediately, we found it
break
facts['customvalues'][kn] = value_obj.value
net_dict = {}
vmnet = _get_vm_prop(vm, ('guest', 'net'))
if vmnet:
for device in vmnet:
net_dict[device.macAddress] = list(device.ipAddress)
if vm.guest.ipAddress:
if ':' in vm.guest.ipAddress:
facts['ipv6'] = vm.guest.ipAddress
else:
facts['ipv4'] = vm.guest.ipAddress
ethernet_idx = 0
for entry in vm.config.hardware.device:
if not hasattr(entry, 'macAddress'):
continue
if entry.macAddress:
mac_addr = entry.macAddress
mac_addr_dash = mac_addr.replace(':', '-')
else:
mac_addr = mac_addr_dash = None
if (hasattr(entry, 'backing') and hasattr(entry.backing, 'port') and
hasattr(entry.backing.port, 'portKey') and hasattr(entry.backing.port, 'portgroupKey')):
port_group_key = entry.backing.port.portgroupKey
port_key = entry.backing.port.portKey
else:
port_group_key = None
port_key = None
factname = 'hw_eth' + str(ethernet_idx)
facts[factname] = {
'addresstype': entry.addressType,
'label': entry.deviceInfo.label,
'macaddress': mac_addr,
'ipaddresses': net_dict.get(entry.macAddress, None),
'macaddress_dash': mac_addr_dash,
'summary': entry.deviceInfo.summary,
'portgroup_portkey': port_key,
'portgroup_key': port_group_key,
}
facts['hw_interfaces'].append('eth' + str(ethernet_idx))
ethernet_idx += 1
snapshot_facts = list_snapshots(vm)
if 'snapshots' in snapshot_facts:
facts['snapshots'] = snapshot_facts['snapshots']
facts['current_snapshot'] = snapshot_facts['current_snapshot']
facts['vnc'] = get_vnc_extraconfig(vm)
return facts
def deserialize_snapshot_obj(obj):
return {'id': obj.id,
'name': obj.name,
'description': obj.description,
'creation_time': obj.createTime,
'state': obj.state}
def list_snapshots_recursively(snapshots):
snapshot_data = []
for snapshot in snapshots:
snapshot_data.append(deserialize_snapshot_obj(snapshot))
snapshot_data = snapshot_data + list_snapshots_recursively(snapshot.childSnapshotList)
return snapshot_data
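Snapshot trees are flattened depth-first: each node is emitted, then its `childSnapshotList` is flattened recursively. A runnable sketch of the same traversal with fake snapshot nodes (`FakeSnap` is hypothetical):

```python
class FakeSnap:
    """Stand-in for a snapshot tree node."""
    def __init__(self, snap_id, children=()):
        self.id = snap_id
        self.childSnapshotList = list(children)

def flatten_ids(snapshots):
    """Depth-first flattening, as in list_snapshots_recursively."""
    out = []
    for snap in snapshots:
        out.append(snap.id)
        out.extend(flatten_ids(snap.childSnapshotList))
    return out

tree = [FakeSnap(1, [FakeSnap(2, [FakeSnap(3)]), FakeSnap(4)])]
order = flatten_ids(tree)  # [1, 2, 3, 4]
```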
def get_current_snap_obj(snapshots, snapob):
snap_obj = []
for snapshot in snapshots:
if snapshot.snapshot == snapob:
snap_obj.append(snapshot)
snap_obj = snap_obj + get_current_snap_obj(snapshot.childSnapshotList, snapob)
return snap_obj
def list_snapshots(vm):
result = {}
snapshot = _get_vm_prop(vm, ('snapshot',))
if not snapshot:
return result
if vm.snapshot is None:
return result
result['snapshots'] = list_snapshots_recursively(vm.snapshot.rootSnapshotList)
current_snapref = vm.snapshot.currentSnapshot
current_snap_obj = get_current_snap_obj(vm.snapshot.rootSnapshotList, current_snapref)
if current_snap_obj:
result['current_snapshot'] = deserialize_snapshot_obj(current_snap_obj[0])
else:
result['current_snapshot'] = dict()
return result
def get_vnc_extraconfig(vm):
result = {}
for opts in vm.config.extraConfig:
for optkeyname in ['enabled', 'ip', 'port', 'password']:
if opts.key.lower() == "remotedisplay.vnc." + optkeyname:
result[optkeyname] = opts.value
return result
def vmware_argument_spec():
return dict(
hostname=dict(type='str',
required=False,
fallback=(env_fallback, ['VMWARE_HOST']),
),
username=dict(type='str',
aliases=['user', 'admin'],
required=False,
fallback=(env_fallback, ['VMWARE_USER'])),
password=dict(type='str',
aliases=['pass', 'pwd'],
required=False,
no_log=True,
fallback=(env_fallback, ['VMWARE_PASSWORD'])),
port=dict(type='int',
default=443,
fallback=(env_fallback, ['VMWARE_PORT'])),
validate_certs=dict(type='bool',
required=False,
default=True,
fallback=(env_fallback, ['VMWARE_VALIDATE_CERTS'])
),
proxy_host=dict(type='str',
required=False,
default=None,
fallback=(env_fallback, ['VMWARE_PROXY_HOST'])),
proxy_port=dict(type='int',
required=False,
default=None,
fallback=(env_fallback, ['VMWARE_PROXY_PORT'])),
)
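Each credential parameter above falls back to an environment variable through `env_fallback` (from `ansible.module_utils.basic`): an explicit task value wins, otherwise the first set environment variable is used. A simplified model of that resolution order (`resolve` is a hypothetical helper, not Ansible's implementation):

```python
import os

def resolve(param_value, env_vars):
    """Explicit value wins; otherwise the first set env var; else None."""
    if param_value is not None:
        return param_value
    for name in env_vars:
        if name in os.environ:
            return os.environ[name]
    return None

os.environ['VMWARE_HOST'] = 'vcenter.example.com'  # hypothetical value
host = resolve(None, ['VMWARE_HOST'])  # falls back to the env var
```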
def connect_to_api(module, disconnect_atexit=True, return_si=False):
hostname = module.params['hostname']
username = module.params['username']
password = module.params['password']
port = module.params.get('port', 443)
validate_certs = module.params['validate_certs']
    if not hostname:
        module.fail_json(msg="Hostname parameter is missing."
                             " Please specify this parameter in the task or"
                             " export an environment variable, e.g. 'export VMWARE_HOST=ESXI_HOSTNAME'")
    if not username:
        module.fail_json(msg="Username parameter is missing."
                             " Please specify this parameter in the task or"
                             " export an environment variable, e.g. 'export VMWARE_USER=ESXI_USERNAME'")
    if not password:
        module.fail_json(msg="Password parameter is missing."
                             " Please specify this parameter in the task or"
                             " export an environment variable, e.g. 'export VMWARE_PASSWORD=ESXI_PASSWORD'")
if validate_certs and not hasattr(ssl, 'SSLContext'):
module.fail_json(msg='pyVim does not support changing verification mode with python < 2.7.9. Either update '
'python or use validate_certs=false.')
ssl_context = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
if validate_certs:
ssl_context.verify_mode = ssl.CERT_REQUIRED
ssl_context.check_hostname = True
ssl_context.load_default_certs()
service_instance = None
proxy_host = module.params.get('proxy_host')
proxy_port = module.params.get('proxy_port')
connect_args = dict(
host=hostname,
port=port,
)
if ssl_context:
connect_args.update(sslContext=ssl_context)
msg_suffix = ''
try:
if proxy_host:
msg_suffix = " [proxy: %s:%d]" % (proxy_host, proxy_port)
connect_args.update(httpProxyHost=proxy_host, httpProxyPort=proxy_port)
smart_stub = connect.SmartStubAdapter(**connect_args)
session_stub = connect.VimSessionOrientedStub(smart_stub, connect.VimSessionOrientedStub.makeUserLoginMethod(username, password))
service_instance = vim.ServiceInstance('ServiceInstance', session_stub)
else:
connect_args.update(user=username, pwd=password)
service_instance = connect.SmartConnect(**connect_args)
except vim.fault.InvalidLogin as invalid_login:
msg = "Unable to log on to vCenter or ESXi API at %s:%s " % (hostname, port)
module.fail_json(msg="%s as %s: %s" % (msg, username, invalid_login.msg) + msg_suffix)
except vim.fault.NoPermission as no_permission:
module.fail_json(msg="User %s does not have required permission"
" to log on to vCenter or ESXi API at %s:%s : %s" % (username, hostname, port, no_permission.msg))
except (requests.ConnectionError, ssl.SSLError) as generic_req_exc:
module.fail_json(msg="Unable to connect to vCenter or ESXi API at %s on TCP/%s: %s" % (hostname, port, generic_req_exc))
except vmodl.fault.InvalidRequest as invalid_request:
# Request is malformed
msg = "Failed to get a response from server %s:%s " % (hostname, port)
module.fail_json(msg="%s as request is malformed: %s" % (msg, invalid_request.msg) + msg_suffix)
except Exception as generic_exc:
msg = "Unknown error while connecting to vCenter or ESXi API at %s:%s" % (hostname, port) + msg_suffix
module.fail_json(msg="%s : %s" % (msg, generic_exc))
if service_instance is None:
msg = "Unknown error while connecting to vCenter or ESXi API at %s:%s" % (hostname, port)
module.fail_json(msg=msg + msg_suffix)
    # Disabling atexit should be used in special cases only, such as an IP
    # change of the ESXi host, which removes the connection anyway.
    # Skipping the disconnect also significantly speeds up module return.
if disconnect_atexit:
atexit.register(connect.Disconnect, service_instance)
if return_si:
return service_instance, service_instance.RetrieveContent()
return service_instance.RetrieveContent()
def get_all_objs(content, vimtype, folder=None, recurse=True):
if not folder:
folder = content.rootFolder
obj = {}
container = content.viewManager.CreateContainerView(folder, vimtype, recurse)
for managed_object_ref in container.view:
obj.update({managed_object_ref: managed_object_ref.name})
return obj
def run_command_in_guest(content, vm, username, password, program_path, program_args, program_cwd, program_env):
result = {'failed': False}
tools_status = vm.guest.toolsStatus
if (tools_status == 'toolsNotInstalled' or
tools_status == 'toolsNotRunning'):
result['failed'] = True
result['msg'] = "VMwareTools is not installed or is not running in the guest"
return result
# https://github.com/vmware/pyvmomi/blob/master/docs/vim/vm/guest/NamePasswordAuthentication.rst
creds = vim.vm.guest.NamePasswordAuthentication(
username=username, password=password
)
try:
# https://github.com/vmware/pyvmomi/blob/master/docs/vim/vm/guest/ProcessManager.rst
pm = content.guestOperationsManager.processManager
# https://www.vmware.com/support/developer/converter-sdk/conv51_apireference/vim.vm.guest.ProcessManager.ProgramSpec.html
ps = vim.vm.guest.ProcessManager.ProgramSpec(
# programPath=program,
# arguments=args
programPath=program_path,
arguments=program_args,
workingDirectory=program_cwd,
)
res = pm.StartProgramInGuest(vm, creds, ps)
result['pid'] = res
pdata = pm.ListProcessesInGuest(vm, creds, [res])
# wait for pid to finish
while not pdata[0].endTime:
time.sleep(1)
pdata = pm.ListProcessesInGuest(vm, creds, [res])
result['owner'] = pdata[0].owner
result['startTime'] = pdata[0].startTime.isoformat()
result['endTime'] = pdata[0].endTime.isoformat()
result['exitCode'] = pdata[0].exitCode
if result['exitCode'] != 0:
result['failed'] = True
result['msg'] = "program exited non-zero"
else:
result['msg'] = "program completed successfully"
except Exception as e:
result['msg'] = str(e)
result['failed'] = True
return result
def serialize_spec(clonespec):
"""Serialize a clonespec or a relocation spec"""
data = {}
attrs = dir(clonespec)
attrs = [x for x in attrs if not x.startswith('_')]
for x in attrs:
xo = getattr(clonespec, x)
if callable(xo):
continue
xt = type(xo)
if xo is None:
data[x] = None
elif isinstance(xo, vim.vm.ConfigSpec):
data[x] = serialize_spec(xo)
elif isinstance(xo, vim.vm.RelocateSpec):
data[x] = serialize_spec(xo)
elif isinstance(xo, vim.vm.device.VirtualDisk):
data[x] = serialize_spec(xo)
elif isinstance(xo, vim.vm.device.VirtualDeviceSpec.FileOperation):
data[x] = to_text(xo)
elif isinstance(xo, vim.Description):
data[x] = {
'dynamicProperty': serialize_spec(xo.dynamicProperty),
'dynamicType': serialize_spec(xo.dynamicType),
'label': serialize_spec(xo.label),
'summary': serialize_spec(xo.summary),
}
elif hasattr(xo, 'name'):
data[x] = to_text(xo) + ':' + to_text(xo.name)
elif isinstance(xo, vim.vm.ProfileSpec):
pass
elif issubclass(xt, list):
data[x] = []
for xe in xo:
data[x].append(serialize_spec(xe))
        elif issubclass(xt, string_types + integer_types + (float, bool)):
            # Check bool before integer_types: bool is a subclass of int,
            # so the integer branch would otherwise swallow it.
            if issubclass(xt, bool):
                data[x] = xo
            elif issubclass(xt, integer_types):
                data[x] = int(xo)
            else:
                data[x] = to_text(xo)
elif issubclass(xt, dict):
data[to_text(x)] = {}
for k, v in xo.items():
k = to_text(k)
data[x][k] = serialize_spec(v)
else:
data[x] = str(xt)
return data
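`serialize_spec` turns a pyVmomi spec object into plain Python data by walking its public, non-callable attributes and recursing into nested specs. A toy version of the same idea (`Spec` is a hypothetical stand-in for something like `vim.vm.CloneSpec`):

```python
class Spec:
    """Bag of attributes standing in for a pyVmomi spec object."""
    def __init__(self, **attrs):
        self.__dict__.update(attrs)

def serialize(obj):
    data = {}
    for name in dir(obj):
        if name.startswith('_'):
            continue
        value = getattr(obj, name)
        if callable(value):
            continue
        # Recurse into nested specs, copy plain values as-is.
        data[name] = serialize(value) if isinstance(value, Spec) else value
    return data

spec = Spec(name='clone-1', location=Spec(datastore='ds1'), powerOn=True)
out = serialize(spec)
```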
def find_host_by_cluster_datacenter(module, content, datacenter_name, cluster_name, host_name):
dc = find_datacenter_by_name(content, datacenter_name)
if dc is None:
module.fail_json(msg="Unable to find datacenter with name %s" % datacenter_name)
cluster = find_cluster_by_name(content, cluster_name, datacenter=dc)
if cluster is None:
module.fail_json(msg="Unable to find cluster with name %s" % cluster_name)
for host in cluster.host:
if host.name == host_name:
return host, cluster
return None, cluster
def set_vm_power_state(content, vm, state, force, timeout=0):
"""
Set the power status for a VM determined by the current and
requested states. force is forceful
"""
facts = gather_vm_facts(content, vm)
expected_state = state.replace('_', '').replace('-', '').lower()
current_state = facts['hw_power_status'].lower()
result = dict(
changed=False,
failed=False,
)
# Need Force
if not force and current_state not in ['poweredon', 'poweredoff']:
result['failed'] = True
result['msg'] = "Virtual Machine is in %s power state. Force is required!" % current_state
return result
# State is not already true
if current_state != expected_state:
task = None
try:
if expected_state == 'poweredoff':
task = vm.PowerOff()
elif expected_state == 'poweredon':
task = vm.PowerOn()
elif expected_state == 'restarted':
if current_state in ('poweredon', 'poweringon', 'resetting', 'poweredoff'):
task = vm.Reset()
else:
result['failed'] = True
result['msg'] = "Cannot restart virtual machine in the current state %s" % current_state
elif expected_state == 'suspended':
if current_state in ('poweredon', 'poweringon'):
task = vm.Suspend()
else:
result['failed'] = True
result['msg'] = 'Cannot suspend virtual machine in the current state %s' % current_state
elif expected_state in ['shutdownguest', 'rebootguest']:
if current_state == 'poweredon':
if vm.guest.toolsRunningStatus == 'guestToolsRunning':
if expected_state == 'shutdownguest':
task = vm.ShutdownGuest()
if timeout > 0:
result.update(wait_for_poweroff(vm, timeout))
else:
task = vm.RebootGuest()
# Set result['changed'] immediately because
# shutdown and reboot return None.
result['changed'] = True
else:
result['failed'] = True
result['msg'] = "VMware tools should be installed for guest shutdown/reboot"
else:
result['failed'] = True
result['msg'] = "Virtual machine %s must be in poweredon state for guest shutdown/reboot" % vm.name
else:
result['failed'] = True
result['msg'] = "Unsupported expected state provided: %s" % expected_state
except Exception as e:
result['failed'] = True
result['msg'] = to_text(e)
if task:
wait_for_task(task)
if task.info.state == 'error':
result['failed'] = True
result['msg'] = task.info.error.msg
else:
result['changed'] = True
# need to get new metadata if changed
result['instance'] = gather_vm_facts(content, vm)
return result
def wait_for_poweroff(vm, timeout=300):
result = dict()
interval = 15
while timeout > 0:
if vm.runtime.powerState.lower() == 'poweredoff':
break
time.sleep(interval)
timeout -= interval
else:
result['failed'] = True
result['msg'] = 'Timeout while waiting for VM power off.'
return result
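`wait_for_poweroff` relies on Python's `while`/`else`: the `else` block runs only when the loop exhausts its timeout without hitting `break`. The same pattern in isolation, with the sleep removed and poll results supplied as data (`wait_until` is a hypothetical helper):

```python
def wait_until(poll_results, timeout=60, interval=15):
    """Return {} on success, or a failure dict when the timeout elapses."""
    result = {}
    polls = iter(poll_results)
    while timeout > 0:
        if next(polls, False):
            break  # condition met; the else block is skipped
        timeout -= interval
    else:
        result['failed'] = True
        result['msg'] = 'Timeout while waiting.'
    return result

ok = wait_until([False, True])        # condition met on the second poll
timed_out = wait_until([False] * 10)  # timeout elapses after four polls
```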
class PyVmomi(object):
def __init__(self, module):
"""
Constructor
"""
if not HAS_REQUESTS:
module.fail_json(msg=missing_required_lib('requests'),
exception=REQUESTS_IMP_ERR)
if not HAS_PYVMOMI:
module.fail_json(msg=missing_required_lib('PyVmomi'),
exception=PYVMOMI_IMP_ERR)
self.module = module
self.params = module.params
self.current_vm_obj = None
self.si, self.content = connect_to_api(self.module, return_si=True)
self.custom_field_mgr = []
if self.content.customFieldsManager: # not an ESXi
self.custom_field_mgr = self.content.customFieldsManager.field
def is_vcenter(self):
"""
Check if given hostname is vCenter or ESXi host
Returns: True if given connection is with vCenter server
False if given connection is with ESXi server
"""
api_type = None
try:
api_type = self.content.about.apiType
except (vmodl.RuntimeFault, vim.fault.VimFault) as exc:
self.module.fail_json(msg="Failed to get status of vCenter server : %s" % exc.msg)
if api_type == 'VirtualCenter':
return True
elif api_type == 'HostAgent':
return False
def get_managed_objects_properties(self, vim_type, properties=None):
"""
Look up a Managed Object Reference in vCenter / ESXi Environment
:param vim_type: Type of vim object e.g, for datacenter - vim.Datacenter
:param properties: List of properties related to vim object e.g. Name
:return: local content object
"""
# Get Root Folder
root_folder = self.content.rootFolder
if properties is None:
properties = ['name']
# Create Container View with default root folder
mor = self.content.viewManager.CreateContainerView(root_folder, [vim_type], True)
# Create Traversal spec
traversal_spec = vmodl.query.PropertyCollector.TraversalSpec(
name="traversal_spec",
path='view',
skip=False,
type=vim.view.ContainerView
)
# Create Property Spec
property_spec = vmodl.query.PropertyCollector.PropertySpec(
            type=vim_type,  # Type of object to retrieve
all=False,
pathSet=properties
)
# Create Object Spec
object_spec = vmodl.query.PropertyCollector.ObjectSpec(
obj=mor,
skip=True,
selectSet=[traversal_spec]
)
# Create Filter Spec
filter_spec = vmodl.query.PropertyCollector.FilterSpec(
objectSet=[object_spec],
propSet=[property_spec],
reportMissingObjectsInResults=False
)
return self.content.propertyCollector.RetrieveContents([filter_spec])
# Virtual Machine related functions
def get_vm(self):
"""
Find unique virtual machine either by UUID, MoID or Name.
Returns: virtual machine object if found, else None.
"""
vm_obj = None
user_desired_path = None
use_instance_uuid = self.params.get('use_instance_uuid') or False
        if self.params.get('uuid') and not use_instance_uuid:
            vm_obj = find_vm_by_id(self.content, vm_id=self.params['uuid'], vm_id_type="uuid")
        elif self.params.get('uuid') and use_instance_uuid:
            vm_obj = find_vm_by_id(self.content,
                                   vm_id=self.params['uuid'],
                                   vm_id_type="instance_uuid")
        elif self.params.get('name'):
objects = self.get_managed_objects_properties(vim_type=vim.VirtualMachine, properties=['name'])
vms = []
for temp_vm_object in objects:
if len(temp_vm_object.propSet) != 1:
continue
for temp_vm_object_property in temp_vm_object.propSet:
if temp_vm_object_property.val == self.params['name']:
vms.append(temp_vm_object.obj)
break
            # get_managed_objects_properties may return multiple virtual machines;
            # the following code tries to find the user-desired one depending upon the folder specified.
if len(vms) > 1:
# We have found multiple virtual machines, decide depending upon folder value
if self.params['folder'] is None:
self.module.fail_json(msg="Multiple virtual machines with same name [%s] found, "
"Folder value is a required parameter to find uniqueness "
"of the virtual machine" % self.params['name'],
details="Please see documentation of the vmware_guest module "
"for folder parameter.")
# Get folder path where virtual machine is located
# User provided folder where user thinks virtual machine is present
user_folder = self.params['folder']
# User defined datacenter
user_defined_dc = self.params['datacenter']
# User defined datacenter's object
datacenter_obj = find_datacenter_by_name(self.content, self.params['datacenter'])
# Get Path for Datacenter
dcpath = compile_folder_path_for_object(vobj=datacenter_obj)
# Nested folder does not return trailing /
if not dcpath.endswith('/'):
dcpath += '/'
if user_folder in [None, '', '/']:
# User provided blank value or
# User provided only root value, we fail
self.module.fail_json(msg="vmware_guest found multiple virtual machines with same "
"name [%s], please specify folder path other than blank "
"or '/'" % self.params['name'])
elif user_folder.startswith('/vm/'):
# User provided nested folder under VMware default vm folder i.e. folder = /vm/india/finance
user_desired_path = "%s%s%s" % (dcpath, user_defined_dc, user_folder)
else:
# User defined datacenter is not nested i.e. dcpath = '/' , or
# User defined datacenter is nested i.e. dcpath = '/F0/DC0' or
# User provided folder starts with / and datacenter i.e. folder = /ha-datacenter/ or
# User defined folder starts with datacenter without '/' i.e.
# folder = DC0/vm/india/finance or
# folder = DC0/vm
user_desired_path = user_folder
for vm in vms:
# Check if user has provided same path as virtual machine
actual_vm_folder_path = self.get_vm_path(content=self.content, vm_name=vm)
if not actual_vm_folder_path.startswith("%s%s" % (dcpath, user_defined_dc)):
continue
if user_desired_path in actual_vm_folder_path:
vm_obj = vm
break
elif vms:
# Unique virtual machine found.
vm_obj = vms[0]
        elif self.params.get('moid'):
vm_obj = VmomiSupport.templateOf('VirtualMachine')(self.params['moid'], self.si._stub)
if vm_obj:
self.current_vm_obj = vm_obj
return vm_obj
def gather_facts(self, vm):
"""
Gather facts of virtual machine.
Args:
vm: Name of virtual machine.
Returns: Facts dictionary of the given virtual machine.
"""
return gather_vm_facts(self.content, vm)
@staticmethod
def get_vm_path(content, vm_name):
"""
Find the path of virtual machine.
Args:
content: VMware content object
vm_name: virtual machine managed object
Returns: Folder of virtual machine if exists, else None
"""
folder_name = None
folder = vm_name.parent
if folder:
folder_name = folder.name
fp = folder.parent
# climb back up the tree to find our path, stop before the root folder
while fp is not None and fp.name is not None and fp != content.rootFolder:
folder_name = fp.name + '/' + folder_name
try:
fp = fp.parent
except Exception:
break
folder_name = '/' + folder_name
return folder_name
def get_vm_or_template(self, template_name=None):
"""
Find the virtual machine or virtual machine template using name
used for cloning purpose.
Args:
template_name: Name of virtual machine or virtual machine template
Returns: virtual machine or virtual machine template object
"""
template_obj = None
if not template_name:
return template_obj
if "/" in template_name:
vm_obj_path = os.path.dirname(template_name)
vm_obj_name = os.path.basename(template_name)
template_obj = find_vm_by_id(self.content, vm_obj_name, vm_id_type="inventory_path", folder=vm_obj_path)
if template_obj:
return template_obj
else:
template_obj = find_vm_by_id(self.content, vm_id=template_name, vm_id_type="uuid")
if template_obj:
return template_obj
objects = self.get_managed_objects_properties(vim_type=vim.VirtualMachine, properties=['name'])
templates = []
for temp_vm_object in objects:
if len(temp_vm_object.propSet) != 1:
continue
for temp_vm_object_property in temp_vm_object.propSet:
if temp_vm_object_property.val == template_name:
templates.append(temp_vm_object.obj)
break
if len(templates) > 1:
# We have found multiple virtual machine templates
self.module.fail_json(msg="Multiple virtual machines or templates with same name [%s] found." % template_name)
elif templates:
template_obj = templates[0]
return template_obj
# Cluster related functions
def find_cluster_by_name(self, cluster_name, datacenter_name=None):
"""
Find Cluster by name in given datacenter
Args:
cluster_name: Name of cluster name to find
datacenter_name: (optional) Name of datacenter
        Returns: Cluster managed object if found, else None
"""
return find_cluster_by_name(self.content, cluster_name, datacenter=datacenter_name)
def get_all_hosts_by_cluster(self, cluster_name):
"""
Get all hosts from cluster by cluster name
Args:
cluster_name: Name of cluster
Returns: List of hosts
"""
cluster_obj = self.find_cluster_by_name(cluster_name=cluster_name)
if cluster_obj:
return [host for host in cluster_obj.host]
else:
return []
# Hosts related functions
def find_hostsystem_by_name(self, host_name):
"""
Find Host by name
Args:
host_name: Name of ESXi host
        Returns: Host system managed object if found, else None
"""
return find_hostsystem_by_name(self.content, hostname=host_name)
def get_all_host_objs(self, cluster_name=None, esxi_host_name=None):
"""
Get all host system managed object
Args:
cluster_name: Name of Cluster
esxi_host_name: Name of ESXi server
Returns: A list of all host system managed objects, else empty list
"""
host_obj_list = []
if not self.is_vcenter():
hosts = get_all_objs(self.content, [vim.HostSystem]).keys()
if hosts:
host_obj_list.append(list(hosts)[0])
else:
if cluster_name:
cluster_obj = self.find_cluster_by_name(cluster_name=cluster_name)
if cluster_obj:
host_obj_list = [host for host in cluster_obj.host]
else:
self.module.fail_json(changed=False, msg="Cluster '%s' not found" % cluster_name)
elif esxi_host_name:
if isinstance(esxi_host_name, str):
esxi_host_name = [esxi_host_name]
for host in esxi_host_name:
esxi_host_obj = self.find_hostsystem_by_name(host_name=host)
if esxi_host_obj:
host_obj_list.append(esxi_host_obj)
else:
self.module.fail_json(changed=False, msg="ESXi '%s' not found" % host)
return host_obj_list
def host_version_at_least(self, version=None, vm_obj=None, host_name=None):
"""
Check that the ESXi Host is at least a specific version number
Args:
vm_obj: virtual machine object; one of vm_obj or host_name is required
host_name (string): ESXi host name
version (tuple): a version tuple, for example (6, 7, 0)
Returns: bool
"""
if vm_obj:
host_system = vm_obj.summary.runtime.host
elif host_name:
host_system = self.find_hostsystem_by_name(host_name=host_name)
else:
self.module.fail_json(msg='One of vm_obj or host_name must be set.')
if host_system and version:
host_version = host_system.summary.config.product.version
return StrictVersion(host_version) >= StrictVersion('.'.join(map(str, version)))
else:
self.module.fail_json(msg='Unable to get the ESXi host from vm: %s, or hostname %s, '
'or the passed ESXi version: %s is None.' % (vm_obj, host_name, version))
# Network related functions
@staticmethod
def find_host_portgroup_by_name(host, portgroup_name):
"""
Find Portgroup on given host
Args:
host: Host config object
portgroup_name: Name of portgroup
Returns: Portgroup object if found, else False
"""
for portgroup in host.config.network.portgroup:
if portgroup.spec.name == portgroup_name:
return portgroup
return False
def get_all_port_groups_by_host(self, host_system):
"""
Get all Port Group by host
Args:
host_system: Name of Host System
Returns: List of Port Group Spec
"""
pgs_list = []
for pg in host_system.config.network.portgroup:
pgs_list.append(pg)
return pgs_list
def find_network_by_name(self, network_name=None):
"""
Get network specified by name
Args:
network_name: Name of network
Returns: List of network managed objects
"""
networks = []
if not network_name:
return networks
objects = self.get_managed_objects_properties(vim_type=vim.Network, properties=['name'])
for temp_vm_object in objects:
if len(temp_vm_object.propSet) != 1:
continue
for temp_vm_object_property in temp_vm_object.propSet:
if temp_vm_object_property.val == network_name:
networks.append(temp_vm_object.obj)
break
return networks
def network_exists_by_name(self, network_name=None):
"""
Check if network with a specified name exists or not
Args:
network_name: Name of network
Returns: True if network exists else False
"""
ret = False
if not network_name:
return ret
ret = True if self.find_network_by_name(network_name=network_name) else False
return ret
# Datacenter
def find_datacenter_by_name(self, datacenter_name):
"""
Get datacenter managed object by name
Args:
datacenter_name: Name of datacenter
Returns: datacenter managed object if found else None
"""
return find_datacenter_by_name(self.content, datacenter_name=datacenter_name)
def is_datastore_valid(self, datastore_obj=None):
"""
Check if datastore selected is valid or not
Args:
datastore_obj: datastore managed object
Returns: True if datastore is valid, False if not
"""
if not datastore_obj \
or datastore_obj.summary.maintenanceMode != 'normal' \
or not datastore_obj.summary.accessible:
return False
return True
def find_datastore_by_name(self, datastore_name):
"""
Get datastore managed object by name
Args:
datastore_name: Name of datastore
Returns: datastore managed object if found else None
"""
return find_datastore_by_name(self.content, datastore_name=datastore_name)
# Datastore cluster
def find_datastore_cluster_by_name(self, datastore_cluster_name):
"""
Get datastore cluster managed object by name
Args:
datastore_cluster_name: Name of datastore cluster
Returns: Datastore cluster managed object if found else None
"""
data_store_clusters = get_all_objs(self.content, [vim.StoragePod])
for dsc in data_store_clusters:
if dsc.name == datastore_cluster_name:
return dsc
return None
# Resource pool
def find_resource_pool_by_name(self, resource_pool_name, folder=None):
"""
Get resource pool managed object by name
Args:
resource_pool_name: Name of resource pool
Returns: Resource pool managed object if found else None
"""
if not folder:
folder = self.content.rootFolder
resource_pools = get_all_objs(self.content, [vim.ResourcePool], folder=folder)
for rp in resource_pools:
if rp.name == resource_pool_name:
return rp
return None
def find_resource_pool_by_cluster(self, resource_pool_name='Resources', cluster=None):
"""
Get resource pool managed object by cluster object
Args:
resource_pool_name: Name of resource pool
cluster: Managed object of cluster
Returns: Resource pool managed object if found else None
"""
desired_rp = None
if not cluster:
return desired_rp
if resource_pool_name != 'Resources':
# Resource pool name is different than default 'Resources'
resource_pools = cluster.resourcePool.resourcePool
if resource_pools:
for rp in resource_pools:
if rp.name == resource_pool_name:
desired_rp = rp
break
else:
desired_rp = cluster.resourcePool
return desired_rp
# VMDK stuff
def vmdk_disk_path_split(self, vmdk_path):
"""
Takes a string in the format
[datastore_name] path/to/vm_name.vmdk
Returns a tuple with multiple strings:
1. datastore_name: The name of the datastore (without brackets)
2. vmdk_fullpath: The "path/to/vm_name.vmdk" portion
3. vmdk_filename: The "vm_name.vmdk" portion of the string (os.path.basename equivalent)
4. vmdk_folder: The "path/to/" portion of the string (os.path.dirname equivalent)
"""
try:
datastore_name = re.match(r'^\[(.*?)\]', vmdk_path, re.DOTALL).groups()[0]
vmdk_fullpath = re.match(r'\[.*?\] (.*)$', vmdk_path).groups()[0]
vmdk_filename = os.path.basename(vmdk_fullpath)
vmdk_folder = os.path.dirname(vmdk_fullpath)
return datastore_name, vmdk_fullpath, vmdk_filename, vmdk_folder
except (IndexError, AttributeError) as e:
self.module.fail_json(msg="Bad path '%s' for filename disk vmdk image: %s" % (vmdk_path, to_native(e)))
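The two regexes above can be exercised in isolation; this is a minimal standalone sketch of the same split, with a made-up datastore path for illustration:

```python
import os
import re

def vmdk_disk_path_split(vmdk_path):
    # "[datastore] path/to/vm.vmdk" -> (datastore, fullpath, filename, folder)
    datastore_name = re.match(r'^\[(.*?)\]', vmdk_path, re.DOTALL).groups()[0]
    vmdk_fullpath = re.match(r'\[.*?\] (.*)$', vmdk_path).groups()[0]
    return (datastore_name, vmdk_fullpath,
            os.path.basename(vmdk_fullpath), os.path.dirname(vmdk_fullpath))

print(vmdk_disk_path_split('[ds1] folder/vm/vm.vmdk'))
# -> ('ds1', 'folder/vm/vm.vmdk', 'vm.vmdk', 'folder/vm')
```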
def find_vmdk_file(self, datastore_obj, vmdk_fullpath, vmdk_filename, vmdk_folder):
"""
Return vSphere file object or fail_json
Args:
datastore_obj: Managed object of datastore
vmdk_fullpath: Path of VMDK file e.g., path/to/vm/vmdk_filename.vmdk
vmdk_filename: Name of vmdk e.g., VM0001_1.vmdk
vmdk_folder: Base dir of VMDK, e.g. path/to/vm
"""
browser = datastore_obj.browser
datastore_name = datastore_obj.name
datastore_name_sq = "[" + datastore_name + "]"
if browser is None:
self.module.fail_json(msg="Unable to access browser for datastore %s" % datastore_name)
detail_query = vim.host.DatastoreBrowser.FileInfo.Details(
fileOwner=True,
fileSize=True,
fileType=True,
modification=True
)
search_spec = vim.host.DatastoreBrowser.SearchSpec(
details=detail_query,
matchPattern=[vmdk_filename],
searchCaseInsensitive=True,
)
search_res = browser.SearchSubFolders(
datastorePath=datastore_name_sq,
searchSpec=search_spec
)
changed = False
vmdk_path = datastore_name_sq + " " + vmdk_fullpath
try:
changed, result = wait_for_task(search_res)
except TaskError as task_e:
self.module.fail_json(msg=to_native(task_e))
if not changed:
self.module.fail_json(msg="No valid disk vmdk image found for path %s" % vmdk_path)
target_folder_paths = [
datastore_name_sq + " " + vmdk_folder + '/',
datastore_name_sq + " " + vmdk_folder,
]
for file_result in search_res.info.result:
for f in getattr(file_result, 'file'):
if f.path == vmdk_filename and file_result.folderPath in target_folder_paths:
return f
self.module.fail_json(msg="No vmdk file found for path specified [%s]" % vmdk_path)
#
# Conversion to JSON
#
def _deepmerge(self, d, u):
"""
Deep merges u into d.
Credit:
https://bit.ly/2EDOs1B (stackoverflow question 3232943)
License:
cc-by-sa 3.0 (https://creativecommons.org/licenses/by-sa/3.0/)
Changes:
using collections_compat for compatibility
Args:
- d (dict): dict to merge into
- u (dict): dict to merge into d
Returns:
dict, with u merged into d
"""
for k, v in iteritems(u):
if isinstance(v, collections_compat.Mapping):
d[k] = self._deepmerge(d.get(k, {}), v)
else:
d[k] = v
return d
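A standalone sketch of the same merge behavior (the method above uses collections_compat for Python 2/3 compatibility; this sketch assumes Python 3):

```python
from collections.abc import Mapping

def deepmerge(d, u):
    # Recursively merge u into d; nested mappings are merged, scalars overwrite.
    for k, v in u.items():
        if isinstance(v, Mapping):
            d[k] = deepmerge(d.get(k, {}), v)
        else:
            d[k] = v
    return d

base = {'a': {'b': 1}, 'x': 1}
deepmerge(base, {'a': {'c': 2}, 'x': 9})
print(base)
# -> {'a': {'b': 1, 'c': 2}, 'x': 9}
```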
def _extract(self, data, remainder):
"""
This is used to break down dotted properties for extraction.
Args:
- data (dict): result of _jsonify on a property
- remainder: the remainder of the dotted property to select
Return:
dict
"""
result = dict()
if '.' not in remainder:
result[remainder] = data[remainder]
return result
key, remainder = remainder.split('.', 1)
result[key] = self._extract(data[key], remainder)
return result
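For example, given a dict like the output of _jsonify, a dotted property keeps only the selected branch; a standalone sketch (the sample data is made up):

```python
def extract(data, remainder):
    # Walk a dotted property path, keeping only the selected branch.
    result = {}
    if '.' not in remainder:
        result[remainder] = data[remainder]
        return result
    key, rest = remainder.split('.', 1)
    result[key] = extract(data[key], rest)
    return result

data = {'config': {'hardware': {'memoryMB': 1024, 'numCPU': 2}}, 'name': 'vm1'}
print(extract(data, 'config.hardware.memoryMB'))
# -> {'config': {'hardware': {'memoryMB': 1024}}}
```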
def _jsonify(self, obj):
"""
Convert an object from pyVmomi into JSON.
Args:
- obj (object): vim object
Return:
dict
"""
return json.loads(json.dumps(obj, cls=VmomiSupport.VmomiJSONEncoder,
sort_keys=True, strip_dynamic=True))
def to_json(self, obj, properties=None):
"""
Convert a vSphere (pyVmomi) Object into JSON. This is a deep
transformation. The list of properties is optional - if not
provided then all properties are deeply converted. The resulting
JSON is sorted to improve human readability.
Requires upstream support from pyVmomi > 6.7.1
(https://github.com/vmware/pyvmomi/pull/732)
Args:
- obj (object): vim object
- properties (list, optional): list of properties following
the property collector specification, for example:
["config.hardware.memoryMB", "name", "overallStatus"]
default is a complete object dump, which can be large
Return:
dict
"""
if not HAS_PYVMOMIJSON:
self.module.fail_json(msg='The installed version of pyvmomi lacks JSON output support; need pyvmomi>6.7.1')
result = dict()
if properties:
for prop in properties:
try:
if '.' in prop:
key, remainder = prop.split('.', 1)
tmp = dict()
tmp[key] = self._extract(self._jsonify(getattr(obj, key)), remainder)
self._deepmerge(result, tmp)
else:
result[prop] = self._jsonify(getattr(obj, prop))
# To match gather_vm_facts output
prop_name = prop
if prop.lower() == '_moid':
prop_name = 'moid'
elif prop.lower() == '_vimref':
prop_name = 'vimref'
result[prop_name] = result[prop]
except (AttributeError, KeyError):
self.module.fail_json(msg="Property '{0}' not found.".format(prop))
else:
result = self._jsonify(obj)
return result
def get_folder_path(self, cur):
full_path = '/' + cur.name
while hasattr(cur, 'parent') and cur.parent:
if cur.parent == self.content.rootFolder:
break
cur = cur.parent
full_path = '/' + cur.name + full_path
return full_path
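The parent-walk above is easy to check with stand-in objects; the class below only mimics the name/parent attributes of vSphere managed objects, and the folder names are illustrative:

```python
class Node:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent

root = Node('Datacenters')      # stands in for content.rootFolder
dc = Node('DC1', root)
vm_folder = Node('vm', dc)
leaf = Node('web', vm_folder)

def get_folder_path(cur, root_folder):
    # Build "/DC1/vm/web" by walking parents until the root folder.
    full_path = '/' + cur.name
    while hasattr(cur, 'parent') and cur.parent:
        if cur.parent == root_folder:
            break
        cur = cur.parent
        full_path = '/' + cur.name + full_path
    return full_path

print(get_folder_path(leaf, root))  # -> /DC1/vm/web
```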
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 60,307 |
user: createhome=no fails with [Errno 2] No such file or directory
|
##### SUMMARY
I am creating a system user account for Jenkins workers, and I want to create the home directory myself, after the user is already created. Ansible 2.8 let me; the current devel branch fails.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
user
##### ANSIBLE VERSION
```
ansible 2.9.0.dev0
config file = /home/mg/src/deployments/provisioning/ansible.cfg
configured module search path = [u'/home/mg/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/mg/src/ansible/lib/ansible
executable location = /home/mg/src/ansible/bin/ansible
python version = 2.7.16 (default, Apr 6 2019, 01:42:57) [GCC 8.3.0]
```
##### CONFIGURATION
```
ACTION_WARNINGS(/home/mg/src/deployments/provisioning/ansible.cfg) = False
CACHE_PLUGIN(/home/mg/src/deployments/provisioning/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/home/mg/src/deployments/provisioning/ansible.cfg) = .cache/facts/
CACHE_PLUGIN_TIMEOUT(/home/mg/src/deployments/provisioning/ansible.cfg) = 86400
DEFAULT_CALLBACK_WHITELIST(/home/mg/src/deployments/provisioning/ansible.cfg) = [u'fancy_html']
DEFAULT_FORKS(/home/mg/src/deployments/provisioning/ansible.cfg) = 15
DEFAULT_GATHERING(/home/mg/src/deployments/provisioning/ansible.cfg) = smart
DEFAULT_HOST_LIST(/home/mg/src/deployments/provisioning/ansible.cfg) = [u'/home/mg/src/deployments/provisioning/inventory']
DEFAULT_LOG_PATH(/home/mg/src/deployments/provisioning/ansible.cfg) = /home/mg/src/deployments/provisioning/.cache/ansible.log
DEFAULT_REMOTE_USER(/home/mg/src/deployments/provisioning/ansible.cfg) = root
DEFAULT_STDOUT_CALLBACK(/home/mg/src/deployments/provisioning/ansible.cfg) = yaml
DEFAULT_VAULT_PASSWORD_FILE(/home/mg/src/deployments/provisioning/ansible.cfg) = /home/mg/src/deployments/provisioning/askpass.py
RETRY_FILES_ENABLED(/home/mg/src/deployments/provisioning/ansible.cfg) = False
```
##### OS / ENVIRONMENT
Ubuntu 19.04 on the controller, Ubuntu 16.04 and 18.04 on the targets.
##### STEPS TO REPRODUCE
```yaml
- hosts: all
tasks:
- name: create a user for the jenkins worker
user: name=jenkins-worker home=/var/lib/jenkins-worker createhome=no shell=/bin/bash system=yes
tags: [ jenkins, user ]
- name: create /var/lib/jenkins-worker
file: dest=/var/lib/jenkins-worker state=directory owner=jenkins-worker group=jenkins-worker
tags: [ jenkins, user ]
```
##### EXPECTED RESULTS
I expect there to be a /var/lib/jenkins-worker, owned by jenkins-worker:jenkins-worker, with no dotfiles inside it.
##### ACTUAL RESULTS
```
TASK [jenkins-worker : create a user for the jenkins worker] ******************************************************************************************************************************************************
fatal: [xenial]: FAILED! => changed=false
msg: '[Errno 2] No such file or directory: ''/var/lib/jenkins-worker'''
fatal: [bionic]: FAILED! => changed=false
msg: '[Errno 2] No such file or directory: ''/var/lib/jenkins-worker'''
```
What's even more fun is that when I re-ran the playbook, it succeeded, and I don't understand what even is going on any more.
|
https://github.com/ansible/ansible/issues/60307
|
https://github.com/ansible/ansible/pull/60310
|
13403b36889a24740178908d3f816d80545b06ad
|
c71622b31a52c44f171a4bb9da1331939ad7aa60
| 2019-08-09T10:43:06Z |
python
| 2019-08-12T14:37:45Z |
lib/ansible/modules/system/user.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Stephen Fromm <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['stableinterface'],
'supported_by': 'core'}
DOCUMENTATION = r'''
module: user
version_added: "0.2"
short_description: Manage user accounts
description:
- Manage user accounts and user attributes.
- For Windows targets, use the M(win_user) module instead.
options:
name:
description:
- Name of the user to create, remove or modify.
type: str
required: true
aliases: [ user ]
uid:
description:
- Optionally sets the I(UID) of the user.
type: int
comment:
description:
- Optionally sets the description (aka I(GECOS)) of user account.
type: str
hidden:
description:
- macOS only, optionally hide the user from the login window and system preferences.
- The default will be C(yes) if the I(system) option is used.
type: bool
version_added: "2.6"
non_unique:
description:
- Optionally when used with the -u option, this option allows to change the user ID to a non-unique value.
type: bool
default: no
version_added: "1.1"
seuser:
description:
- Optionally sets the seuser type (user_u) on selinux enabled systems.
type: str
version_added: "2.1"
group:
description:
- Optionally sets the user's primary group (takes a group name).
type: str
groups:
description:
- List of groups user will be added to. When set to an empty string C(''),
C(null), or C(~), the user is removed from all groups except the
primary group. (C(~) means C(null) in YAML)
- Before Ansible 2.3, the only input format allowed was a comma separated string.
- Mutually exclusive with C(local)
type: list
append:
description:
- If C(yes), add the user to the groups specified in C(groups).
- If C(no), user will only be added to the groups specified in C(groups),
removing them from all other groups.
- Mutually exclusive with C(local)
type: bool
default: no
shell:
description:
- Optionally set the user's shell.
- On macOS, before Ansible 2.5, the default shell for non-system users was C(/usr/bin/false).
Since Ansible 2.5, the default shell for non-system users on macOS is C(/bin/bash).
- On other operating systems, the default shell is determined by the underlying tool being
used. See Notes for details.
type: str
home:
description:
- Optionally set the user's home directory.
type: path
skeleton:
description:
- Optionally set a home skeleton directory.
- Requires C(create_home) option!
type: str
version_added: "2.0"
password:
description:
- Optionally set the user's password to this crypted value.
- On macOS systems, this value has to be cleartext. Beware of security issues.
- To create a disabled account on Linux systems, set this to C('!') or C('*').
- See U(https://docs.ansible.com/ansible/faq.html#how-do-i-generate-encrypted-passwords-for-the-user-module)
for details on various ways to generate these password values.
type: str
state:
description:
- Whether the account should exist or not, taking action if the state is different from what is stated.
type: str
choices: [ absent, present ]
default: present
create_home:
description:
- Unless set to C(no), a home directory will be made for the user
when the account is created or if the home directory does not exist.
- Changed from C(createhome) to C(create_home) in Ansible 2.5.
type: bool
default: yes
aliases: [ createhome ]
move_home:
description:
- "If set to C(yes) when used with C(home: ), attempt to move the user's old home
directory to the specified directory if it isn't there already and the old home exists."
type: bool
default: no
system:
description:
- When creating an account C(state=present), setting this to C(yes) makes the user a system account.
- This setting cannot be changed on existing users.
type: bool
default: no
force:
description:
- This only affects C(state=absent), it forces removal of the user and associated directories on supported platforms.
- The behavior is the same as C(userdel --force), check the man page for C(userdel) on your system for details and support.
- When used with C(generate_ssh_key=yes) this forces an existing key to be overwritten.
type: bool
default: no
remove:
description:
- This only affects C(state=absent), it attempts to remove directories associated with the user.
- The behavior is the same as C(userdel --remove), check the man page for details and support.
type: bool
default: no
login_class:
description:
- Optionally sets the user's login class, a feature of most BSD OSs.
type: str
generate_ssh_key:
description:
- Whether to generate a SSH key for the user in question.
- This will B(not) overwrite an existing SSH key unless used with C(force=yes).
type: bool
default: no
version_added: "0.9"
ssh_key_bits:
description:
- Optionally specify number of bits in SSH key to create.
type: int
default: default set by ssh-keygen
version_added: "0.9"
ssh_key_type:
description:
- Optionally specify the type of SSH key to generate.
- Available SSH key types will depend on implementation
present on target host.
type: str
default: rsa
version_added: "0.9"
ssh_key_file:
description:
- Optionally specify the SSH key filename.
- If this is a relative filename then it will be relative to the user's home directory.
- This parameter defaults to I(.ssh/id_rsa).
type: path
version_added: "0.9"
ssh_key_comment:
description:
- Optionally define the comment for the SSH key.
type: str
default: ansible-generated on $HOSTNAME
version_added: "0.9"
ssh_key_passphrase:
description:
- Set a passphrase for the SSH key.
- If no passphrase is provided, the SSH key will default to having no passphrase.
type: str
version_added: "0.9"
update_password:
description:
- C(always) will update passwords if they differ.
- C(on_create) will only set the password for newly created users.
type: str
choices: [ always, on_create ]
default: always
version_added: "1.3"
expires:
description:
- An expiry time for the user in epoch, it will be ignored on platforms that do not support this.
- Currently supported on GNU/Linux, FreeBSD, and DragonFlyBSD.
- Since Ansible 2.6 you can remove the expiry time by specifying a negative value.
Currently supported on GNU/Linux and FreeBSD.
type: float
version_added: "1.9"
password_lock:
description:
- Lock the password (usermod -L, pw lock, usermod -C).
- However, implementation differs across platforms, so this option does not always mean the user cannot log in via other methods.
- This option does not disable the user, only lock the password. Do not change the password in the same task.
- Currently supported on Linux, FreeBSD, DragonFlyBSD, NetBSD, OpenBSD.
type: bool
version_added: "2.6"
local:
description:
- Forces the use of "local" command alternatives on platforms that implement it.
- This is useful in environments that use centralized authentication when you want to manipulate the local users
(i.e. it uses C(luseradd) instead of C(useradd)).
- This will check C(/etc/passwd) for an existing account before invoking commands. If the local account database
exists somewhere other than C(/etc/passwd), this setting will not work properly.
- This requires that the above commands as well as C(/etc/passwd) must exist on the target host, otherwise it will be a fatal error.
- Mutually exclusive with C(groups) and C(append)
type: bool
default: no
version_added: "2.4"
profile:
description:
- Sets the profile of the user.
- Does nothing when used with other platforms.
- Can set multiple profiles using comma separation.
- To delete all the profiles, use C(profile='').
- Currently supported on Illumos/Solaris.
type: str
version_added: "2.8"
authorization:
description:
- Sets the authorization of the user.
- Does nothing when used with other platforms.
- Can set multiple authorizations using comma separation.
- To delete all authorizations, use C(authorization='').
- Currently supported on Illumos/Solaris.
type: str
version_added: "2.8"
role:
description:
- Sets the role of the user.
- Does nothing when used with other platforms.
- Can set multiple roles using comma separation.
- To delete all roles, use C(role='').
- Currently supported on Illumos/Solaris.
type: str
version_added: "2.8"
notes:
- There are specific requirements per platform on user management utilities. However
they generally come pre-installed with the system and Ansible will require they
are present at runtime. If they are not, a descriptive error message will be shown.
- On SunOS platforms, the shadow file is backed up automatically since this module edits it directly.
On other platforms, the shadow file is backed up by the underlying tools used by this module.
- On macOS, this module uses C(dscl) to create, modify, and delete accounts. C(dseditgroup) is used to
modify group membership. Accounts are hidden from the login window by modifying
C(/Library/Preferences/com.apple.loginwindow.plist).
- On FreeBSD, this module uses C(pw useradd) and C(chpass) to create, C(pw usermod) and C(chpass) to modify,
C(pw userdel) remove, C(pw lock) to lock, and C(pw unlock) to unlock accounts.
- On all other platforms, this module uses C(useradd) to create, C(usermod) to modify, and
C(userdel) to remove accounts.
seealso:
- module: authorized_key
- module: group
- module: win_user
author:
- Stephen Fromm (@sfromm)
'''
EXAMPLES = r'''
- name: Add the user 'johnd' with a specific uid and a primary group of 'admin'
user:
name: johnd
comment: John Doe
uid: 1040
group: admin
- name: Add the user 'james' with a bash shell, appending the group 'admins' and 'developers' to the user's groups
user:
name: james
shell: /bin/bash
groups: admins,developers
append: yes
- name: Remove the user 'johnd'
user:
name: johnd
state: absent
remove: yes
- name: Create a 2048-bit SSH key for user jsmith in ~jsmith/.ssh/id_rsa
user:
name: jsmith
generate_ssh_key: yes
ssh_key_bits: 2048
ssh_key_file: .ssh/id_rsa
- name: Add a consultant whose account you want to expire
user:
name: james18
shell: /bin/zsh
groups: developers
expires: 1422403387
- name: Starting at Ansible 2.6, modify user, remove expiry time
user:
name: james18
expires: -1
'''
RETURN = r'''
append:
description: Whether or not to append the user to groups
returned: When state is 'present' and the user exists
type: bool
sample: True
comment:
description: Comment section from passwd file, usually the user name
returned: When user exists
type: str
sample: Agent Smith
create_home:
description: Whether or not to create the home directory
returned: When user does not exist and not check mode
type: bool
sample: True
force:
description: Whether or not a user account was forcibly deleted
returned: When state is 'absent' and user exists
type: bool
sample: False
group:
description: Primary user group ID
returned: When user exists
type: int
sample: 1001
groups:
description: List of groups of which the user is a member
returned: When C(groups) is not empty and C(state) is 'present'
type: str
sample: 'chrony,apache'
home:
description: "Path to user's home directory"
returned: When C(state) is 'present'
type: str
sample: '/home/asmith'
move_home:
description: Whether or not to move an existing home directory
returned: When C(state) is 'present' and user exists
type: bool
sample: False
name:
description: User account name
returned: always
type: str
sample: asmith
password:
description: Masked value of the password
returned: When C(state) is 'present' and C(password) is not empty
type: str
sample: 'NOT_LOGGING_PASSWORD'
remove:
description: Whether or not to remove the user account
returned: When C(state) is 'absent' and user exists
type: bool
sample: True
shell:
description: User login shell
returned: When C(state) is 'present'
type: str
sample: '/bin/bash'
ssh_fingerprint:
description: Fingerprint of generated SSH key
returned: When C(generate_ssh_key) is C(True)
type: str
sample: '2048 SHA256:aYNHYcyVm87Igh0IMEDMbvW0QDlRQfE0aJugp684ko8 ansible-generated on host (RSA)'
ssh_key_file:
description: Path to generated SSH public key file
returned: When C(generate_ssh_key) is C(True)
type: str
sample: /home/asmith/.ssh/id_rsa
ssh_public_key:
description: Generated SSH public key file
returned: When C(generate_ssh_key) is C(True)
type: str
sample: >
'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC95opt4SPEC06tOYsJQJIuN23BbLMGmYo8ysVZQc4h2DZE9ugbjWWGS1/pweUGjVstgzMkBEeBCByaEf/RJKNecKRPeGd2Bw9DCj/bn5Z6rGfNENKBmo
618mUJBvdlEgea96QGjOwSB7/gmonduC7gsWDMNcOdSE3wJMTim4lddiBx4RgC9yXsJ6Tkz9BHD73MXPpT5ETnse+A3fw3IGVSjaueVnlUyUmOBf7fzmZbhlFVXf2Zi2rFTXqvbdGHKkzpw1U8eB8xFPP7y
d5u1u0e6Acju/8aZ/l17IDFiLke5IzlqIMRTEbDwLNeO84YQKWTm9fODHzhYe0yvxqLiK07 ansible-generated on host'
stderr:
description: Standard error from running commands
returned: When stderr is returned by a command that is run
type: str
sample: Group wheels does not exist
stdout:
description: Standard output from running commands
returned: When standard output is returned by the command that is run
type: str
sample:
system:
description: Whether or not the account is a system account
returned: When C(system) is passed to the module and the account does not exist
type: bool
sample: True
uid:
description: User ID of the user account
returned: When C(UID) is passed to the module
type: int
sample: 1044
'''
import errno
import grp
import calendar
import os
import re
import pty
import pwd
import select
import shutil
import socket
import subprocess
import time
from ansible.module_utils import distro
from ansible.module_utils._text import to_native, to_bytes, to_text
from ansible.module_utils.basic import load_platform_subclass, AnsibleModule
try:
import spwd
HAVE_SPWD = True
except ImportError:
HAVE_SPWD = False
_HASH_RE = re.compile(r'[^a-zA-Z0-9./=]')
class User(object):
"""
This is a generic User manipulation class that is subclassed
based on platform.
A subclass may wish to override the following action methods:-
- create_user()
- remove_user()
- modify_user()
- ssh_key_gen()
- ssh_key_fingerprint()
- user_exists()
All subclasses MUST define platform and distribution (which may be None).
"""
platform = 'Generic'
distribution = None
PASSWORDFILE = '/etc/passwd'
SHADOWFILE = '/etc/shadow'
SHADOWFILE_EXPIRE_INDEX = 7
LOGIN_DEFS = '/etc/login.defs'
DATE_FORMAT = '%Y-%m-%d'
def __new__(cls, *args, **kwargs):
return load_platform_subclass(User, args, kwargs)
def __init__(self, module):
self.module = module
self.state = module.params['state']
self.name = module.params['name']
self.uid = module.params['uid']
self.hidden = module.params['hidden']
self.non_unique = module.params['non_unique']
self.seuser = module.params['seuser']
self.group = module.params['group']
self.comment = module.params['comment']
self.shell = module.params['shell']
self.password = module.params['password']
self.force = module.params['force']
self.remove = module.params['remove']
self.create_home = module.params['create_home']
self.move_home = module.params['move_home']
self.skeleton = module.params['skeleton']
self.system = module.params['system']
self.login_class = module.params['login_class']
self.append = module.params['append']
self.sshkeygen = module.params['generate_ssh_key']
self.ssh_bits = module.params['ssh_key_bits']
self.ssh_type = module.params['ssh_key_type']
self.ssh_comment = module.params['ssh_key_comment']
self.ssh_passphrase = module.params['ssh_key_passphrase']
self.update_password = module.params['update_password']
self.home = module.params['home']
self.expires = None
self.password_lock = module.params['password_lock']
self.groups = None
self.local = module.params['local']
self.profile = module.params['profile']
self.authorization = module.params['authorization']
self.role = module.params['role']
if module.params['groups'] is not None:
self.groups = ','.join(module.params['groups'])
if module.params['expires'] is not None:
try:
self.expires = time.gmtime(module.params['expires'])
except Exception as e:
module.fail_json(msg="Invalid value for 'expires' %s: %s" % (self.expires, to_native(e)))
if module.params['ssh_key_file'] is not None:
self.ssh_file = module.params['ssh_key_file']
else:
self.ssh_file = os.path.join('.ssh', 'id_%s' % self.ssh_type)
def check_password_encrypted(self):
# Darwin needs cleartext password, so skip validation
if self.module.params['password'] and self.platform != 'Darwin':
maybe_invalid = False
# Allow setting the password to * or ! in order to disable the account
if self.module.params['password'] in set(['*', '!']):
maybe_invalid = False
else:
# : for delimiter, * for disable user, ! for lock user
# these characters are invalid in the password
if any(char in self.module.params['password'] for char in ':*!'):
maybe_invalid = True
if '$' not in self.module.params['password']:
maybe_invalid = True
else:
fields = self.module.params['password'].split("$")
if len(fields) >= 3:
# contains character outside the crypto constraint
if bool(_HASH_RE.search(fields[-1])):
maybe_invalid = True
# md5
if fields[1] == '1' and len(fields[-1]) != 22:
maybe_invalid = True
# sha256
if fields[1] == '5' and len(fields[-1]) != 43:
maybe_invalid = True
# sha512
if fields[1] == '6' and len(fields[-1]) != 86:
maybe_invalid = True
else:
maybe_invalid = True
if maybe_invalid:
self.module.warn("The input password appears not to have been hashed. "
"The 'password' argument must be encrypted for this module to work properly.")
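The shape test above can be summarized in a small standalone function. This sketch is slightly stricter than the module (it rejects unrecognized $id$ schemes, which the module lets through), and the sample hash below is fabricated:

```python
import re

_HASH_RE = re.compile(r'[^a-zA-Z0-9./=]')

def looks_hashed(password):
    # crypt(3)-style hashes are "$<id>$<salt>$<digest>"; the digest length is
    # 22 for md5 ($1$), 43 for sha256 ($5$) and 86 for sha512 ($6$).
    if password in ('*', '!'):
        return True  # account disable/lock markers are accepted as-is
    if any(ch in password for ch in ':*!') or '$' not in password:
        return False
    fields = password.split('$')
    if len(fields) < 3 or _HASH_RE.search(fields[-1]):
        return False
    return {'1': 22, '5': 43, '6': 86}.get(fields[1]) == len(fields[-1])

print(looks_hashed('$6$salt$' + 'a' * 86))  # -> True
print(looks_hashed('plaintext'))            # -> False
```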
def execute_command(self, cmd, use_unsafe_shell=False, data=None, obey_checkmode=True):
if self.module.check_mode and obey_checkmode:
self.module.debug('In check mode, would have run: "%s"' % cmd)
return (0, '', '')
else:
# cast all args to strings ansible-modules-core/issues/4397
cmd = [str(x) for x in cmd]
return self.module.run_command(cmd, use_unsafe_shell=use_unsafe_shell, data=data)
def backup_shadow(self):
if not self.module.check_mode and self.SHADOWFILE:
return self.module.backup_local(self.SHADOWFILE)
def remove_user_userdel(self):
if self.local:
command_name = 'luserdel'
else:
command_name = 'userdel'
cmd = [self.module.get_bin_path(command_name, True)]
if self.force:
cmd.append('-f')
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def create_user_useradd(self):
if self.local:
command_name = 'luseradd'
else:
command_name = 'useradd'
cmd = [self.module.get_bin_path(command_name, True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.seuser is not None:
cmd.append('-Z')
cmd.append(self.seuser)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
elif self.group_exists(self.name):
# use the -N option (no user group) if a group already
# exists with the same name as the user to prevent
# errors from useradd trying to create a group when
# USERGROUPS_ENAB is set in /etc/login.defs.
if os.path.exists('/etc/redhat-release'):
dist = distro.linux_distribution(full_distribution_name=False)
major_release = int(dist[1].split('.')[0])
if major_release <= 5:
cmd.append('-n')
else:
cmd.append('-N')
elif os.path.exists('/etc/SuSE-release'):
# -N did not exist in useradd before SLE 11 and did not
# automatically create a group
dist = distro.linux_distribution(full_distribution_name=False)
major_release = int(dist[1].split('.')[0])
if major_release >= 12:
cmd.append('-N')
else:
cmd.append('-N')
if self.groups is not None and not self.local and len(self.groups):
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
# If the specified path to the user home contains parent directories that
# do not exist, first create the home directory since useradd cannot
# create parent directories
parent = os.path.dirname(self.home)
if not os.path.isdir(parent):
self.create_homedir(self.home)
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.expires is not None:
cmd.append('-e')
if self.expires < time.gmtime(0):
cmd.append('')
else:
cmd.append(time.strftime(self.DATE_FORMAT, self.expires))
if self.password is not None:
cmd.append('-p')
cmd.append(self.password)
if self.create_home:
if not self.local:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
else:
cmd.append('-M')
if self.system:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def _check_usermod_append(self):
# check if this version of usermod can append groups
if self.local:
command_name = 'lusermod'
else:
command_name = 'usermod'
usermod_path = self.module.get_bin_path(command_name, True)
# for some reason, usermod --help cannot be used by non-root
# on RH/Fedora, due to lack of execute bit for others
if not os.access(usermod_path, os.X_OK):
return False
cmd = [usermod_path, '--help']
(rc, data1, data2) = self.execute_command(cmd, obey_checkmode=False)
helpout = data1 + data2
# check if --append exists
lines = to_native(helpout).split('\n')
for line in lines:
if line.strip().startswith('-a, --append'):
return True
return False
def modify_user_usermod(self):
if self.local:
command_name = 'lusermod'
else:
command_name = 'usermod'
cmd = [self.module.get_bin_path(command_name, True)]
info = self.user_info()
has_append = self._check_usermod_append()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
# get a list of all groups for the user, including the primary
current_groups = self.user_group_membership(exclude_primary=False)
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set(remove_existing=False)
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
if has_append:
cmd.append('-a')
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod and not self.local:
if self.append and not has_append:
cmd.append('-A')
cmd.append(','.join(group_diff))
else:
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
cmd.append('-d')
cmd.append(self.home)
if self.move_home:
cmd.append('-m')
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.expires is not None:
current_expires = int(self.user_password()[1])
if self.expires < time.gmtime(0):
if current_expires >= 0:
cmd.append('-e')
cmd.append('')
else:
# Convert days since Epoch to seconds since Epoch as struct_time
current_expire_date = time.gmtime(current_expires * 86400)
# Current expires is negative or we compare year, month, and day only
if current_expires < 0 or current_expire_date[:3] != self.expires[:3]:
cmd.append('-e')
cmd.append(time.strftime(self.DATE_FORMAT, self.expires))
# Lock if no password or unlocked, unlock only if locked
if self.password_lock and not info[1].startswith('!'):
cmd.append('-L')
elif self.password_lock is False and info[1].startswith('!'):
# usermod will refuse to unlock a user with no password, module shows 'changed' regardless
cmd.append('-U')
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd.append('-p')
cmd.append(self.password)
# skip if no changes to be made
if len(cmd) == 1:
return (None, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
def group_exists(self, group):
try:
# Try group as a gid first
grp.getgrgid(int(group))
return True
except (ValueError, KeyError):
try:
grp.getgrnam(group)
return True
except KeyError:
return False
def group_info(self, group):
if not self.group_exists(group):
return False
try:
# Try group as a gid first
return list(grp.getgrgid(int(group)))
except (ValueError, KeyError):
return list(grp.getgrnam(group))
def get_groups_set(self, remove_existing=True):
if self.groups is None:
return None
info = self.user_info()
groups = set(x.strip() for x in self.groups.split(',') if x)
for g in groups.copy():
if not self.group_exists(g):
self.module.fail_json(msg="Group %s does not exist" % (g))
if info and remove_existing and self.group_info(g)[2] == info[3]:
groups.remove(g)
return groups
def user_group_membership(self, exclude_primary=True):
''' Return a list of groups the user belongs to '''
groups = []
info = self.get_pwd_info()
for group in grp.getgrall():
if self.name in group.gr_mem:
# Exclude the user's primary group by default
if not exclude_primary:
groups.append(group[0])
else:
if info[3] != group.gr_gid:
groups.append(group[0])
return groups
def user_exists(self):
# The pwd module does not distinguish between local and directory accounts.
# Its output cannot be used to determine whether or not an account exists locally.
# It returns True if the account exists locally or in the directory, so instead
# look in the local PASSWORD file for an existing account.
if self.local:
if not os.path.exists(self.PASSWORDFILE):
self.module.fail_json(msg="'local: true' specified but unable to find local account file {0} to parse.".format(self.PASSWORDFILE))
exists = False
name_test = '{0}:'.format(self.name)
with open(self.PASSWORDFILE, 'rb') as f:
reversed_lines = f.readlines()[::-1]
for line in reversed_lines:
if line.startswith(to_bytes(name_test)):
exists = True
break
if not exists:
self.module.warn(
"'local: true' specified and user '{name}' was not found in {file}. "
"The local user account may already exist if the local account database exists "
"somewhere other than {file}.".format(file=self.PASSWORDFILE, name=self.name))
return exists
else:
try:
if pwd.getpwnam(self.name):
return True
except KeyError:
return False
def get_pwd_info(self):
if not self.user_exists():
return False
return list(pwd.getpwnam(self.name))
def user_info(self):
if not self.user_exists():
return False
info = self.get_pwd_info()
if len(info[1]) == 1 or len(info[1]) == 0:
info[1] = self.user_password()[0]
return info
def user_password(self):
passwd = ''
expires = ''
if HAVE_SPWD:
try:
passwd = spwd.getspnam(self.name)[1]
expires = spwd.getspnam(self.name)[7]
return passwd, expires
except KeyError:
return passwd, expires
except OSError as e:
# Python 3.6 raises PermissionError instead of KeyError
# Due to absence of PermissionError in python2.7 need to check
# errno
if e.errno in (errno.EACCES, errno.EPERM):
return passwd, expires
raise
if not self.user_exists():
return passwd, expires
elif self.SHADOWFILE:
passwd, expires = self.parse_shadow_file()
return passwd, expires
def parse_shadow_file(self):
passwd = ''
expires = ''
if os.path.exists(self.SHADOWFILE) and os.access(self.SHADOWFILE, os.R_OK):
with open(self.SHADOWFILE, 'r') as f:
for line in f:
if line.startswith('%s:' % self.name):
passwd = line.split(':')[1]
expires = line.split(':')[self.SHADOWFILE_EXPIRE_INDEX] or -1
return passwd, expires
def get_ssh_key_path(self):
info = self.user_info()
if os.path.isabs(self.ssh_file):
ssh_key_file = self.ssh_file
else:
if not os.path.exists(info[5]) and not self.module.check_mode:
raise Exception('User %s home directory does not exist' % self.name)
ssh_key_file = os.path.join(info[5], self.ssh_file)
return ssh_key_file
def ssh_key_gen(self):
info = self.user_info()
overwrite = None
try:
ssh_key_file = self.get_ssh_key_path()
except Exception as e:
return (1, '', to_native(e))
ssh_dir = os.path.dirname(ssh_key_file)
if not os.path.exists(ssh_dir):
if self.module.check_mode:
return (0, '', '')
try:
os.mkdir(ssh_dir, int('0700', 8))
os.chown(ssh_dir, info[2], info[3])
except OSError as e:
return (1, '', 'Failed to create %s: %s' % (ssh_dir, to_native(e)))
if os.path.exists(ssh_key_file):
if self.force:
# ssh-keygen doesn't support overwriting the key interactively, so send 'y' to confirm
overwrite = 'y'
else:
return (None, 'Key already exists, use "force: yes" to overwrite', '')
cmd = [self.module.get_bin_path('ssh-keygen', True)]
cmd.append('-t')
cmd.append(self.ssh_type)
if self.ssh_bits > 0:
cmd.append('-b')
cmd.append(self.ssh_bits)
cmd.append('-C')
cmd.append(self.ssh_comment)
cmd.append('-f')
cmd.append(ssh_key_file)
if self.ssh_passphrase is not None:
if self.module.check_mode:
self.module.debug('In check mode, would have run: "%s"' % cmd)
return (0, '', '')
master_in_fd, slave_in_fd = pty.openpty()
master_out_fd, slave_out_fd = pty.openpty()
master_err_fd, slave_err_fd = pty.openpty()
env = os.environ.copy()
env['LC_ALL'] = 'C'
try:
p = subprocess.Popen([to_bytes(c) for c in cmd],
stdin=slave_in_fd,
stdout=slave_out_fd,
stderr=slave_err_fd,
preexec_fn=os.setsid,
env=env)
out_buffer = b''
err_buffer = b''
while p.poll() is None:
r, w, e = select.select([master_out_fd, master_err_fd], [], [], 1)
first_prompt = b'Enter passphrase (empty for no passphrase):'
second_prompt = b'Enter same passphrase again'
prompt = first_prompt
for fd in r:
if fd == master_out_fd:
chunk = os.read(master_out_fd, 10240)
out_buffer += chunk
if prompt in out_buffer:
os.write(master_in_fd, to_bytes(self.ssh_passphrase, errors='strict') + b'\r')
prompt = second_prompt
else:
chunk = os.read(master_err_fd, 10240)
err_buffer += chunk
if prompt in err_buffer:
os.write(master_in_fd, to_bytes(self.ssh_passphrase, errors='strict') + b'\r')
prompt = second_prompt
if b'Overwrite (y/n)?' in out_buffer or b'Overwrite (y/n)?' in err_buffer:
# The key was created between us checking for existence and now
return (None, 'Key already exists', '')
rc = p.returncode
out = to_native(out_buffer)
err = to_native(err_buffer)
except OSError as e:
return (1, '', to_native(e))
else:
cmd.append('-N')
cmd.append('')
(rc, out, err) = self.execute_command(cmd, data=overwrite)
if rc == 0 and not self.module.check_mode:
# If the keys were successfully created, we should be able
# to tweak ownership.
os.chown(ssh_key_file, info[2], info[3])
os.chown('%s.pub' % ssh_key_file, info[2], info[3])
return (rc, out, err)
def ssh_key_fingerprint(self):
ssh_key_file = self.get_ssh_key_path()
if not os.path.exists(ssh_key_file):
return (1, 'SSH Key file %s does not exist' % ssh_key_file, '')
cmd = [self.module.get_bin_path('ssh-keygen', True)]
cmd.append('-l')
cmd.append('-f')
cmd.append(ssh_key_file)
return self.execute_command(cmd, obey_checkmode=False)
def get_ssh_public_key(self):
ssh_public_key_file = '%s.pub' % self.get_ssh_key_path()
try:
with open(ssh_public_key_file, 'r') as f:
ssh_public_key = f.read().strip()
except IOError:
return None
return ssh_public_key
def create_user(self):
# by default we use the create_user_useradd method
return self.create_user_useradd()
def remove_user(self):
# by default we use the remove_user_userdel method
return self.remove_user_userdel()
def modify_user(self):
# by default we use the modify_user_usermod method
return self.modify_user_usermod()
def create_homedir(self, path):
if not os.path.exists(path):
if self.skeleton is not None:
skeleton = self.skeleton
else:
skeleton = '/etc/skel'
if os.path.exists(skeleton):
try:
shutil.copytree(skeleton, path, symlinks=True)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
else:
try:
os.makedirs(path)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
# get umask from /etc/login.defs and set correct home mode
if os.path.exists(self.LOGIN_DEFS):
with open(self.LOGIN_DEFS, 'r') as f:
for line in f:
m = re.match(r'^UMASK\s+(\d+)$', line)
if m:
umask = int(m.group(1), 8)
mode = 0o777 & ~umask
try:
os.chmod(path, mode)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
def chown_homedir(self, uid, gid, path):
try:
os.chown(path, uid, gid)
for root, dirs, files in os.walk(path):
for d in dirs:
os.chown(os.path.join(root, d), uid, gid)
for f in files:
os.chown(os.path.join(root, f), uid, gid)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
# ===========================================
class FreeBsdUser(User):
"""
This is a FreeBSD User manipulation class - it uses the pw command
to manipulate the user database, followed by the chpass command
to change the password.
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'FreeBSD'
distribution = None
SHADOWFILE = '/etc/master.passwd'
SHADOWFILE_EXPIRE_INDEX = 6
DATE_FORMAT = '%d-%b-%Y'
def remove_user(self):
cmd = [
self.module.get_bin_path('pw', True),
'userdel',
'-n',
self.name
]
if self.remove:
cmd.append('-r')
return self.execute_command(cmd)
def create_user(self):
cmd = [
self.module.get_bin_path('pw', True),
'useradd',
'-n',
self.name,
]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.expires is not None:
cmd.append('-e')
if self.expires < time.gmtime(0):
cmd.append('0')
else:
cmd.append(str(calendar.timegm(self.expires)))
# system cannot be handled currently - should we error if it's requested?
# create the user
(rc, out, err) = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# we have to set the password in a second command
if self.password is not None:
cmd = [
self.module.get_bin_path('chpass', True),
'-p',
self.password,
self.name
]
return self.execute_command(cmd)
return (rc, out, err)
def modify_user(self):
cmd = [
self.module.get_bin_path('pw', True),
'usermod',
'-n',
self.name
]
cmd_len = len(cmd)
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
if (info[5] != self.home and self.move_home) or (not os.path.exists(self.home) and self.create_home):
cmd.append('-m')
if info[5] != self.home:
cmd.append('-d')
cmd.append(self.home)
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
# find current login class
user_login_class = None
if os.path.exists(self.SHADOWFILE) and os.access(self.SHADOWFILE, os.R_OK):
with open(self.SHADOWFILE, 'r') as f:
for line in f:
if line.startswith('%s:' % self.name):
user_login_class = line.split(':')[4]
# act only if login_class change
if self.login_class != user_login_class:
cmd.append('-L')
cmd.append(self.login_class)
if self.groups is not None:
current_groups = self.user_group_membership()
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
groups_need_mod = False
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
new_groups = groups
if self.append:
new_groups = groups | set(current_groups)
cmd.append(','.join(new_groups))
if self.expires is not None:
current_expires = int(self.user_password()[1])
# If expiration is negative or zero and the current expiration is greater than zero, disable expiration.
# In OpenBSD, setting expiration to zero disables expiration. It does not expire the account.
if self.expires <= time.gmtime(0):
if current_expires > 0:
cmd.append('-e')
cmd.append('0')
else:
# Convert days since Epoch to seconds since Epoch as struct_time
current_expire_date = time.gmtime(current_expires)
# Current expires is negative or we compare year, month, and day only
if current_expires <= 0 or current_expire_date[:3] != self.expires[:3]:
cmd.append('-e')
cmd.append(str(calendar.timegm(self.expires)))
# modify the user if cmd will do anything
if cmd_len != len(cmd):
(rc, out, err) = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
else:
(rc, out, err) = (None, '', '')
# we have to set the password in a second command
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd = [
self.module.get_bin_path('chpass', True),
'-p',
self.password,
self.name
]
return self.execute_command(cmd)
# we have to lock/unlock the password in a distinct command
if self.password_lock and not info[1].startswith('*LOCKED*'):
cmd = [
self.module.get_bin_path('pw', True),
'lock',
self.name
]
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
return self.execute_command(cmd)
elif self.password_lock is False and info[1].startswith('*LOCKED*'):
cmd = [
self.module.get_bin_path('pw', True),
'unlock',
self.name
]
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
return self.execute_command(cmd)
return (rc, out, err)
class DragonFlyBsdUser(FreeBsdUser):
"""
This is a DragonFlyBSD User manipulation class - it inherits the
FreeBsdUser class behaviors, such as using the pw command to
manipulate the user database, followed by the chpass command
to change the password.
"""
platform = 'DragonFly'
class OpenBSDUser(User):
"""
This is an OpenBSD User manipulation class.
Main differences are that OpenBSD:-
- has no concept of a "system" account.
- has no force delete user
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'OpenBSD'
distribution = None
SHADOWFILE = '/etc/master.passwd'
def create_user(self):
cmd = [self.module.get_bin_path('useradd', True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.password is not None and self.password != '*':
cmd.append('-p')
cmd.append(self.password)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
cmd.append(self.name)
return self.execute_command(cmd)
def remove_user_userdel(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def modify_user(self):
cmd = [self.module.get_bin_path('usermod', True)]
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups_option = '-S'
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_option = '-G'
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append(groups_option)
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
# find current login class
user_login_class = None
userinfo_cmd = [self.module.get_bin_path('userinfo', True), self.name]
(rc, out, err) = self.execute_command(userinfo_cmd, obey_checkmode=False)
for line in out.splitlines():
tokens = line.split()
if tokens[0] == 'class' and len(tokens) == 2:
user_login_class = tokens[1]
# act only if login_class change
if self.login_class != user_login_class:
cmd.append('-L')
cmd.append(self.login_class)
if self.password_lock and not info[1].startswith('*'):
cmd.append('-Z')
elif self.password_lock is False and info[1].startswith('*'):
cmd.append('-U')
if self.update_password == 'always' and self.password is not None \
and self.password != '*' and info[1] != self.password:
cmd.append('-p')
cmd.append(self.password)
# skip if no changes to be made
if len(cmd) == 1:
return (None, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
class NetBSDUser(User):
"""
This is a NetBSD User manipulation class.
Main differences are that NetBSD:-
- has no concept of a "system" account.
- has no force delete user
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'NetBSD'
distribution = None
SHADOWFILE = '/etc/master.passwd'
def create_user(self):
cmd = [self.module.get_bin_path('useradd', True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
if len(groups) > 16:
self.module.fail_json(msg="Too many groups (%d). NetBSD allows a maximum of 16." % len(groups))
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.password is not None:
cmd.append('-p')
cmd.append(self.password)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
cmd.append(self.name)
return self.execute_command(cmd)
def remove_user_userdel(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def modify_user(self):
cmd = [self.module.get_bin_path('usermod', True)]
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups = set(current_groups).union(groups)
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
if len(groups) > 16:
self.module.fail_json(msg="Too many groups (%d). NetBSD allows a maximum of 16." % len(groups))
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd.append('-p')
cmd.append(self.password)
if self.password_lock and not info[1].startswith('*LOCKED*'):
cmd.append('-C yes')
elif self.password_lock is False and info[1].startswith('*LOCKED*'):
cmd.append('-C no')
# skip if no changes to be made
if len(cmd) == 1:
return (None, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
class SunOS(User):
"""
This is a SunOS User manipulation class - the main difference between
this class and the generic user class is that Solaris-type distros
don't support the concept of a "system" account, and we need to
edit the /etc/shadow file manually to set a password. (Ugh)
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
- user_info()
"""
platform = 'SunOS'
distribution = None
SHADOWFILE = '/etc/shadow'
USER_ATTR = '/etc/user_attr'
def get_password_defaults(self):
# Read password aging defaults
try:
minweeks = ''
maxweeks = ''
warnweeks = ''
with open("/etc/default/passwd", 'r') as f:
for line in f:
line = line.strip()
if (line.startswith('#') or line == ''):
continue
m = re.match(r'^([^#]*)#(.*)$', line)
if m: # The line contains a hash / comment
line = m.group(1)
key, value = line.split('=')
if key == "MINWEEKS":
minweeks = value.rstrip('\n')
elif key == "MAXWEEKS":
maxweeks = value.rstrip('\n')
elif key == "WARNWEEKS":
warnweeks = value.rstrip('\n')
except Exception as err:
self.module.fail_json(msg="failed to read /etc/default/passwd: %s" % to_native(err))
return (minweeks, maxweeks, warnweeks)
def remove_user(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def create_user(self):
cmd = [self.module.get_bin_path('useradd', True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.profile is not None:
cmd.append('-P')
cmd.append(self.profile)
if self.authorization is not None:
cmd.append('-A')
cmd.append(self.authorization)
if self.role is not None:
cmd.append('-R')
cmd.append(self.role)
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
if not self.module.check_mode:
# we have to set the password by editing the /etc/shadow file
if self.password is not None:
self.backup_shadow()
minweeks, maxweeks, warnweeks = self.get_password_defaults()
try:
lines = []
with open(self.SHADOWFILE, 'rb') as f:
for line in f:
line = to_native(line, errors='surrogate_or_strict')
fields = line.strip().split(':')
if not fields[0] == self.name:
lines.append(line)
continue
fields[1] = self.password
fields[2] = str(int(time.time() // 86400))
if minweeks:
try:
fields[3] = str(int(minweeks) * 7)
except ValueError:
# mirror solaris, which allows for any value in this field, and ignores anything that is not an int.
pass
if maxweeks:
try:
fields[4] = str(int(maxweeks) * 7)
except ValueError:
# mirror solaris, which allows for any value in this field, and ignores anything that is not an int.
pass
if warnweeks:
try:
fields[5] = str(int(warnweeks) * 7)
except ValueError:
# mirror solaris, which allows for any value in this field, and ignores anything that is not an int.
pass
line = ':'.join(fields)
lines.append('%s\n' % line)
with open(self.SHADOWFILE, 'w+') as f:
f.writelines(lines)
except Exception as err:
self.module.fail_json(msg="failed to update users password: %s" % to_native(err))
return (rc, out, err)
def modify_user_usermod(self):
cmd = [self.module.get_bin_path('usermod', True)]
cmd_len = len(cmd)
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
groups_need_mod = False
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
new_groups = groups
if self.append:
new_groups.update(current_groups)
cmd.append(','.join(new_groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.profile is not None and info[7] != self.profile:
cmd.append('-P')
cmd.append(self.profile)
if self.authorization is not None and info[8] != self.authorization:
cmd.append('-A')
cmd.append(self.authorization)
if self.role is not None and info[9] != self.role:
cmd.append('-R')
cmd.append(self.role)
# modify the user if cmd will do anything
if cmd_len != len(cmd):
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
else:
(rc, out, err) = (None, '', '')
# we have to set the password by editing the /etc/shadow file
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
self.backup_shadow()
(rc, out, err) = (0, '', '')
if not self.module.check_mode:
minweeks, maxweeks, warnweeks = self.get_password_defaults()
try:
lines = []
with open(self.SHADOWFILE, 'rb') as f:
for line in f:
line = to_native(line, errors='surrogate_or_strict')
fields = line.strip().split(':')
if not fields[0] == self.name:
lines.append(line)
continue
fields[1] = self.password
fields[2] = str(int(time.time() // 86400))
if minweeks:
fields[3] = str(int(minweeks) * 7)
if maxweeks:
fields[4] = str(int(maxweeks) * 7)
if warnweeks:
fields[5] = str(int(warnweeks) * 7)
line = ':'.join(fields)
lines.append('%s\n' % line)
with open(self.SHADOWFILE, 'w+') as f:
f.writelines(lines)
rc = 0
except Exception as err:
self.module.fail_json(msg="failed to update users password: %s" % to_native(err))
return (rc, out, err)
def user_info(self):
info = super(SunOS, self).user_info()
if info:
info += self._user_attr_info()
return info
def _user_attr_info(self):
info = [''] * 3
with open(self.USER_ATTR, 'r') as file_handler:
for line in file_handler:
lines = line.strip().split('::::')
if lines[0] == self.name:
tmp = dict(x.split('=') for x in lines[1].split(';'))
info[0] = tmp.get('profiles', '')
info[1] = tmp.get('auths', '')
info[2] = tmp.get('roles', '')
return info
class DarwinUser(User):
"""
This is a Darwin macOS User manipulation class.
Main differences are that Darwin:-
- Handles accounts in a database managed by dscl(1)
- Has no useradd/groupadd
- Does not create home directories
- User password must be cleartext
- UID must be given
    - System users must be under 500
This overrides the following methods from the generic class:-
- user_exists()
- create_user()
- remove_user()
- modify_user()
"""
platform = 'Darwin'
distribution = None
SHADOWFILE = None
dscl_directory = '.'
fields = [
('comment', 'RealName'),
('home', 'NFSHomeDirectory'),
('shell', 'UserShell'),
('uid', 'UniqueID'),
('group', 'PrimaryGroupID'),
('hidden', 'IsHidden'),
]
def __init__(self, module):
super(DarwinUser, self).__init__(module)
        # make the user hidden if option is set or defer to system option
if self.hidden is None:
if self.system:
self.hidden = 1
elif self.hidden:
self.hidden = 1
else:
self.hidden = 0
# add hidden to processing if set
if self.hidden is not None:
self.fields.append(('hidden', 'IsHidden'))
def _get_dscl(self):
return [self.module.get_bin_path('dscl', True), self.dscl_directory]
def _list_user_groups(self):
cmd = self._get_dscl()
cmd += ['-search', '/Groups', 'GroupMembership', self.name]
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
groups = []
for line in out.splitlines():
if line.startswith(' ') or line.startswith(')'):
continue
groups.append(line.split()[0])
return groups
def _get_user_property(self, property):
        '''Return user PROPERTY as given by dscl(1) read or None if not found.'''
cmd = self._get_dscl()
cmd += ['-read', '/Users/%s' % self.name, property]
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
if rc != 0:
return None
# from dscl(1)
# if property contains embedded spaces, the list will instead be
# displayed one entry per line, starting on the line after the key.
lines = out.splitlines()
# sys.stderr.write('*** |%s| %s -> %s\n' % (property, out, lines))
if len(lines) == 1:
return lines[0].split(': ')[1]
else:
if len(lines) > 2:
return '\n'.join([lines[1].strip()] + lines[2:])
else:
if len(lines) == 2:
return lines[1].strip()
else:
return None
def _get_next_uid(self, system=None):
        '''
        Return the next available uid. If system=True, then
        uid should be below 500, if possible.
        '''
cmd = self._get_dscl()
cmd += ['-list', '/Users', 'UniqueID']
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
if rc != 0:
self.module.fail_json(
msg="Unable to get the next available uid",
rc=rc,
out=out,
err=err
)
max_uid = 0
max_system_uid = 0
for line in out.splitlines():
current_uid = int(line.split(' ')[-1])
if max_uid < current_uid:
max_uid = current_uid
if max_system_uid < current_uid and current_uid < 500:
max_system_uid = current_uid
if system and (0 < max_system_uid < 499):
return max_system_uid + 1
return max_uid + 1
def _change_user_password(self):
        '''Set the password of SELF.NAME to SELF.PASSWORD.
        Please note that the password must be cleartext.
'''
# some documentation on how is stored passwords on OSX:
# http://blog.lostpassword.com/2012/07/cracking-mac-os-x-lion-accounts-passwords/
# http://null-byte.wonderhowto.com/how-to/hack-mac-os-x-lion-passwords-0130036/
# http://pastebin.com/RYqxi7Ca
# on OSX 10.8+ hash is SALTED-SHA512-PBKDF2
# https://pythonhosted.org/passlib/lib/passlib.hash.pbkdf2_digest.html
# https://gist.github.com/nueh/8252572
cmd = self._get_dscl()
if self.password:
cmd += ['-passwd', '/Users/%s' % self.name, self.password]
else:
cmd += ['-create', '/Users/%s' % self.name, 'Password', '*']
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Error when changing password', err=err, out=out, rc=rc)
return (rc, out, err)
def _make_group_numerical(self):
        '''Convert SELF.GROUP to its numerical value, as a string suitable for dscl.'''
if self.group is None:
self.group = 'nogroup'
try:
self.group = grp.getgrnam(self.group).gr_gid
except KeyError:
self.module.fail_json(msg='Group "%s" not found. Try to create it first using "group" module.' % self.group)
# We need to pass a string to dscl
self.group = str(self.group)
def __modify_group(self, group, action):
'''Add or remove SELF.NAME to or from GROUP depending on ACTION.
ACTION can be 'add' or 'remove' otherwise 'remove' is assumed. '''
if action == 'add':
option = '-a'
else:
option = '-d'
cmd = ['dseditgroup', '-o', 'edit', option, self.name, '-t', 'user', group]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot %s user "%s" to group "%s".'
% (action, self.name, group), err=err, out=out, rc=rc)
return (rc, out, err)
def _modify_group(self):
        '''Reconcile SELF.NAME's membership of the groups in SELF.GROUPS,
        honouring SELF.APPEND, and report whether anything changed.'''
rc = 0
out = ''
err = ''
changed = False
current = set(self._list_user_groups())
if self.groups is not None:
target = set(self.groups.split(','))
else:
target = set([])
if self.append is False:
for remove in current - target:
                (_rc, _out, _err) = self.__modify_group(remove, 'delete')
                rc += _rc
                out += _out
                err += _err
changed = True
for add in target - current:
            (_rc, _out, _err) = self.__modify_group(add, 'add')
rc += _rc
out += _out
err += _err
changed = True
        return (rc, out, err, changed)
def _update_system_user(self):
        '''Hide or show the user on the login window according to SELF.SYSTEM.
        Returns 0 if a change has been made, None otherwise.'''
plist_file = '/Library/Preferences/com.apple.loginwindow.plist'
# http://support.apple.com/kb/HT5017?viewlocale=en_US
cmd = ['defaults', 'read', plist_file, 'HiddenUsersList']
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
# returned value is
# (
# "_userA",
# "_UserB",
# userc
# )
hidden_users = []
for x in out.splitlines()[1:-1]:
try:
x = x.split('"')[1]
except IndexError:
x = x.strip()
hidden_users.append(x)
if self.system:
if self.name not in hidden_users:
cmd = ['defaults', 'write', plist_file, 'HiddenUsersList', '-array-add', self.name]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
                    self.module.fail_json(msg='Cannot add user "%s" to hidden user list.' % self.name, err=err, out=out, rc=rc)
return 0
else:
if self.name in hidden_users:
del (hidden_users[hidden_users.index(self.name)])
cmd = ['defaults', 'write', plist_file, 'HiddenUsersList', '-array'] + hidden_users
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot remove user "%s" from hidden user list.' % self.name, err=err, out=out, rc=rc)
return 0
def user_exists(self):
        '''Check if SELF.NAME is a known user on the system.'''
cmd = self._get_dscl()
cmd += ['-list', '/Users/%s' % self.name]
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
return rc == 0
def remove_user(self):
'''Delete SELF.NAME. If SELF.FORCE is true, remove its home directory.'''
info = self.user_info()
cmd = self._get_dscl()
cmd += ['-delete', '/Users/%s' % self.name]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot delete user "%s".' % self.name, err=err, out=out, rc=rc)
if self.force:
if os.path.exists(info[5]):
shutil.rmtree(info[5])
out += "Removed %s" % info[5]
return (rc, out, err)
    def create_user(self, command_name='dscl'):
        cmd = self._get_dscl()
        cmd += ['-create', '/Users/%s' % self.name]
        (rc, out, err) = self.execute_command(cmd)
        if rc != 0:
            self.module.fail_json(msg='Cannot create user "%s".' % self.name, err=err, out=out, rc=rc)
        self._make_group_numerical()
        if self.uid is None:
            self.uid = str(self._get_next_uid(self.system))
        # Homedir is not created by default
        if self.create_home:
            if self.home is None:
                self.home = '/Users/%s' % self.name
            if not self.module.check_mode:
                if not os.path.exists(self.home):
                    os.makedirs(self.home)
                self.chown_homedir(int(self.uid), int(self.group), self.home)
        # dscl sets shell to /usr/bin/false when UserShell is not specified
        # so set the shell to /bin/bash when the user is not a system user
        if not self.system and self.shell is None:
            self.shell = '/bin/bash'
        for field in self.fields:
            if field[0] in self.__dict__ and self.__dict__[field[0]]:
                cmd = self._get_dscl()
                cmd += ['-create', '/Users/%s' % self.name, field[1], self.__dict__[field[0]]]
                (rc, _out, _err) = self.execute_command(cmd)
                if rc != 0:
                    self.module.fail_json(msg='Cannot add property "%s" to user "%s".' % (field[0], self.name), err=err, out=out, rc=rc)
                out += _out
                err += _err
                if rc != 0:
                    return (rc, out, err)
        (rc, _out, _err) = self._change_user_password()
        out += _out
        err += _err
        self._update_system_user()
        # here we don't care about change status since it is a creation,
        # thus changed is always true.
        if self.groups:
            (rc, _out, _err, changed) = self._modify_group()
            out += _out
            err += _err
        return (rc, out, err)
def modify_user(self):
changed = None
out = ''
err = ''
if self.group:
self._make_group_numerical()
for field in self.fields:
if field[0] in self.__dict__ and self.__dict__[field[0]]:
current = self._get_user_property(field[1])
if current is None or current != self.__dict__[field[0]]:
cmd = self._get_dscl()
cmd += ['-create', '/Users/%s' % self.name, field[1], self.__dict__[field[0]]]
                    (rc, _out, _err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(
msg='Cannot update property "%s" for user "%s".'
% (field[0], self.name), err=err, out=out, rc=rc)
changed = rc
out += _out
err += _err
if self.update_password == 'always' and self.password is not None:
            (rc, _out, _err) = self._change_user_password()
out += _out
err += _err
changed = rc
if self.groups:
(rc, _out, _err, _changed) = self._modify_group()
out += _out
err += _err
if _changed is True:
changed = rc
rc = self._update_system_user()
if rc == 0:
changed = rc
return (changed, out, err)
class AIX(User):
"""
    This is an AIX User manipulation class.
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
- parse_shadow_file()
"""
platform = 'AIX'
distribution = None
SHADOWFILE = '/etc/security/passwd'
def remove_user(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def create_user_useradd(self, command_name='useradd'):
cmd = [self.module.get_bin_path(command_name, True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
# set password with chpasswd
if self.password is not None:
cmd = []
cmd.append(self.module.get_bin_path('chpasswd', True))
cmd.append('-e')
cmd.append('-c')
self.execute_command(cmd, data="%s:%s" % (self.name, self.password))
return (rc, out, err)
def modify_user_usermod(self):
cmd = [self.module.get_bin_path('usermod', True)]
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
# skip if no changes to be made
if len(cmd) == 1:
(rc, out, err) = (None, '', '')
else:
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
# set password with chpasswd
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd = []
cmd.append(self.module.get_bin_path('chpasswd', True))
cmd.append('-e')
cmd.append('-c')
(rc2, out2, err2) = self.execute_command(cmd, data="%s:%s" % (self.name, self.password))
else:
(rc2, out2, err2) = (None, '', '')
if rc is not None:
return (rc, out + out2, err + err2)
else:
return (rc2, out + out2, err + err2)
def parse_shadow_file(self):
"""Example AIX shadowfile data:
nobody:
password = *
operator1:
password = {ssha512}06$xxxxxxxxxxxx....
lastupdate = 1549558094
test1:
password = *
lastupdate = 1553695126
"""
b_name = to_bytes(self.name)
if os.path.exists(self.SHADOWFILE) and os.access(self.SHADOWFILE, os.R_OK):
with open(self.SHADOWFILE, 'rb') as bf:
b_lines = bf.readlines()
b_passwd_line = b''
b_expires_line = b''
for index, b_line in enumerate(b_lines):
# Get password and lastupdate lines which come after the username
if b_line.startswith(b'%s:' % b_name):
b_passwd_line = b_lines[index + 1]
b_expires_line = b_lines[index + 2]
break
# Sanity check the lines because sometimes both are not present
if b' = ' in b_passwd_line:
b_passwd = b_passwd_line.split(b' = ', 1)[-1].strip()
else:
b_passwd = b''
if b' = ' in b_expires_line:
b_expires = b_expires_line.split(b' = ', 1)[-1].strip()
else:
b_expires = b''
passwd = to_native(b_passwd)
expires = to_native(b_expires) or -1
return passwd, expires
class HPUX(User):
"""
    This is an HP-UX User manipulation class.
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'HP-UX'
distribution = None
SHADOWFILE = '/etc/shadow'
def create_user(self):
cmd = ['/usr/sam/lbin/useradd.sam']
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.password is not None:
cmd.append('-p')
cmd.append(self.password)
if self.create_home:
cmd.append('-m')
else:
cmd.append('-M')
if self.system:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def remove_user(self):
cmd = ['/usr/sam/lbin/userdel.sam']
if self.force:
cmd.append('-F')
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def modify_user(self):
cmd = ['/usr/sam/lbin/usermod.sam']
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set(remove_existing=False)
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
new_groups = groups
if self.append:
new_groups = groups | set(current_groups)
cmd.append(','.join(new_groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
cmd.append('-d')
cmd.append(self.home)
if self.move_home:
cmd.append('-m')
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd.append('-F')
cmd.append('-p')
cmd.append(self.password)
# skip if no changes to be made
if len(cmd) == 1:
return (None, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
class BusyBox(User):
"""
This is the BusyBox class for use on systems that have adduser, deluser,
and delgroup commands. It overrides the following methods:
- create_user()
- remove_user()
- modify_user()
"""
def create_user(self):
cmd = [self.module.get_bin_path('adduser', True)]
cmd.append('-D')
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg='Group {0} does not exist'.format(self.group))
cmd.append('-G')
cmd.append(self.group)
if self.comment is not None:
cmd.append('-g')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-h')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if not self.create_home:
cmd.append('-H')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.system:
cmd.append('-S')
cmd.append(self.name)
rc, out, err = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
if self.password is not None:
cmd = [self.module.get_bin_path('chpasswd', True)]
cmd.append('--encrypted')
data = '{name}:{password}'.format(name=self.name, password=self.password)
rc, out, err = self.execute_command(cmd, data=data)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# Add to additional groups
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
add_cmd_bin = self.module.get_bin_path('adduser', True)
for group in groups:
cmd = [add_cmd_bin, self.name, group]
rc, out, err = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
return rc, out, err
def remove_user(self):
cmd = [
self.module.get_bin_path('deluser', True),
self.name
]
if self.remove:
cmd.append('--remove-home')
return self.execute_command(cmd)
def modify_user(self):
current_groups = self.user_group_membership()
groups = []
rc = None
out = ''
err = ''
info = self.user_info()
add_cmd_bin = self.module.get_bin_path('adduser', True)
remove_cmd_bin = self.module.get_bin_path('delgroup', True)
# Manage group membership
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
for g in groups:
if g in group_diff:
add_cmd = [add_cmd_bin, self.name, g]
rc, out, err = self.execute_command(add_cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
for g in group_diff:
if g not in groups and not self.append:
remove_cmd = [remove_cmd_bin, self.name, g]
rc, out, err = self.execute_command(remove_cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# Manage password
if self.password is not None:
if info[1] != self.password:
cmd = [self.module.get_bin_path('chpasswd', True)]
cmd.append('--encrypted')
data = '{name}:{password}'.format(name=self.name, password=self.password)
rc, out, err = self.execute_command(cmd, data=data)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
return rc, out, err
class Alpine(BusyBox):
"""
This is the Alpine User manipulation class. It inherits the BusyBox class
behaviors such as using adduser and deluser commands.
"""
platform = 'Linux'
distribution = 'Alpine'
def main():
ssh_defaults = dict(
bits=0,
type='rsa',
passphrase=None,
comment='ansible-generated on %s' % socket.gethostname()
)
module = AnsibleModule(
argument_spec=dict(
state=dict(type='str', default='present', choices=['absent', 'present']),
name=dict(type='str', required=True, aliases=['user']),
uid=dict(type='int'),
non_unique=dict(type='bool', default=False),
group=dict(type='str'),
groups=dict(type='list'),
comment=dict(type='str'),
home=dict(type='path'),
shell=dict(type='str'),
password=dict(type='str', no_log=True),
login_class=dict(type='str'),
# following options are specific to macOS
hidden=dict(type='bool'),
# following options are specific to selinux
seuser=dict(type='str'),
# following options are specific to userdel
force=dict(type='bool', default=False),
remove=dict(type='bool', default=False),
# following options are specific to useradd
create_home=dict(type='bool', default=True, aliases=['createhome']),
skeleton=dict(type='str'),
system=dict(type='bool', default=False),
# following options are specific to usermod
move_home=dict(type='bool', default=False),
append=dict(type='bool', default=False),
# following are specific to ssh key generation
generate_ssh_key=dict(type='bool'),
ssh_key_bits=dict(type='int', default=ssh_defaults['bits']),
ssh_key_type=dict(type='str', default=ssh_defaults['type']),
ssh_key_file=dict(type='path'),
ssh_key_comment=dict(type='str', default=ssh_defaults['comment']),
ssh_key_passphrase=dict(type='str', no_log=True),
update_password=dict(type='str', default='always', choices=['always', 'on_create']),
expires=dict(type='float'),
password_lock=dict(type='bool'),
local=dict(type='bool'),
profile=dict(type='str'),
authorization=dict(type='str'),
role=dict(type='str'),
),
supports_check_mode=True,
mutually_exclusive=[
('local', 'groups'),
('local', 'append')
]
)
user = User(module)
user.check_password_encrypted()
module.debug('User instantiated - platform %s' % user.platform)
if user.distribution:
module.debug('User instantiated - distribution %s' % user.distribution)
rc = None
out = ''
err = ''
result = {}
result['name'] = user.name
result['state'] = user.state
if user.state == 'absent':
if user.user_exists():
if module.check_mode:
module.exit_json(changed=True)
(rc, out, err) = user.remove_user()
if rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
result['force'] = user.force
result['remove'] = user.remove
elif user.state == 'present':
if not user.user_exists():
if module.check_mode:
module.exit_json(changed=True)
# Check to see if the provided home path contains parent directories
# that do not exist.
path_needs_parents = False
            if user.home and user.create_home:
                parent = os.path.dirname(user.home)
                if not os.path.isdir(parent):
path_needs_parents = True
(rc, out, err) = user.create_user()
# If the home path had parent directories that needed to be created,
# make sure file permissions are correct in the created home directory.
if path_needs_parents:
info = user.user_info()
if info is not False:
user.chown_homedir(info[2], info[3], user.home)
if module.check_mode:
result['system'] = user.name
else:
result['system'] = user.system
result['create_home'] = user.create_home
else:
# modify user (note: this function is check mode aware)
(rc, out, err) = user.modify_user()
result['append'] = user.append
result['move_home'] = user.move_home
if rc is not None and rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
if user.password is not None:
result['password'] = 'NOT_LOGGING_PASSWORD'
if rc is None:
result['changed'] = False
else:
result['changed'] = True
if out:
result['stdout'] = out
if err:
result['stderr'] = err
if user.user_exists() and user.state == 'present':
info = user.user_info()
if info is False:
result['msg'] = "failed to look up user name: %s" % user.name
result['failed'] = True
result['uid'] = info[2]
result['group'] = info[3]
result['comment'] = info[4]
result['home'] = info[5]
result['shell'] = info[6]
if user.groups is not None:
result['groups'] = user.groups
# handle missing homedirs
info = user.user_info()
if user.home is None:
user.home = info[5]
if not os.path.exists(user.home) and user.create_home:
if not module.check_mode:
user.create_homedir(user.home)
user.chown_homedir(info[2], info[3], user.home)
result['changed'] = True
# deal with ssh key
if user.sshkeygen:
# generate ssh key (note: this function is check mode aware)
(rc, out, err) = user.ssh_key_gen()
if rc is not None and rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
if rc == 0:
result['changed'] = True
(rc, out, err) = user.ssh_key_fingerprint()
if rc == 0:
result['ssh_fingerprint'] = out.strip()
else:
result['ssh_fingerprint'] = err.strip()
result['ssh_key_file'] = user.get_ssh_key_path()
result['ssh_public_key'] = user.get_ssh_public_key()
module.exit_json(**result)
# import module snippets
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 60,307 |
user: createhome=no fails with [Errno 2] No such file or directory
|
##### SUMMARY
I am creating a system user account for Jenkins workers, and I want to create the home directory myself, after the user is already created. Ansible 2.8 let me do this; the current devel branch fails.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
user
##### ANSIBLE VERSION
```
ansible 2.9.0.dev0
config file = /home/mg/src/deployments/provisioning/ansible.cfg
configured module search path = [u'/home/mg/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/mg/src/ansible/lib/ansible
executable location = /home/mg/src/ansible/bin/ansible
python version = 2.7.16 (default, Apr 6 2019, 01:42:57) [GCC 8.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
ACTION_WARNINGS(/home/mg/src/deployments/provisioning/ansible.cfg) = False
CACHE_PLUGIN(/home/mg/src/deployments/provisioning/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/home/mg/src/deployments/provisioning/ansible.cfg) = .cache/facts/
CACHE_PLUGIN_TIMEOUT(/home/mg/src/deployments/provisioning/ansible.cfg) = 86400
DEFAULT_CALLBACK_WHITELIST(/home/mg/src/deployments/provisioning/ansible.cfg) = [u'fancy_html']
DEFAULT_FORKS(/home/mg/src/deployments/provisioning/ansible.cfg) = 15
DEFAULT_GATHERING(/home/mg/src/deployments/provisioning/ansible.cfg) = smart
DEFAULT_HOST_LIST(/home/mg/src/deployments/provisioning/ansible.cfg) = [u'/home/mg/src/deployments/provisioning/inventory']
DEFAULT_LOG_PATH(/home/mg/src/deployments/provisioning/ansible.cfg) = /home/mg/src/deployments/provisioning/.cache/ansible.log
DEFAULT_REMOTE_USER(/home/mg/src/deployments/provisioning/ansible.cfg) = root
DEFAULT_STDOUT_CALLBACK(/home/mg/src/deployments/provisioning/ansible.cfg) = yaml
DEFAULT_VAULT_PASSWORD_FILE(/home/mg/src/deployments/provisioning/ansible.cfg) = /home/mg/src/deployments/provisioning/askpass.py
RETRY_FILES_ENABLED(/home/mg/src/deployments/provisioning/ansible.cfg) = False
```
##### OS / ENVIRONMENT
Ubuntu 19.04 on the controller, Ubuntu 16.04 and 18.04 on the targets.
##### STEPS TO REPRODUCE
```yaml
- hosts: all
tasks:
- name: create a user for the jenkins worker
user: name=jenkins-worker home=/var/lib/jenkins-worker createhome=no shell=/bin/bash system=yes
tags: [ jenkins, user ]
- name: create /var/lib/jenkins-worker
file: dest=/var/lib/jenkins-worker state=directory owner=jenkins-worker group=jenkins-worker
tags: [ jenkins, user ]
```
##### EXPECTED RESULTS
I expect there to be a /var/lib/jenkins-worker, owned by jenkins-worker:jenkins-worker, with no dotfiles inside it.
##### ACTUAL RESULTS
<!--- Paste verbatim command output between quotes -->
```
TASK [jenkins-worker : create a user for the jenkins worker] ******************************************************************************************************************************************************
fatal: [xenial]: FAILED! => changed=false
msg: '[Errno 2] No such file or directory: ''/var/lib/jenkins-worker'''
fatal: [bionic]: FAILED! => changed=false
msg: '[Errno 2] No such file or directory: ''/var/lib/jenkins-worker'''
```
What's even more fun is that when I re-ran the playbook, it succeeded, and I don't understand what even is going on any more.
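One hypothesis for the transient failure (purely an assumption on my part, not verified against the module source) is a mix-up between `os.path.basename` and `os.path.dirname` when the module decides whether the home directory's parent exists; the two calls answer very different questions:

```python
import os.path

home = '/var/lib/jenkins-worker'

# basename() is only the final path component -- a relative name,
# so an os.path.isdir() check on it inspects the current working
# directory instead of the filesystem location we care about.
print(os.path.basename(home))  # jenkins-worker

# dirname() is the parent directory, which is what a
# "do the parent directories exist?" check should use.
print(os.path.dirname(home))   # /var/lib
```

If such a check used the relative basename, its result would depend on the process's working directory, which might also explain why a re-run behaves differently.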
|
https://github.com/ansible/ansible/issues/60307
|
https://github.com/ansible/ansible/pull/60310
|
13403b36889a24740178908d3f816d80545b06ad
|
c71622b31a52c44f171a4bb9da1331939ad7aa60
| 2019-08-09T10:43:06Z |
python
| 2019-08-12T14:37:45Z |
test/integration/targets/user/tasks/main.yml
|
# Test code for the user module.
# (c) 2017, James Tanner <[email protected]>
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
## user add
- name: remove the test user
user:
name: ansibulluser
state: absent
- name: try to create a user
user:
name: ansibulluser
state: present
register: user_test0_0
- name: create the user again
user:
name: ansibulluser
state: present
register: user_test0_1
- debug:
var: user_test0
verbosity: 2
- name: make a list of users
script: userlist.sh {{ ansible_facts.distribution }}
register: user_names
- debug:
var: user_names
verbosity: 2
- name: validate results for testcase 0
assert:
that:
- user_test0_0 is changed
- user_test0_1 is not changed
- '"ansibulluser" in user_names.stdout_lines'
# test user add with password
- name: add an encrypted password for user
user:
name: ansibulluser
password: "$6$rounds=656000$TT4O7jz2M57npccl$33LF6FcUMSW11qrESXL1HX0BS.bsiT6aenFLLiVpsQh6hDtI9pJh5iY7x8J7ePkN4fP8hmElidHXaeD51pbGS."
state: present
update_password: always
register: test_user_encrypt0
- name: there should not be warnings
assert:
that: "'warnings' not in test_user_encrypt0"
- block:
- name: add a plaintext password for user
user:
name: ansibulluser
password: "plaintextpassword"
state: present
update_password: always
register: test_user_encrypt1
- name: there should be a warning complaining that the password is plaintext
assert:
that: "'warnings' in test_user_encrypt1"
- name: add an invalid hashed password
user:
name: ansibulluser
password: "$6$rounds=656000$tgK3gYTyRLUmhyv2$lAFrYUQwn7E6VsjPOwQwoSx30lmpiU9r/E0Al7tzKrR9mkodcMEZGe9OXD0H/clOn6qdsUnaL4zefy5fG+++++"
state: present
update_password: always
register: test_user_encrypt2
- name: there should be a warning complaining about the character set of the password
assert:
that: "'warnings' in test_user_encrypt2"
- name: change password to '!'
user:
name: ansibulluser
password: '!'
register: test_user_encrypt3
- name: change password to '*'
user:
name: ansibulluser
password: '*'
register: test_user_encrypt4
- name: there should be no warnings when setting the password to '!' and '*'
assert:
that:
- "'warnings' not in test_user_encrypt3"
- "'warnings' not in test_user_encrypt4"
when: ansible_facts.system != 'Darwin'
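The long string passed to `password` in the tasks above is a SHA-512 crypt hash (the `$6$` scheme). As a rough sketch of why the module can warn about plaintext values — the user module's actual detection logic may be stricter — a hash in that scheme is easy to distinguish from a plaintext password:

```python
import re

# Loose sketch of a SHA-512 crypt ($6$) format check; illustrative only,
# not the user module's actual plaintext-password detection.
SHA512_CRYPT = re.compile(r'^\$6\$(rounds=\d+\$)?[./A-Za-z0-9]{1,16}\$[./A-Za-z0-9]+$')

def looks_hashed(value):
    return SHA512_CRYPT.match(value) is not None

# The hash used by the tasks above passes; a plaintext value does not
assert looks_hashed('$6$rounds=656000$TT4O7jz2M57npccl$33LF6FcUMSW11qrESXL1HX0BS.bsiT6aenFLLiVpsQh6hDtI9pJh5iY7x8J7ePkN4fP8hmElidHXaeD51pbGS.')
assert not looks_hashed('plaintextpassword')
```

Note that the deliberately invalid hash in the third testcase ends in `+` characters, which fall outside the crypt alphabet and would also fail this check.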
# https://github.com/ansible/ansible/issues/42484
# Skipping macOS for now since there is a bug when changing home directory
- block:
- name: create user specifying home
user:
name: ansibulluser
state: present
home: "{{ user_home_prefix[ansible_facts.system] }}/ansibulluser"
register: user_test3_0
- name: create user again specifying home
user:
name: ansibulluser
state: present
home: "{{ user_home_prefix[ansible_facts.system] }}/ansibulluser"
register: user_test3_1
- name: change user home
user:
name: ansibulluser
state: present
home: "{{ user_home_prefix[ansible_facts.system] }}/ansibulluser-mod"
register: user_test3_2
- name: change user home back
user:
name: ansibulluser
state: present
home: "{{ user_home_prefix[ansible_facts.system] }}/ansibulluser"
register: user_test3_3
- name: validate results for testcase 3
assert:
that:
- user_test3_0 is not changed
- user_test3_1 is not changed
- user_test3_2 is changed
- user_test3_3 is changed
when: ansible_facts.system != 'Darwin'
# https://github.com/ansible/ansible/issues/41393
# Create a new user account with a path that has parent directories that do not exist
- name: Create user with home path that has parents that do not exist
user:
name: ansibulluser2
state: present
home: "{{ user_home_prefix[ansible_facts.system] }}/in2deep/ansibulluser2"
register: create_home_with_no_parent_1
- name: Create user with home path that has parents that do not exist again
user:
name: ansibulluser2
state: present
home: "{{ user_home_prefix[ansible_facts.system] }}/in2deep/ansibulluser2"
register: create_home_with_no_parent_2
- name: Check the created home directory
stat:
path: "{{ user_home_prefix[ansible_facts.system] }}/in2deep/ansibulluser2"
register: home_with_no_parent_3
- name: Ensure user with non-existing parent paths was created successfully
assert:
that:
- create_home_with_no_parent_1 is changed
- create_home_with_no_parent_1.home == user_home_prefix[ansible_facts.system] ~ '/in2deep/ansibulluser2'
- create_home_with_no_parent_2 is not changed
- home_with_no_parent_3.stat.uid == create_home_with_no_parent_1.uid
- home_with_no_parent_3.stat.gr_name == default_user_group[ansible_facts.distribution] | default('ansibulluser2')
- name: Cleanup test account
user:
name: ansibulluser2
home: "{{ user_home_prefix[ansible_facts.system] }}/in2deep/ansibulluser2"
state: absent
remove: yes
- name: Remove testing dir
file:
path: "{{ user_home_prefix[ansible_facts.system] }}/in2deep/"
state: absent
## user check
- name: run existing user check tests
user:
name: "{{ user_names.stdout_lines | random }}"
state: present
create_home: no
loop: "{{ range(1, 5+1) | list }}"
register: user_test1
- debug:
var: user_test1
verbosity: 2
- name: validate results for testcase 1
assert:
that:
- user_test1.results is defined
- user_test1.results | length == 5
- name: validate changed results for testcase 1
assert:
that:
- "user_test1.results[0] is not changed"
- "user_test1.results[1] is not changed"
- "user_test1.results[2] is not changed"
- "user_test1.results[3] is not changed"
- "user_test1.results[4] is not changed"
- "user_test1.results[0]['state'] == 'present'"
- "user_test1.results[1]['state'] == 'present'"
- "user_test1.results[2]['state'] == 'present'"
- "user_test1.results[3]['state'] == 'present'"
- "user_test1.results[4]['state'] == 'present'"
## user remove
- name: try to delete the user
user:
name: ansibulluser
state: absent
force: true
register: user_test2
- name: make a new list of users
script: userlist.sh {{ ansible_facts.distribution }}
register: user_names2
- debug:
var: user_names2
verbosity: 2
- name: validate results for testcase 2
assert:
that:
- '"ansibulluser" not in user_names2.stdout_lines'
## create user without home and test fallback home dir create
- block:
- name: create the user
user:
name: ansibulluser
- name: delete the user and home dir
user:
name: ansibulluser
state: absent
force: true
remove: true
- name: create the user without home
user:
name: ansibulluser
create_home: no
- name: create the user home dir
user:
name: ansibulluser
register: user_create_home_fallback
- name: stat home dir
stat:
path: '{{ user_create_home_fallback.home }}'
register: user_create_home_fallback_dir
- name: read UMASK from /etc/login.defs and return mode
shell: |
import re
import os
# Fall back to the process umask if /etc/login.defs is missing
# or contains no UMASK entry
umask = os.umask(0)
os.umask(umask)
try:
    for line in open('/etc/login.defs').readlines():
        m = re.match(r'^UMASK\s+(\d+)$', line)
        if m:
            umask = int(m.group(1), 8)
except EnvironmentError:
    pass
mode = oct(0o777 & ~umask)
print(str(mode).replace('o', ''))
args:
executable: "{{ ansible_python_interpreter }}"
register: user_login_defs_umask
- name: validate that user home dir is created
assert:
that:
- user_create_home_fallback is changed
- user_create_home_fallback_dir.stat.exists
- user_create_home_fallback_dir.stat.isdir
- user_create_home_fallback_dir.stat.pw_name == 'ansibulluser'
- user_create_home_fallback_dir.stat.mode == user_login_defs_umask.stdout
when: ansible_facts.system != 'Darwin'
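The mode printed by the login.defs task works by masking the fully permissive `0o777` bits with the complement of the umask; for the common umask `022` that yields `0755`:

```python
# Mirror of the mode computation in the login.defs task above
umask = 0o022
mode = oct(0o777 & ~umask)
assert str(mode).replace('o', '') == '0755'

# A umask of 027 additionally removes group-write and all world permissions
assert oct(0o777 & ~0o027).replace('o', '') == '0750'
```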
- block:
- name: create non-system user on macOS to test the shell is set to /bin/bash
user:
name: macosuser
register: macosuser_output
- name: validate the shell is set to /bin/bash
assert:
that:
- 'macosuser_output.shell == "/bin/bash"'
- name: cleanup
user:
name: macosuser
state: absent
- name: create system user on macos to test the shell is set to /usr/bin/false
user:
name: macosuser
system: yes
register: macosuser_output
- name: validate the shell is set to /usr/bin/false
assert:
that:
- 'macosuser_output.shell == "/usr/bin/false"'
- name: cleanup
user:
name: macosuser
state: absent
- name: create non-system user on macos and set the shell to /bin/sh
user:
name: macosuser
shell: /bin/sh
register: macosuser_output
- name: validate the shell is set to /bin/sh
assert:
that:
- 'macosuser_output.shell == "/bin/sh"'
- name: cleanup
user:
name: macosuser
state: absent
when: ansible_facts.distribution == "MacOSX"
## user expires
# Date is March 3, 2050
- name: Set user expiration
user:
name: ansibulluser
state: present
expires: 2529881062
register: user_test_expires1
tags:
- timezone
- name: Set user expiration again to ensure no change is made
user:
name: ansibulluser
state: present
expires: 2529881062
register: user_test_expires2
tags:
- timezone
- name: Ensure that account with expiration was created and did not change on subsequent run
assert:
that:
- user_test_expires1 is changed
- user_test_expires2 is not changed
- name: Verify expiration date for Linux
block:
- name: LINUX | Get expiration date for ansibulluser
getent:
database: shadow
key: ansibulluser
- name: LINUX | Ensure proper expiration date was set
assert:
that:
- getent_shadow['ansibulluser'][6] == '29281'
when: ansible_facts.os_family in ['RedHat', 'Debian', 'Suse']
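The `'29281'` asserted above is not arbitrary: `/etc/shadow` stores account expiry as days since the Unix epoch, so it is simply the `expires` epoch value integer-divided by 86400 seconds:

```python
# Expiry epoch used by the tasks above (March 3, 2050)
expires_epoch = 2529881062

# Field 8 of /etc/shadow holds the expiry as days since 1970-01-01
shadow_days = expires_epoch // 86400
assert shadow_days == 29281
```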
- name: Verify expiration date for BSD
block:
- name: BSD | Get expiration date for ansibulluser
shell: 'grep ansibulluser /etc/master.passwd | cut -d: -f 7'
changed_when: no
register: bsd_account_expiration
- name: BSD | Ensure proper expiration date was set
assert:
that:
- bsd_account_expiration.stdout == '2529881062'
when: ansible_facts.os_family == 'FreeBSD'
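The BSD check shells out to `grep` and `cut`; the same extraction can be sketched in Python against a sample `master.passwd` line (the field layout is `name:passwd:uid:gid:class:change:expire:gecos:home:shell`; the sample entry below is illustrative):

```python
# Hypothetical master.passwd entry for ansibulluser
line = 'ansibulluser:*:1001:1001::0:2529881062:Ansible test user:/home/ansibulluser:/bin/sh'

# `cut -d: -f 7` is 1-indexed, so the expire field is index 6 here
fields = line.split(':')
assert fields[6] == '2529881062'
```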
- name: Change timezone
timezone:
name: America/Denver
register: original_timezone
tags:
- timezone
- name: Change system timezone to make sure expiration comparison works properly
block:
- name: Create user with expiration again to ensure no change is made in a new timezone
user:
name: ansibulluser
state: present
expires: 2529881062
register: user_test_different_tz
tags:
- timezone
- name: Ensure that no change was reported
assert:
that:
- user_test_different_tz is not changed
tags:
- timezone
always:
- name: Restore original timezone - {{ original_timezone.diff.before.name }}
timezone:
name: "{{ original_timezone.diff.before.name }}"
when: original_timezone.diff.before.name != "n/a"
tags:
- timezone
- name: Restore original timezone when n/a
file:
path: /etc/sysconfig/clock
state: absent
when:
- original_timezone.diff.before.name == "n/a"
- "'/etc/sysconfig/clock' in original_timezone.msg"
tags:
- timezone
- name: Unexpire user
user:
name: ansibulluser
state: present
expires: -1
register: user_test_expires3
- name: Verify expiration removal for Linux
block:
- name: LINUX | Get expiration date for ansibulluser
getent:
database: shadow
key: ansibulluser
- name: LINUX | Ensure proper expiration date was set
assert:
msg: "expiry is supposed to be empty or -1, not {{ getent_shadow['ansibulluser'][6] }}"
that:
- not getent_shadow['ansibulluser'][6] or getent_shadow['ansibulluser'][6] | int < 0
when: ansible_facts.os_family in ['RedHat', 'Debian', 'Suse']
- name: Verify expiration removal for Linux/BSD
block:
- name: Unexpire user again to check for change
user:
name: ansibulluser
state: present
expires: -1
register: user_test_expires4
- name: Ensure first expiration reported a change and second did not
assert:
msg: The second run of the expiration removal task reported a change when it should not
that:
- user_test_expires3 is changed
- user_test_expires4 is not changed
when: ansible_facts.os_family in ['RedHat', 'Debian', 'Suse', 'FreeBSD']
- name: Verify expiration removal for BSD
block:
- name: BSD | Get expiration date for ansibulluser
shell: 'grep ansibulluser /etc/master.passwd | cut -d: -f 7'
changed_when: no
register: bsd_account_expiration
- name: BSD | Ensure proper expiration date was set
assert:
msg: "expiry is supposed to be '0', not {{ bsd_account_expiration.stdout }}"
that:
- bsd_account_expiration.stdout == '0'
when: ansible_facts.os_family == 'FreeBSD'
# Test setting no expiration when creating a new account
# https://github.com/ansible/ansible/issues/44155
- name: Remove ansibulluser
user:
name: ansibulluser
state: absent
- name: Create user account without expiration
user:
name: ansibulluser
state: present
expires: -1
register: user_test_create_no_expires_1
- name: Create user account without expiration again
user:
name: ansibulluser
state: present
expires: -1
register: user_test_create_no_expires_2
- name: Ensure changes were made appropriately
assert:
msg: Setting 'expires=-1' resulted in incorrect changes
that:
- user_test_create_no_expires_1 is changed
- user_test_create_no_expires_2 is not changed
- name: Verify expiration removal for Linux
block:
- name: LINUX | Get expiration date for ansibulluser
getent:
database: shadow
key: ansibulluser
- name: LINUX | Ensure proper expiration date was set
assert:
msg: "expiry is supposed to be empty or -1, not {{ getent_shadow['ansibulluser'][6] }}"
that:
- not getent_shadow['ansibulluser'][6] or getent_shadow['ansibulluser'][6] | int < 0
when: ansible_facts.os_family in ['RedHat', 'Debian', 'Suse']
- name: Verify expiration removal for BSD
block:
- name: BSD | Get expiration date for ansibulluser
shell: 'grep ansibulluser /etc/master.passwd | cut -d: -f 7'
changed_when: no
register: bsd_account_expiration
- name: BSD | Ensure proper expiration date was set
assert:
msg: "expiry is supposed to be '0', not {{ bsd_account_expiration.stdout }}"
that:
- bsd_account_expiration.stdout == '0'
when: ansible_facts.os_family == 'FreeBSD'
# Test setting epoch 0 expiration when creating a new account, then removing the expiry
# https://github.com/ansible/ansible/issues/47114
- name: Remove ansibulluser
user:
name: ansibulluser
state: absent
- name: Create user account with epoch 0 expiration
user:
name: ansibulluser
state: present
expires: 0
register: user_test_expires_create0_1
- name: Create user account with epoch 0 expiration again
user:
name: ansibulluser
state: present
expires: 0
register: user_test_expires_create0_2
- name: Change the user account to remove the expiry time
user:
name: ansibulluser
expires: -1
register: user_test_remove_expires_1
- name: Change the user account to remove the expiry time again
user:
name: ansibulluser
expires: -1
register: user_test_remove_expires_2
- name: Verify expiration removal for Linux
block:
- name: LINUX | Ensure changes were made appropriately
assert:
msg: Creating an account with 'expires=0' then removing that expiration with 'expires=-1' resulted in incorrect changes
that:
- user_test_expires_create0_1 is changed
- user_test_expires_create0_2 is not changed
- user_test_remove_expires_1 is changed
- user_test_remove_expires_2 is not changed
- name: LINUX | Get expiration date for ansibulluser
getent:
database: shadow
key: ansibulluser
- name: LINUX | Ensure proper expiration date was set
assert:
msg: "expiry is supposed to be empty or -1, not {{ getent_shadow['ansibulluser'][6] }}"
that:
- not getent_shadow['ansibulluser'][6] or getent_shadow['ansibulluser'][6] | int < 0
when: ansible_facts.os_family in ['RedHat', 'Debian', 'Suse']
- name: Verify proper expiration behavior for BSD
block:
- name: BSD | Ensure changes were made appropriately
assert:
msg: Creating an account with 'expires=0' then removing that expiration with 'expires=-1' resulted in incorrect changes
that:
- user_test_expires_create0_1 is changed
- user_test_expires_create0_2 is not changed
- user_test_remove_expires_1 is not changed
- user_test_remove_expires_2 is not changed
when: ansible_facts.os_family == 'FreeBSD'
# Test expiration with a very large negative number. This should have the same
# result as setting -1.
- name: Set expiration date using very long negative number
user:
name: ansibulluser
state: present
expires: -2529881062
register: user_test_expires5
- name: Ensure no change was made
assert:
that:
- user_test_expires5 is not changed
- name: Verify expiration removal for Linux
block:
- name: LINUX | Get expiration date for ansibulluser
getent:
database: shadow
key: ansibulluser
- name: LINUX | Ensure proper expiration date was set
assert:
msg: "expiry is supposed to be empty or -1, not {{ getent_shadow['ansibulluser'][6] }}"
that:
- not getent_shadow['ansibulluser'][6] or getent_shadow['ansibulluser'][6] | int < 0
when: ansible_facts.os_family in ['RedHat', 'Debian', 'Suse']
- name: Verify expiration removal for BSD
block:
- name: BSD | Get expiration date for ansibulluser
shell: 'grep ansibulluser /etc/master.passwd | cut -d: -f 7'
changed_when: no
register: bsd_account_expiration
- name: BSD | Ensure proper expiration date was set
assert:
msg: "expiry is supposed to be '0', not {{ bsd_account_expiration.stdout }}"
that:
- bsd_account_expiration.stdout == '0'
when: ansible_facts.os_family == 'FreeBSD'
## shadow backup
- block:
- name: Create a user to test shadow file backup
user:
name: ansibulluser
state: present
register: result
- name: Find shadow backup files
find:
path: /etc
patterns: 'shadow\..*~$'
use_regex: yes
register: shadow_backups
- name: Assert that a backup file was created
assert:
that:
- result.bakup
- shadow_backups.files | map(attribute='path') | list | length > 0
when: ansible_facts.os_family == 'Solaris'
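The `find` pattern above matches the backup copies the user module leaves next to the Solaris shadow file, i.e. names of the form `shadow.<suffix>~` (the example filenames below are illustrative):

```python
import re

# Same pattern the find task uses with use_regex: yes
backup = re.compile(r'shadow\..*~$')

assert backup.search('shadow.1565625641~')
assert not backup.search('shadow')
assert not backup.search('shadow.backup')  # missing the trailing '~'
```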
# Test creating ssh key with passphrase
- name: Remove ansibulluser
user:
name: ansibulluser
state: absent
- name: Create user with ssh key
user:
name: ansibulluser
state: present
generate_ssh_key: yes
ssh_key_file: "{{ output_dir }}/test_id_rsa"
ssh_key_passphrase: secret_passphrase
- name: Unlock ssh key
command: "ssh-keygen -y -f {{ output_dir }}/test_id_rsa -P secret_passphrase"
register: result
- name: Check that ssh key was unlocked successfully
assert:
that:
- result.rc == 0
- name: Clean ssh key
file:
path: "{{ output_dir }}/test_id_rsa"
state: absent
when: ansible_os_family == 'FreeBSD'
## password lock
- block:
- name: Set password for ansibulluser
user:
name: ansibulluser
password: "$6$rounds=656000$TT4O7jz2M57npccl$33LF6FcUMSW11qrESXL1HX0BS.bsiT6aenFLLiVpsQh6hDtI9pJh5iY7x8J7ePkN4fP8hmElidHXaeD51pbGS."
- name: Lock account
user:
name: ansibulluser
password_lock: yes
register: password_lock_1
- name: Lock account again
user:
name: ansibulluser
password_lock: yes
register: password_lock_2
- name: Unlock account
user:
name: ansibulluser
password_lock: no
register: password_lock_3
- name: Unlock account again
user:
name: ansibulluser
password_lock: no
register: password_lock_4
- name: Ensure task reported changes appropriately
assert:
msg: The password_lock tasks did not make changes appropriately
that:
- password_lock_1 is changed
- password_lock_2 is not changed
- password_lock_3 is changed
- password_lock_4 is not changed
- name: Lock account
user:
name: ansibulluser
password_lock: yes
- name: Verify account lock for BSD
block:
- name: BSD | Get account status
shell: "{{ status_command[ansible_facts['system']] }}"
register: account_status_locked
- name: Unlock account
user:
name: ansibulluser
password_lock: no
- name: BSD | Get account status
shell: "{{ status_command[ansible_facts['system']] }}"
register: account_status_unlocked
- name: FreeBSD | Ensure account is locked
assert:
that:
- "'LOCKED' in account_status_locked.stdout"
- "'LOCKED' not in account_status_unlocked.stdout"
when: ansible_facts['system'] == 'FreeBSD'
when: ansible_facts['system'] in ['FreeBSD', 'OpenBSD']
- name: Verify account lock for Linux
block:
- name: LINUX | Get account status
getent:
database: shadow
key: ansibulluser
- name: LINUX | Ensure account is locked
assert:
that:
- getent_shadow['ansibulluser'][0].startswith('!')
- name: Unlock account
user:
name: ansibulluser
password_lock: no
- name: LINUX | Get account status
getent:
database: shadow
key: ansibulluser
- name: LINUX | Ensure account is unlocked
assert:
that:
- not getent_shadow['ansibulluser'][0].startswith('!')
when: ansible_facts['system'] == 'Linux'
always:
- name: Unlock account
user:
name: ansibulluser
password_lock: no
when: ansible_facts['system'] in ['FreeBSD', 'OpenBSD', 'Linux']
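The Linux assertions in the lock/unlock block rely on the convention that locking a password prepends `!` to the hashed password field in `/etc/shadow`; a minimal sketch (sample hashes are illustrative):

```python
def is_locked(shadow_hash):
    # usermod -L / passwd -l prepend '!' to the hashed password on Linux
    return shadow_hash.startswith('!')

assert is_locked('!$6$salt$somehash')
assert not is_locked('$6$salt$somehash')
```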
## Check local mode
# Even if we don't have a system that is bound to a directory, it's useful
# to run with local: true to exercise the code path that reads through the local
# user database file.
# https://github.com/ansible/ansible/issues/50947
- name: Create /etc/gshadow
file:
path: /etc/gshadow
state: touch
when: ansible_facts.os_family == 'Suse'
tags:
- user_test_local_mode
- name: Create /etc/libuser.conf
file:
path: /etc/libuser.conf
state: touch
when:
- ansible_facts.distribution == 'Ubuntu'
- ansible_facts.distribution_major_version is version_compare('16', '==')
tags:
- user_test_local_mode
- name: Ensure luseradd is present
action: "{{ ansible_facts.pkg_mgr }}"
args:
name: libuser
state: present
when: ansible_facts.system in ['Linux']
tags:
- user_test_local_mode
- name: Create local account that already exists to check for warning
user:
name: root
local: yes
register: local_existing
tags:
- user_test_local_mode
- name: Create local_ansibulluser
user:
name: local_ansibulluser
state: present
local: yes
register: local_user_test_1
tags:
- user_test_local_mode
- name: Create local_ansibulluser again
user:
name: local_ansibulluser
state: present
local: yes
register: local_user_test_2
tags:
- user_test_local_mode
- name: Remove local_ansibulluser
user:
name: local_ansibulluser
state: absent
remove: yes
local: yes
register: local_user_test_remove_1
tags:
- user_test_local_mode
- name: Remove local_ansibulluser again
user:
name: local_ansibulluser
state: absent
remove: yes
local: yes
register: local_user_test_remove_2
tags:
- user_test_local_mode
- name: Create test group
group:
name: testgroup
tags:
- user_test_local_mode
- name: Create local_ansibulluser with groups
user:
name: local_ansibulluser
state: present
local: yes
groups: testgroup
register: local_user_test_3
ignore_errors: yes
tags:
- user_test_local_mode
- name: Append groups for local_ansibulluser
user:
name: local_ansibulluser
state: present
local: yes
append: yes
register: local_user_test_4
ignore_errors: yes
tags:
- user_test_local_mode
- name: Ensure local user accounts were created and removed properly
assert:
that:
- local_user_test_1 is changed
- local_user_test_2 is not changed
- local_user_test_3 is failed
- "local_user_test_3['msg'] is search('parameters are mutually exclusive: groups|local')"
- local_user_test_4 is failed
- "local_user_test_4['msg'] is search('parameters are mutually exclusive: groups|append')"
- local_user_test_remove_1 is changed
- local_user_test_remove_2 is not changed
tags:
- user_test_local_mode
- name: Ensure warnings were displayed properly
assert:
that:
- local_user_test_1['warnings'] | length > 0
- local_user_test_1['warnings'] | first is search('The local user account may already exist')
- local_existing['warnings'] is not defined
when: ansible_facts.system in ['Linux']
tags:
- user_test_local_mode
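The error messages asserted in the local-mode testcases come from argument validation; as a rough sketch of that mutual-exclusion rule (the function name and return convention are illustrative, not the module's actual helper — the real module enforces this through its argument spec):

```python
def validate_local_params(local=False, groups=None, append=False):
    """Return an error message if the parameter combination is invalid, else None."""
    if local and groups:
        return 'parameters are mutually exclusive: groups|local'
    if local and append:
        return 'parameters are mutually exclusive: groups|append'
    return None

assert validate_local_params(local=True, groups=['testgroup']) == 'parameters are mutually exclusive: groups|local'
assert validate_local_params(local=True, append=True) == 'parameters are mutually exclusive: groups|append'
assert validate_local_params(local=True) is None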
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,189 |
ovirt_vm: Unable to find template shared among multiple clusters in the same datacenter
|
##### SUMMARY
Since commit df581eb, ovirt_vm has been unable to find a template that is shared among multiple clusters in the same datacenter. Moreover, the documentation says the `cluster` parameter defines where VMs will be created, so it should not be used to filter the template search, which is unrelated to where the VM will be created.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ovirt_vm
##### ANSIBLE VERSION
```paste below
2.8.2
```
##### CONFIGURATION
```paste below
```
##### OS / ENVIRONMENT
##### STEPS TO REPRODUCE
In RHEVM:
1. Create a cluster `cluster_a` and add a template `template_a` in this cluster
2. Create a cluster `cluster_b`
```yaml
- name: Provision machine
ovirt_vm:
...
cluster: "cluster_b"
template: "template_a"
...
```
##### EXPECTED RESULTS
The template should be found (Behaviour before commit df581eb)
##### ACTUAL RESULTS
It raises an error
```paste below
Template with name 'template_a' and version 'None' in cluster 'cluster_b' was not found
```
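A hypothetical sketch of the regression (illustrative only, not the module's actual code): appending a `cluster=` term to the engine search query restricts results to templates attached to that cluster, so a template shared in the datacenter but attached only to `cluster_a` is filtered out when provisioning into `cluster_b`:

```python
def build_template_search(name, cluster=None):
    # Illustrative: the old lookup searched by name only; adding the
    # cluster term causes the "template not found" failure described above
    query = 'name=%s' % name
    if cluster is not None:
        query += ' and cluster=%s' % cluster
    return query

# Old behaviour: the template is found regardless of cluster attachment
assert build_template_search('template_a') == 'name=template_a'

# New behaviour: the extra term excludes templates attached only to cluster_a
assert build_template_search('template_a', 'cluster_b') == 'name=template_a and cluster=cluster_b'
```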
|
https://github.com/ansible/ansible/issues/59189
|
https://github.com/ansible/ansible/pull/60461
|
6110dcc789c4dfadc43e79bb20227fded9b9c0bd
|
5972567ab61d1dc062f1df929bfb570a7c8c962f
| 2019-07-17T16:35:44Z |
python
| 2019-08-13T10:16:25Z |
lib/ansible/modules/cloud/ovirt/ovirt_vm.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: ovirt_vm
short_description: Module to manage Virtual Machines in oVirt/RHV
version_added: "2.2"
author:
- Ondra Machacek (@machacekondra)
description:
- This module manages the whole lifecycle of the Virtual Machine (VM) in oVirt/RHV.
- Since a VM can hold many states in oVirt/RHV, please see the notes below for how the states of the VM are handled.
options:
name:
description:
- Name of the Virtual Machine to manage.
- If the VM doesn't exist, C(name) is required. Otherwise C(id) or C(name) can be used.
id:
description:
- ID of the Virtual Machine to manage.
state:
description:
- Should the Virtual Machine be running/stopped/present/absent/suspended/next_run/registered/exported.
When C(state) is I(registered) and the unregistered VM's name
belongs to a VM that is already registered in the engine in the same DC,
registration of the unregistered VM fails.
- I(present) state will create/update VM and don't change its state if it already exists.
- I(running) state will create/update VM and start it.
- I(next_run) state updates the VM and if the VM has next run configuration it will be rebooted.
- Please check I(notes) to more detailed description of states.
- I(exported) state will export the VM to export domain or as OVA.
- I(registered) is supported since 2.4.
choices: [ absent, next_run, present, registered, running, stopped, suspended, exported ]
default: present
cluster:
description:
- Name of the cluster, where Virtual Machine should be created.
- Required if creating VM.
allow_partial_import:
description:
- Boolean indication whether to allow partial registration of Virtual Machine when C(state) is registered.
type: bool
version_added: "2.4"
vnic_profile_mappings:
description:
- "Mapper which maps an external virtual NIC profile to one that exists in the engine when C(state) is registered.
vnic_profile is described by the following dictionary:"
suboptions:
source_network_name:
description:
- The network name of the source network.
source_profile_name:
description:
- The profile name related to the source network.
target_profile_id:
description:
- The id of the target profile id to be mapped to in the engine.
version_added: "2.5"
cluster_mappings:
description:
- "Mapper which maps cluster name between VM's OVF and the destination cluster this VM should be registered to,
relevant when C(state) is registered.
Cluster mapping is described by the following dictionary:"
suboptions:
source_name:
description:
- The name of the source cluster.
dest_name:
description:
- The name of the destination cluster.
version_added: "2.5"
role_mappings:
description:
- "Mapper which maps role name between VM's OVF and the destination role this VM should be registered to,
relevant when C(state) is registered.
Role mapping is described by the following dictionary:"
suboptions:
source_name:
description:
- The name of the source role.
dest_name:
description:
- The name of the destination role.
version_added: "2.5"
domain_mappings:
description:
- "Mapper which maps aaa domain name between VM's OVF and the destination aaa domain this VM should be registered to,
relevant when C(state) is registered.
The aaa domain mapping is described by the following dictionary:"
suboptions:
source_name:
description:
- The name of the source aaa domain.
dest_name:
description:
- The name of the destination aaa domain.
version_added: "2.5"
affinity_group_mappings:
description:
- "Mapper which maps affinity name between VM's OVF and the destination affinity this VM should be registered to,
relevant when C(state) is registered."
version_added: "2.5"
affinity_label_mappings:
description:
- "Mapper which maps affinity label name between VM's OVF and the destination label this VM should be registered to,
relevant when C(state) is registered."
version_added: "2.5"
lun_mappings:
description:
- "Mapper which maps lun between VM's OVF and the destination lun this VM should contain, relevant when C(state) is registered.
lun_mappings is described by the following dictionary:
- C(logical_unit_id): The logical unit number to identify a logical unit,
- C(logical_unit_port): The port being used to connect with the LUN disk.
- C(logical_unit_portal): The portal being used to connect with the LUN disk.
- C(logical_unit_address): The address of the block storage host.
- C(logical_unit_target): The iSCSI specification located on an iSCSI server
- C(logical_unit_username): Username to be used to connect to the block storage host.
- C(logical_unit_password): Password to be used to connect to the block storage host.
- C(storage_type): The storage type which the LUN reside on (iscsi or fcp)"
version_added: "2.5"
reassign_bad_macs:
description:
- "Boolean indication whether to reassign bad macs when C(state) is registered."
type: bool
version_added: "2.5"
template:
description:
- Name of the template, which should be used to create Virtual Machine.
- Required if creating VM.
- If template is not specified and VM doesn't exist, VM will be created from I(Blank) template.
template_version:
description:
- Version number of the template to be used for VM.
- By default the latest available version of the template is used.
version_added: "2.3"
use_latest_template_version:
description:
- Specify if latest template version should be used, when running a stateless VM.
- If this parameter is set to I(yes) stateless VM is created.
type: bool
version_added: "2.3"
storage_domain:
description:
- Name of the storage domain where all template disks should be created.
- This parameter is considered only when C(template) is provided.
- IMPORTANT - This parameter is not idempotent, if the VM exists and you specify a different storage domain,
disk won't move.
version_added: "2.4"
disk_format:
description:
- Specify format of the disk.
- If C(cow) format is used, disk will be created as sparse, so space will be allocated for the volume as needed, also known as I(thin provision).
- If C(raw) format is used, disk storage will be allocated right away, also known as I(preallocated).
- Note that this option isn't idempotent as it's not currently possible to change format of the disk via API.
- This parameter is considered only when C(template) and C(storage domain) is provided.
choices: [ cow, raw ]
default: cow
version_added: "2.4"
memory:
description:
- Amount of memory of the Virtual Machine. Prefix uses IEC 60027-2 standard (for example 1GiB, 1024MiB).
- Default value is set by engine.
memory_guaranteed:
description:
- Amount of minimal guaranteed memory of the Virtual Machine.
Prefix uses IEC 60027-2 standard (for example 1GiB, 1024MiB).
- C(memory_guaranteed) parameter can't be lower than C(memory) parameter.
- Default value is set by engine.
memory_max:
description:
- Upper bound of virtual machine memory up to which memory hot-plug can be performed.
Prefix uses IEC 60027-2 standard (for example 1GiB, 1024MiB).
- Default value is set by engine.
version_added: "2.5"
cpu_shares:
description:
- Set a CPU shares for this Virtual Machine.
- Default value is set by oVirt/RHV engine.
cpu_cores:
description:
- Number of virtual CPUs cores of the Virtual Machine.
- Default value is set by oVirt/RHV engine.
cpu_sockets:
description:
- Number of virtual CPUs sockets of the Virtual Machine.
- Default value is set by oVirt/RHV engine.
cpu_threads:
description:
            - Number of virtual CPU threads of the Virtual Machine.
- Default value is set by oVirt/RHV engine.
version_added: "2.5"
type:
description:
- Type of the Virtual Machine.
- Default value is set by oVirt/RHV engine.
- I(high_performance) is supported since Ansible 2.5 and oVirt/RHV 4.2.
choices: [ desktop, server, high_performance ]
quota_id:
description:
- "Virtual Machine quota ID to be used for disk. By default quota is chosen by oVirt/RHV engine."
version_added: "2.5"
operating_system:
description:
- Operating system of the Virtual Machine.
- Default value is set by oVirt/RHV engine.
- "Possible values: debian_7, freebsd, freebsdx64, other, other_linux,
other_linux_ppc64, other_ppc64, rhel_3, rhel_4, rhel_4x64, rhel_5, rhel_5x64,
rhel_6, rhel_6x64, rhel_6_ppc64, rhel_7x64, rhel_7_ppc64, sles_11, sles_11_ppc64,
ubuntu_12_04, ubuntu_12_10, ubuntu_13_04, ubuntu_13_10, ubuntu_14_04, ubuntu_14_04_ppc64,
windows_10, windows_10x64, windows_2003, windows_2003x64, windows_2008, windows_2008x64,
windows_2008r2x64, windows_2008R2x64, windows_2012x64, windows_2012R2x64, windows_7,
windows_7x64, windows_8, windows_8x64, windows_xp"
boot_devices:
description:
- List of boot devices which should be used to boot. For example C([ cdrom, hd ]).
- Default value is set by oVirt/RHV engine.
choices: [ cdrom, hd, network ]
boot_menu:
description:
- "I(True) enable menu to select boot device, I(False) to disable it. By default is chosen by oVirt/RHV engine."
type: bool
version_added: "2.5"
usb_support:
description:
- "I(True) enable USB support, I(False) to disable it. By default is chosen by oVirt/RHV engine."
type: bool
version_added: "2.5"
serial_console:
description:
- "I(True) enable VirtIO serial console, I(False) to disable it. By default is chosen by oVirt/RHV engine."
type: bool
version_added: "2.5"
sso:
description:
- "I(True) enable Single Sign On by Guest Agent, I(False) to disable it. By default is chosen by oVirt/RHV engine."
type: bool
version_added: "2.5"
host:
description:
- Specify host where Virtual Machine should be running. By default the host is chosen by engine scheduler.
- This parameter is used only when C(state) is I(running) or I(present).
high_availability:
description:
- If I(yes) Virtual Machine will be set as highly available.
- If I(no) Virtual Machine won't be set as highly available.
- If no value is passed, default value is set by oVirt/RHV engine.
type: bool
high_availability_priority:
description:
- Indicates the priority of the virtual machine inside the run and migration queues.
Virtual machines with higher priorities will be started and migrated before virtual machines with lower
priorities. The value is an integer between 0 and 100. The higher the value, the higher the priority.
- If no value is passed, default value is set by oVirt/RHV engine.
version_added: "2.5"
lease:
description:
            - Name of the storage domain this virtual machine lease resides on. Pass an empty string to remove the lease.
- NOTE - Supported since oVirt 4.1.
version_added: "2.4"
custom_compatibility_version:
description:
- "Enables a virtual machine to be customized to its own compatibility version. If
'C(custom_compatibility_version)' is set, it overrides the cluster's compatibility version
for this particular virtual machine."
version_added: "2.7"
host_devices:
description:
            - Single Root I/O Virtualization - technology that allows a single device to expose multiple endpoints that can be passed to VMs.
            - host_devices is a list of dictionaries which contain the name and state of a device.
version_added: "2.7"
delete_protected:
description:
- If I(yes) Virtual Machine will be set as delete protected.
- If I(no) Virtual Machine won't be set as delete protected.
- If no value is passed, default value is set by oVirt/RHV engine.
type: bool
stateless:
description:
- If I(yes) Virtual Machine will be set as stateless.
- If I(no) Virtual Machine will be unset as stateless.
- If no value is passed, default value is set by oVirt/RHV engine.
type: bool
clone:
description:
- If I(yes) then the disks of the created virtual machine will be cloned and independent of the template.
- This parameter is used only when C(state) is I(running) or I(present) and VM didn't exist before.
type: bool
default: 'no'
clone_permissions:
description:
- If I(yes) then the permissions of the template (only the direct ones, not the inherited ones)
will be copied to the created virtual machine.
- This parameter is used only when C(state) is I(running) or I(present) and VM didn't exist before.
type: bool
default: 'no'
cd_iso:
description:
- ISO file from ISO storage domain which should be attached to Virtual Machine.
- If you pass empty string the CD will be ejected from VM.
- If used with C(state) I(running) or I(present) and VM is running the CD will be attached to VM.
- If used with C(state) I(running) or I(present) and VM is down the CD will be attached to VM persistently.
force:
description:
            - Please check the I(Synopsis) for a more detailed description of the force parameter,
              as it can behave differently in different situations.
type: bool
default: 'no'
nics:
description:
- List of NICs, which should be attached to Virtual Machine. NIC is described by following dictionary.
suboptions:
name:
description:
- Name of the NIC.
profile_name:
description:
- Profile name where NIC should be attached.
interface:
description:
- Type of the network interface.
choices: ['virtio', 'e1000', 'rtl8139']
default: 'virtio'
mac_address:
description:
- Custom MAC address of the network interface, by default it's obtained from MAC pool.
- "NOTE - This parameter is used only when C(state) is I(running) or I(present) and is able to only create NICs.
To manage NICs of the VM in more depth please use M(ovirt_nic) module instead."
disks:
description:
- List of disks, which should be attached to Virtual Machine. Disk is described by following dictionary.
suboptions:
name:
description:
- Name of the disk. Either C(name) or C(id) is required.
id:
description:
- ID of the disk. Either C(name) or C(id) is required.
interface:
description:
- Interface of the disk.
choices: ['virtio', 'IDE']
default: 'virtio'
bootable:
description:
- I(True) if the disk should be bootable, default is non bootable.
type: bool
activate:
description:
- I(True) if the disk should be activated, default is activated.
- "NOTE - This parameter is used only when C(state) is I(running) or I(present) and is able to only attach disks.
To manage disks of the VM in more depth please use M(ovirt_disk) module instead."
type: bool
sysprep:
description:
- Dictionary with values for Windows Virtual Machine initialization using sysprep.
suboptions:
host_name:
description:
- Hostname to be set to Virtual Machine when deployed.
active_directory_ou:
description:
- Active Directory Organizational Unit, to be used for login of user.
org_name:
description:
- Organization name to be set to Windows Virtual Machine.
domain:
description:
- Domain to be set to Windows Virtual Machine.
timezone:
description:
- Timezone to be set to Windows Virtual Machine.
ui_language:
description:
- UI language of the Windows Virtual Machine.
system_locale:
description:
- System localization of the Windows Virtual Machine.
input_locale:
description:
- Input localization of the Windows Virtual Machine.
windows_license_key:
description:
- License key to be set to Windows Virtual Machine.
user_name:
description:
- Username to be used for set password to Windows Virtual Machine.
root_password:
description:
- Password to be set for username to Windows Virtual Machine.
cloud_init:
description:
- Dictionary with values for Unix-like Virtual Machine initialization using cloud init.
suboptions:
host_name:
description:
- Hostname to be set to Virtual Machine when deployed.
timezone:
description:
- Timezone to be set to Virtual Machine when deployed.
user_name:
description:
- Username to be used to set password to Virtual Machine when deployed.
root_password:
description:
- Password to be set for user specified by C(user_name) parameter.
authorized_ssh_keys:
description:
- Use this SSH keys to login to Virtual Machine.
regenerate_ssh_keys:
description:
- If I(True) SSH keys will be regenerated on Virtual Machine.
type: bool
custom_script:
description:
- Cloud-init script which will be executed on Virtual Machine when deployed.
- This is appended to the end of the cloud-init script generated by any other options.
dns_servers:
description:
- DNS servers to be configured on Virtual Machine.
dns_search:
description:
- DNS search domains to be configured on Virtual Machine.
nic_boot_protocol:
description:
- Set boot protocol of the network interface of Virtual Machine.
choices: ['none', 'dhcp', 'static']
nic_ip_address:
description:
- If boot protocol is static, set this IP address to network interface of Virtual Machine.
nic_netmask:
description:
- If boot protocol is static, set this netmask to network interface of Virtual Machine.
nic_gateway:
description:
- If boot protocol is static, set this gateway to network interface of Virtual Machine.
nic_boot_protocol_v6:
description:
- Set boot protocol of the network interface of Virtual Machine.
choices: ['none', 'dhcp', 'static']
version_added: "2.9"
nic_ip_address_v6:
description:
- If boot protocol is static, set this IP address to network interface of Virtual Machine.
version_added: "2.9"
nic_netmask_v6:
description:
- If boot protocol is static, set this netmask to network interface of Virtual Machine.
version_added: "2.9"
nic_gateway_v6:
description:
- If boot protocol is static, set this gateway to network interface of Virtual Machine.
- For IPv6 addresses the value is an integer in the range of 0-128, which represents the subnet prefix.
version_added: "2.9"
nic_name:
description:
                    - Set the name of the network interface of the Virtual Machine.
nic_on_boot:
description:
- If I(True) network interface will be set to start on boot.
type: bool
cloud_init_nics:
description:
- List of dictionaries representing network interfaces to be setup by cloud init.
- This option is used, when user needs to setup more network interfaces via cloud init.
- If one network interface is enough, user should use C(cloud_init) I(nic_*) parameters. C(cloud_init) I(nic_*) parameters
are merged with C(cloud_init_nics) parameters.
suboptions:
nic_boot_protocol:
description:
- Set boot protocol of the network interface of Virtual Machine. Can be one of C(none), C(dhcp) or C(static).
nic_ip_address:
description:
- If boot protocol is static, set this IP address to network interface of Virtual Machine.
nic_netmask:
description:
- If boot protocol is static, set this netmask to network interface of Virtual Machine.
nic_gateway:
description:
- If boot protocol is static, set this gateway to network interface of Virtual Machine.
nic_boot_protocol_v6:
description:
- Set boot protocol of the network interface of Virtual Machine. Can be one of C(none), C(dhcp) or C(static).
version_added: "2.9"
nic_ip_address_v6:
description:
- If boot protocol is static, set this IP address to network interface of Virtual Machine.
version_added: "2.9"
nic_netmask_v6:
description:
- If boot protocol is static, set this netmask to network interface of Virtual Machine.
version_added: "2.9"
nic_gateway_v6:
description:
- If boot protocol is static, set this gateway to network interface of Virtual Machine.
- For IPv6 addresses the value is an integer in the range of 0-128, which represents the subnet prefix.
version_added: "2.9"
nic_name:
description:
                    - Set the name of the network interface of the Virtual Machine.
nic_on_boot:
description:
- If I(True) network interface will be set to start on boot.
type: bool
version_added: "2.3"
cloud_init_persist:
description:
- "If I(yes) the C(cloud_init) or C(sysprep) parameters will be saved for the virtual machine
and the virtual machine won't be started as run-once."
type: bool
version_added: "2.5"
aliases: [ 'sysprep_persist' ]
default: 'no'
kernel_params_persist:
description:
- "If I(true) C(kernel_params), C(initrd_path) and C(kernel_path) will persist in virtual machine configuration,
if I(False) it will be used for run once."
type: bool
version_added: "2.8"
kernel_path:
description:
- Path to a kernel image used to boot the virtual machine.
- Kernel image must be stored on either the ISO domain or on the host's storage.
version_added: "2.3"
initrd_path:
description:
- Path to an initial ramdisk to be used with the kernel specified by C(kernel_path) option.
- Ramdisk image must be stored on either the ISO domain or on the host's storage.
version_added: "2.3"
kernel_params:
description:
- Kernel command line parameters (formatted as string) to be used with the kernel specified by C(kernel_path) option.
version_added: "2.3"
instance_type:
description:
- Name of virtual machine's hardware configuration.
- By default no instance type is used.
version_added: "2.3"
description:
description:
- Description of the Virtual Machine.
version_added: "2.3"
comment:
description:
- Comment of the Virtual Machine.
version_added: "2.3"
timezone:
description:
- Sets time zone offset of the guest hardware clock.
- For example C(Etc/GMT)
version_added: "2.3"
serial_policy:
description:
- Specify a serial number policy for the Virtual Machine.
- Following options are supported.
- C(vm) - Sets the Virtual Machine's UUID as its serial number.
- C(host) - Sets the host's UUID as the Virtual Machine's serial number.
- C(custom) - Allows you to specify a custom serial number in C(serial_policy_value).
choices: ['vm', 'host', 'custom']
version_added: "2.3"
serial_policy_value:
description:
- Allows you to specify a custom serial number.
- This parameter is used only when C(serial_policy) is I(custom).
version_added: "2.3"
vmware:
description:
- Dictionary of values to be used to connect to VMware and import
a virtual machine to oVirt.
suboptions:
username:
description:
- The username to authenticate against the VMware.
password:
description:
- The password to authenticate against the VMware.
url:
description:
- The URL to be passed to the I(virt-v2v) tool for conversion.
- For example I(vpx://wmware_user@vcenter-host/DataCenter/Cluster/esxi-host?no_verify=1)
drivers_iso:
description:
- The name of the ISO containing drivers that can be used during the I(virt-v2v) conversion process.
sparse:
description:
- Specifies the disk allocation policy of the resulting virtual machine. I(true) for sparse, I(false) for preallocated.
type: bool
default: true
storage_domain:
description:
                    - Specifies the target storage domain for converted disks. This parameter is required.
version_added: "2.3"
xen:
description:
- Dictionary of values to be used to connect to XEN and import
a virtual machine to oVirt.
suboptions:
url:
description:
- The URL to be passed to the I(virt-v2v) tool for conversion.
                    - For example I(xen+ssh://[email protected]). This parameter is required.
drivers_iso:
description:
- The name of the ISO containing drivers that can be used during the I(virt-v2v) conversion process.
sparse:
description:
- Specifies the disk allocation policy of the resulting virtual machine. I(true) for sparse, I(false) for preallocated.
type: bool
default: true
storage_domain:
description:
                    - Specifies the target storage domain for converted disks. This parameter is required.
version_added: "2.3"
kvm:
description:
- Dictionary of values to be used to connect to kvm and import
a virtual machine to oVirt.
suboptions:
name:
description:
- The name of the KVM virtual machine.
username:
description:
- The username to authenticate against the KVM.
password:
description:
- The password to authenticate against the KVM.
url:
description:
- The URL to be passed to the I(virt-v2v) tool for conversion.
                    - For example I(qemu:///system). This parameter is required.
drivers_iso:
description:
- The name of the ISO containing drivers that can be used during the I(virt-v2v) conversion process.
sparse:
description:
- Specifies the disk allocation policy of the resulting virtual machine. I(true) for sparse, I(false) for preallocated.
type: bool
default: true
storage_domain:
description:
                    - Specifies the target storage domain for converted disks. This parameter is required.
version_added: "2.3"
cpu_mode:
description:
- "CPU mode of the virtual machine. It can be some of the following: I(host_passthrough), I(host_model) or I(custom)."
- "For I(host_passthrough) CPU type you need to set C(placement_policy) to I(pinned)."
- "If no value is passed, default value is set by oVirt/RHV engine."
version_added: "2.5"
placement_policy:
description:
- "The configuration of the virtual machine's placement policy."
- "If no value is passed, default value is set by oVirt/RHV engine."
- "Placement policy can be one of the following values:"
suboptions:
migratable:
description:
- "Allow manual and automatic migration."
pinned:
description:
- "Do not allow migration."
user_migratable:
description:
- "Allow manual migration only."
version_added: "2.5"
ticket:
description:
- "If I(true), in addition return I(remote_vv_file) inside I(vm) dictionary, which contains compatible
               content for remote-viewer application. Works only when C(state) is I(running)."
version_added: "2.7"
type: bool
cpu_pinning:
description:
- "CPU Pinning topology to map virtual machine CPU to host CPU."
- "CPU Pinning topology is a list of dictionary which can have following values:"
suboptions:
cpu:
description:
- "Number of the host CPU."
vcpu:
description:
- "Number of the virtual machine CPU."
version_added: "2.5"
soundcard_enabled:
description:
- "If I(true), the sound card is added to the virtual machine."
type: bool
version_added: "2.5"
smartcard_enabled:
description:
- "If I(true), use smart card authentication."
type: bool
version_added: "2.5"
io_threads:
description:
- "Number of IO threads used by virtual machine. I(0) means IO threading disabled."
version_added: "2.5"
ballooning_enabled:
description:
- "If I(true), use memory ballooning."
- "Memory balloon is a guest device, which may be used to re-distribute / reclaim the host memory
               based on VM needs in a dynamic way. In this way it's possible to create memory overcommitment states."
type: bool
version_added: "2.5"
numa_tune_mode:
description:
- "Set how the memory allocation for NUMA nodes of this VM is applied (relevant if NUMA nodes are set for this VM)."
- "It can be one of the following: I(interleave), I(preferred) or I(strict)."
- "If no value is passed, default value is set by oVirt/RHV engine."
choices: ['interleave', 'preferred', 'strict']
version_added: "2.6"
numa_nodes:
description:
- "List of vNUMA Nodes to set for this VM and pin them to assigned host's physical NUMA node."
- "Each vNUMA node is described by following dictionary:"
suboptions:
index:
description:
- "The index of this NUMA node (mandatory)."
memory:
description:
- "Memory size of the NUMA node in MiB (mandatory)."
cores:
description:
- "list of VM CPU cores indexes to be included in this NUMA node (mandatory)."
numa_node_pins:
description:
- "list of physical NUMA node indexes to pin this virtual NUMA node to."
version_added: "2.6"
rng_device:
description:
- "Random number generator (RNG). You can choose of one the following devices I(urandom), I(random) or I(hwrng)."
- "In order to select I(hwrng), you must have it enabled on cluster first."
- "/dev/urandom is used for cluster version >= 4.1, and /dev/random for cluster version <= 4.0"
version_added: "2.5"
custom_properties:
description:
- "Properties sent to VDSM to configure various hooks."
- "Custom properties is a list of dictionary which can have following values:"
suboptions:
name:
description:
- "Name of the custom property. For example: I(hugepages), I(vhost), I(sap_agent), etc."
regexp:
description:
- "Regular expression to set for custom property."
value:
description:
- "Value to set for custom property."
version_added: "2.5"
watchdog:
description:
- "Assign watchdog device for the virtual machine."
- "Watchdogs is a dictionary which can have following values:"
suboptions:
model:
description:
- "Model of the watchdog device. For example: I(i6300esb), I(diag288) or I(null)."
action:
description:
- "Watchdog action to be performed when watchdog is triggered. For example: I(none), I(reset), I(poweroff), I(pause) or I(dump)."
version_added: "2.5"
graphical_console:
description:
- "Assign graphical console to the virtual machine."
suboptions:
headless_mode:
description:
- If I(true) disable the graphics console for this virtual machine.
type: bool
protocol:
description:
- Graphical protocol, a list of I(spice), I(vnc), or both.
version_added: "2.5"
exclusive:
description:
- "When C(state) is I(exported) this parameter indicates if the existing VM with the
same name should be overwritten."
version_added: "2.8"
type: bool
export_domain:
description:
- "When C(state) is I(exported)this parameter specifies the name of the export storage domain."
version_added: "2.8"
export_ova:
description:
- Dictionary of values to be used to export VM as OVA.
suboptions:
host:
description:
- The name of the destination host where the OVA has to be exported.
directory:
description:
- The name of the directory where the OVA has to be exported.
filename:
description:
- The name of the exported OVA file.
version_added: "2.8"
force_migrate:
description:
- If I(true), the VM will migrate when I(placement_policy=user-migratable) but not when I(placement_policy=pinned).
version_added: "2.8"
type: bool
migrate:
description:
- "If I(true), the VM will migrate to any available host."
version_added: "2.8"
type: bool
next_run:
description:
- "If I(true), the update will not be applied to the VM immediately and will be only applied when virtual machine is restarted."
- NOTE - If there are multiple next run configuration changes on the VM, the first change may get reverted if this option is not passed.
version_added: "2.8"
type: bool
snapshot_name:
description:
- "Snapshot to clone VM from."
- "Snapshot with description specified should exist."
- "You have to specify C(snapshot_vm) parameter with virtual machine name of this snapshot."
version_added: "2.9"
snapshot_vm:
description:
- "Source VM to clone VM from."
- "VM should have snapshot specified by C(snapshot)."
- "If C(snapshot_name) specified C(snapshot_vm) is required."
version_added: "2.9"
template_cluster:
description:
- "Template cluster name. When not defined C(cluster) is used."
- "Allows you to create virtual machine in diffrent cluster than template cluster name."
version_added: "2.9"
notes:
- If VM is in I(UNASSIGNED) or I(UNKNOWN) state before any operation, the module will fail.
If VM is in I(IMAGE_LOCKED) state before any operation, we try to wait for VM to be I(DOWN).
If VM is in I(SAVING_STATE) state before any operation, we try to wait for VM to be I(SUSPENDED).
If VM is in I(POWERING_DOWN) state before any operation, we try to wait for VM to be I(UP) or I(DOWN). VM can
get into I(UP) state from I(POWERING_DOWN) state, when there is no ACPI or guest agent running inside VM, or
if the shutdown operation fails.
      When the user specifies I(UP) C(state), we always wait for the VM to be in I(UP) state in case VM is I(MIGRATING),
      I(REBOOTING), I(POWERING_UP), I(RESTORING_STATE), I(WAIT_FOR_LAUNCH). In other states we run start operation on VM.
      When the user specifies I(stopped) C(state), and the user passes the C(force) parameter set to I(true), we forcibly stop the VM in
      any state. If the user doesn't pass the C(force) parameter, we always wait for the VM to be in UP state in case VM is
      I(MIGRATING), I(REBOOTING), I(POWERING_UP), I(RESTORING_STATE), I(WAIT_FOR_LAUNCH). If VM is in I(PAUSED) or
      I(SUSPENDED) state, we start the VM. Then we gracefully shutdown the VM.
      When the user specifies I(suspended) C(state), we always wait for the VM to be in UP state in case VM is I(MIGRATING),
      I(REBOOTING), I(POWERING_UP), I(RESTORING_STATE), I(WAIT_FOR_LAUNCH). If VM is in I(PAUSED) or I(DOWN) state,
      we start the VM. Then we suspend the VM.
      When the user specifies I(absent) C(state), we forcibly stop the VM in any state and remove it.
extends_documentation_fragment: ovirt
'''
EXAMPLES = '''
# Examples don't contain auth parameter for simplicity,
# look at ovirt_auth module to see how to reuse authentication:
- name: Creates a new Virtual Machine from template named 'rhel7_template'
ovirt_vm:
state: present
name: myvm
template: rhel7_template
cluster: mycluster
- name: Register VM
ovirt_vm:
state: registered
storage_domain: mystorage
cluster: mycluster
name: myvm
- name: Register VM using id
ovirt_vm:
state: registered
storage_domain: mystorage
cluster: mycluster
id: 1111-1111-1111-1111
- name: Register VM, allowing partial import
ovirt_vm:
state: registered
storage_domain: mystorage
allow_partial_import: "True"
cluster: mycluster
id: 1111-1111-1111-1111
- name: Register VM with vnic profile mappings and reassign bad macs
ovirt_vm:
state: registered
storage_domain: mystorage
cluster: mycluster
id: 1111-1111-1111-1111
vnic_profile_mappings:
- source_network_name: mynetwork
source_profile_name: mynetwork
target_profile_id: 3333-3333-3333-3333
- source_network_name: mynetwork2
source_profile_name: mynetwork2
target_profile_id: 4444-4444-4444-4444
reassign_bad_macs: "True"
- name: Register VM with mappings
ovirt_vm:
state: registered
storage_domain: mystorage
cluster: mycluster
id: 1111-1111-1111-1111
role_mappings:
- source_name: Role_A
dest_name: Role_B
domain_mappings:
- source_name: Domain_A
dest_name: Domain_B
lun_mappings:
- source_storage_type: iscsi
source_logical_unit_id: 1IET_000d0001
source_logical_unit_port: 3260
source_logical_unit_portal: 1
source_logical_unit_address: 10.34.63.203
source_logical_unit_target: iqn.2016-08-09.brq.str-01:omachace
dest_storage_type: iscsi
dest_logical_unit_id: 1IET_000d0002
dest_logical_unit_port: 3260
dest_logical_unit_portal: 1
dest_logical_unit_address: 10.34.63.204
dest_logical_unit_target: iqn.2016-08-09.brq.str-02:omachace
affinity_group_mappings:
- source_name: Affinity_A
dest_name: Affinity_B
affinity_label_mappings:
- source_name: Label_A
dest_name: Label_B
cluster_mappings:
- source_name: cluster_A
dest_name: cluster_B
- name: Creates a stateless VM which will always use latest template version
ovirt_vm:
name: myvm
template: rhel7
cluster: mycluster
use_latest_template_version: true
# Creates a new server rhel7 Virtual Machine from Blank template
# on brq01 cluster with 2GiB memory and 2 vcpu cores/sockets
# and attach bootable disk with name rhel7_disk and attach virtio NIC
- ovirt_vm:
state: present
cluster: brq01
name: myvm
memory: 2GiB
cpu_cores: 2
cpu_sockets: 2
cpu_shares: 1024
type: server
operating_system: rhel_7x64
disks:
- name: rhel7_disk
bootable: True
nics:
- name: nic1
# Change VM Name
- ovirt_vm:
id: 00000000-0000-0000-0000-000000000000
name: "new_vm_name"
- name: Run VM with cloud init
ovirt_vm:
name: rhel7
template: rhel7
cluster: Default
memory: 1GiB
high_availability: true
high_availability_priority: 50 # Available from Ansible 2.5
cloud_init:
nic_boot_protocol: static
nic_ip_address: 10.34.60.86
nic_netmask: 255.255.252.0
nic_gateway: 10.34.63.254
nic_name: eth1
nic_on_boot: true
host_name: example.com
custom_script: |
write_files:
- content: |
Hello, world!
path: /tmp/greeting.txt
permissions: '0644'
user_name: root
root_password: super_password
- name: Run VM with cloud init, with multiple network interfaces
ovirt_vm:
name: rhel7_4
template: rhel7
cluster: mycluster
cloud_init_nics:
- nic_name: eth0
nic_boot_protocol: dhcp
nic_on_boot: true
- nic_name: eth1
nic_boot_protocol: static
nic_ip_address: 10.34.60.86
nic_netmask: 255.255.252.0
nic_gateway: 10.34.63.254
nic_on_boot: true
# IP version 6 parameters are supported since ansible 2.9
- nic_name: eth2
nic_boot_protocol_v6: static
nic_ip_address_v6: '2620:52:0:2282:b898:1f69:6512:36c5'
nic_gateway_v6: '2620:52:0:2282:b898:1f69:6512:36c9'
nic_netmask_v6: '120'
nic_on_boot: true
- nic_name: eth3
nic_on_boot: true
nic_boot_protocol_v6: dhcp
- name: Run VM with sysprep
ovirt_vm:
name: windows2012R2_AD
template: windows2012R2
cluster: Default
memory: 3GiB
high_availability: true
sysprep:
host_name: windowsad.example.com
user_name: Administrator
root_password: SuperPassword123
- name: Migrate/Run VM to/on host named 'host1'
ovirt_vm:
state: running
name: myvm
host: host1
- name: Migrate VM to any available host
ovirt_vm:
state: running
name: myvm
migrate: true
- name: Change VMs CD
ovirt_vm:
name: myvm
cd_iso: drivers.iso
- name: Eject VMs CD
ovirt_vm:
name: myvm
cd_iso: ''
- name: Boot VM from CD
ovirt_vm:
name: myvm
cd_iso: centos7_x64.iso
boot_devices:
- cdrom
- name: Stop vm
ovirt_vm:
state: stopped
name: myvm
- name: Upgrade memory to already created VM
ovirt_vm:
name: myvm
memory: 4GiB
- name: Hot plug memory to already created and running VM (VM won't be restarted)
ovirt_vm:
name: myvm
memory: 4GiB
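# Illustrative sketch of the documented kernel_path, initrd_path and
# kernel_params options; the image paths below are hypothetical placeholders,
# adjust them for your environment.
- name: Run VM with a custom kernel, initrd and kernel command line
  ovirt_vm:
    name: myvm
    kernel_path: /tmp/vmlinuz
    initrd_path: /tmp/initrd.img
    kernel_params: console=ttyS0
    kernel_params_persist: true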
# Create/update a VM to run with two vNUMA nodes and pin them to physical NUMA nodes as follows:
# vnuma index 0-> numa index 0, vnuma index 1-> numa index 1
- name: Create a VM to run with two vNUMA nodes
ovirt_vm:
name: myvm
cluster: mycluster
numa_tune_mode: "interleave"
numa_nodes:
- index: 0
cores: [0]
memory: 20
numa_node_pins: [0]
- index: 1
cores: [1]
memory: 30
numa_node_pins: [1]
- name: Update an existing VM to run without previously created vNUMA nodes (i.e. remove all vNUMA nodes+NUMA pinning setting)
ovirt_vm:
name: myvm
cluster: mycluster
state: "present"
numa_tune_mode: "interleave"
numa_nodes:
- index: -1
# When change on the VM needs restart of the VM, use next_run state,
# The VM will be updated and rebooted if there are any changes.
# If present state would be used, VM won't be restarted.
- ovirt_vm:
state: next_run
name: myvm
boot_devices:
- network
- name: Import virtual machine from VMware
ovirt_vm:
state: stopped
cluster: mycluster
name: vmware_win10
timeout: 1800
poll_interval: 30
vmware:
url: vpx://[email protected]/Folder1/Cluster1/2.3.4.5?no_verify=1
name: windows10
storage_domain: mynfs
username: user
password: password
- name: Create vm from template and create all disks on specific storage domain
ovirt_vm:
name: vm_test
cluster: mycluster
template: mytemplate
storage_domain: mynfs
nics:
- name: nic1
- name: Remove VM, if VM is running it will be stopped
ovirt_vm:
state: absent
name: myvm
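# Illustrative sketch of the documented custom_properties option; the property
# name and value depend on the VDSM hooks installed on your hosts.
- name: Run VM with a custom property for the hugepages hook
  ovirt_vm:
    name: myvm
    state: running
    custom_properties:
      - name: hugepages
        value: "2048"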
# Defining a specific quota for a VM:
# Since Ansible 2.5
- ovirt_quotas_facts:
data_center: Default
name: myquota
- ovirt_vm:
name: myvm
sso: False
boot_menu: True
usb_support: True
serial_console: True
quota_id: "{{ ovirt_quotas[0]['id'] }}"
- name: Create a VM that has the console configured for both Spice and VNC
ovirt_vm:
name: myvm
template: mytemplate
cluster: mycluster
graphical_console:
protocol:
- spice
- vnc
# Execute remote viewer on the VM
- block:
- name: Create a ticket for console for a running VM
ovirt_vm:
name: myvm
ticket: true
state: running
register: myvm
- name: Save ticket to file
copy:
content: "{{ myvm.vm.remote_vv_file }}"
dest: ~/vvfile.vv
- name: Run remote viewer with file
command: remote-viewer ~/vvfile.vv
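# Illustrative sketch of the documented cpu_mode and cpu_pinning options;
# the host and CPU numbers are examples only. Note that the
# host_passthrough CPU mode requires placement_policy to be pinned.
- name: Pin VM CPUs to host CPUs with host CPU passthrough
  ovirt_vm:
    name: myvm
    host: myhost
    cpu_mode: host_passthrough
    placement_policy: pinned
    cpu_pinning:
      - cpu: 1
        vcpu: 0
      - cpu: 2
        vcpu: 1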
# Default value of host_device state is present
- name: Attach host devices to virtual machine
ovirt_vm:
name: myvm
host: myhost
placement_policy: pinned
host_devices:
- name: pci_0000_00_06_0
- name: pci_0000_00_07_0
state: absent
- name: pci_0000_00_08_0
state: present
- name: Export the VM as OVA
ovirt_vm:
name: myvm
state: exported
cluster: mycluster
export_ova:
host: myhost
filename: myvm.ova
directory: /tmp/
- name: Clone VM from snapshot
ovirt_vm:
snapshot_vm: myvm
snapshot_name: myvm_snap
name: myvm_clone
state: present
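One more illustrative example, assuming the module's cloud_init suboptions (host_name, user_name, root_password); all values below are placeholders:

```yaml
- name: Run a VM and provision it with cloud-init on first boot
  ovirt_vm:
    name: myvm
    template: mytemplate
    cluster: mycluster
    state: running
    cloud_init:
      host_name: myvm.example.com
      user_name: root
      root_password: super_password
```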
'''
RETURN = '''
id:
description: ID of the VM which is managed
returned: On success if VM is found.
type: str
sample: 7de90f31-222c-436c-a1ca-7e655bd5b60c
vm:
description: "Dictionary of all the VM attributes. VM attributes can be found on your oVirt/RHV instance
at following url: http://ovirt.github.io/ovirt-engine-api-model/master/#types/vm.
Additionally when user sent ticket=true, this module will return also remote_vv_file
parameter in vm dictionary, which contains remote-viewer compatible file to open virtual
machine console. Please note that this file contains sensible information."
returned: On success if VM is found.
type: dict
'''
import traceback
try:
import ovirtsdk4.types as otypes
except ImportError:
pass
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.ovirt import (
BaseModule,
check_params,
check_sdk,
convert_to_bytes,
create_connection,
equal,
get_dict_of_struct,
get_entity,
get_link_name,
get_id_by_name,
ovirt_full_argument_spec,
search_by_attributes,
search_by_name,
wait,
)
class VmsModule(BaseModule):
def __init__(self, *args, **kwargs):
super(VmsModule, self).__init__(*args, **kwargs)
self._initialization = None
self._is_new = False
def __get_template_with_version(self):
"""
oVirt/RHV in version 4.1 doesn't support search by template+version_number,
so we need to list all templates with specific name and then iterate
through it's version until we find the version we look for.
"""
template = None
templates_service = self._connection.system_service().templates_service()
if self.param('template'):
cluster = self.param('template_cluster') if self.param('template_cluster') else self.param('cluster')
templates = templates_service.list(
search='name=%s and cluster=%s' % (self.param('template'), cluster)
)
if not templates:
templates = templates_service.list(
search='name=%s' % self.param('template')
)
if self.param('template_version'):
templates = [
t for t in templates
if t.version.version_number == self.param('template_version')
]
if not templates:
raise ValueError(
"Template with name '%s' and version '%s' in cluster '%s' was not found'" % (
self.param('template'),
self.param('template_version'),
cluster
)
)
template = sorted(templates, key=lambda t: t.version.version_number, reverse=True)[0]
elif self._is_new:
# If no template is specified and the VM is about to be created, use the default template:
template = templates_service.template_service('00000000-0000-0000-0000-000000000000').get()
return template
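The selection logic in `__get_template_with_version` (filter by version if requested, otherwise take the newest version) can be exercised in isolation. This is a standalone sketch; the `Ver`/`Tmpl` namedtuples and `pick_template` are hypothetical stand-ins, not SDK types:

```python
from collections import namedtuple

# Hypothetical test doubles for SDK template objects:
Ver = namedtuple('Ver', 'version_number')
Tmpl = namedtuple('Tmpl', 'name version')

def pick_template(templates, version=None):
    """Mirror the selection above: optional exact version, else newest."""
    if version is not None:
        templates = [t for t in templates if t.version.version_number == version]
        if not templates:
            raise ValueError("Template version '%s' was not found" % version)
    # Highest version_number wins when no explicit version was requested:
    return sorted(templates, key=lambda t: t.version.version_number, reverse=True)[0]
```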
def __get_storage_domain_and_all_template_disks(self, template):
if self.param('template') is None:
return None
if self.param('storage_domain') is None:
return None
disks = list()
for att in self._connection.follow_link(template.disk_attachments):
disks.append(
otypes.DiskAttachment(
disk=otypes.Disk(
id=att.disk.id,
format=otypes.DiskFormat(self.param('disk_format')),
storage_domains=[
otypes.StorageDomain(
id=get_id_by_name(
self._connection.system_service().storage_domains_service(),
self.param('storage_domain')
)
)
]
)
)
)
return disks
def __get_snapshot(self):
if self.param('snapshot_vm') is None:
return None
if self.param('snapshot_name') is None:
return None
vms_service = self._connection.system_service().vms_service()
vm_id = get_id_by_name(vms_service, self.param('snapshot_vm'))
vm_service = vms_service.vm_service(vm_id)
snaps_service = vm_service.snapshots_service()
snaps = snaps_service.list()
snap = next(
(s for s in snaps if s.description == self.param('snapshot_name')),
None
)
return snap
def __get_cluster(self):
if self.param('cluster') is not None:
return self.param('cluster')
elif self.param('snapshot_name') is not None and self.param('snapshot_vm') is not None:
vms_service = self._connection.system_service().vms_service()
vm = search_by_name(vms_service, self.param('snapshot_vm'))
return self._connection.system_service().clusters_service().cluster_service(vm.cluster.id).get().name
def build_entity(self):
template = self.__get_template_with_version()
cluster = self.__get_cluster()
snapshot = self.__get_snapshot()
disk_attachments = self.__get_storage_domain_and_all_template_disks(template)
return otypes.Vm(
id=self.param('id'),
name=self.param('name'),
cluster=otypes.Cluster(
name=cluster
) if cluster else None,
disk_attachments=disk_attachments,
template=otypes.Template(
id=template.id,
) if template else None,
use_latest_template_version=self.param('use_latest_template_version'),
stateless=self.param('stateless') or self.param('use_latest_template_version'),
delete_protected=self.param('delete_protected'),
bios=(
otypes.Bios(boot_menu=otypes.BootMenu(enabled=self.param('boot_menu')))
) if self.param('boot_menu') is not None else None,
console=(
otypes.Console(enabled=self.param('serial_console'))
) if self.param('serial_console') is not None else None,
usb=(
otypes.Usb(enabled=self.param('usb_support'))
) if self.param('usb_support') is not None else None,
sso=(
otypes.Sso(
methods=[otypes.Method(id=otypes.SsoMethod.GUEST_AGENT)] if self.param('sso') else []
)
) if self.param('sso') is not None else None,
quota=otypes.Quota(id=self._module.params.get('quota_id')) if self.param('quota_id') is not None else None,
high_availability=otypes.HighAvailability(
enabled=self.param('high_availability'),
priority=self.param('high_availability_priority'),
) if self.param('high_availability') is not None or self.param('high_availability_priority') else None,
lease=otypes.StorageDomainLease(
storage_domain=otypes.StorageDomain(
id=get_id_by_name(
service=self._connection.system_service().storage_domains_service(),
name=self.param('lease')
) if self.param('lease') else None
)
) if self.param('lease') is not None else None,
cpu=otypes.Cpu(
topology=otypes.CpuTopology(
cores=self.param('cpu_cores'),
sockets=self.param('cpu_sockets'),
threads=self.param('cpu_threads'),
) if any((
self.param('cpu_cores'),
self.param('cpu_sockets'),
self.param('cpu_threads')
)) else None,
cpu_tune=otypes.CpuTune(
vcpu_pins=[
otypes.VcpuPin(vcpu=int(pin['vcpu']), cpu_set=str(pin['cpu'])) for pin in self.param('cpu_pinning')
],
) if self.param('cpu_pinning') else None,
mode=otypes.CpuMode(self.param('cpu_mode')) if self.param('cpu_mode') else None,
) if any((
self.param('cpu_cores'),
self.param('cpu_sockets'),
self.param('cpu_threads'),
self.param('cpu_mode'),
self.param('cpu_pinning')
)) else None,
cpu_shares=self.param('cpu_shares'),
os=otypes.OperatingSystem(
type=self.param('operating_system'),
boot=otypes.Boot(
devices=[
otypes.BootDevice(dev) for dev in self.param('boot_devices')
],
) if self.param('boot_devices') else None,
cmdline=self.param('kernel_params') if self.param('kernel_params_persist') else None,
initrd=self.param('initrd_path') if self.param('kernel_params_persist') else None,
kernel=self.param('kernel_path') if self.param('kernel_params_persist') else None,
) if (
self.param('operating_system') or self.param('boot_devices') or self.param('kernel_params_persist')
) else None,
type=otypes.VmType(
self.param('type')
) if self.param('type') else None,
memory=convert_to_bytes(
self.param('memory')
) if self.param('memory') else None,
memory_policy=otypes.MemoryPolicy(
guaranteed=convert_to_bytes(self.param('memory_guaranteed')),
ballooning=self.param('ballooning_enabled'),
max=convert_to_bytes(self.param('memory_max')),
) if any((
self.param('memory_guaranteed'),
self.param('ballooning_enabled') is not None,
self.param('memory_max')
)) else None,
instance_type=otypes.InstanceType(
id=get_id_by_name(
self._connection.system_service().instance_types_service(),
self.param('instance_type'),
),
) if self.param('instance_type') else None,
custom_compatibility_version=otypes.Version(
major=self._get_major(self.param('custom_compatibility_version')),
minor=self._get_minor(self.param('custom_compatibility_version')),
) if self.param('custom_compatibility_version') is not None else None,
description=self.param('description'),
comment=self.param('comment'),
time_zone=otypes.TimeZone(
name=self.param('timezone'),
) if self.param('timezone') else None,
serial_number=otypes.SerialNumber(
policy=otypes.SerialNumberPolicy(self.param('serial_policy')),
value=self.param('serial_policy_value'),
) if (
self.param('serial_policy') is not None or
self.param('serial_policy_value') is not None
) else None,
placement_policy=otypes.VmPlacementPolicy(
affinity=otypes.VmAffinity(self.param('placement_policy')),
hosts=[
otypes.Host(name=self.param('host')),
] if self.param('host') else None,
) if self.param('placement_policy') else None,
soundcard_enabled=self.param('soundcard_enabled'),
display=otypes.Display(
smartcard_enabled=self.param('smartcard_enabled')
) if self.param('smartcard_enabled') is not None else None,
io=otypes.Io(
threads=self.param('io_threads'),
) if self.param('io_threads') is not None else None,
numa_tune_mode=otypes.NumaTuneMode(
self.param('numa_tune_mode')
) if self.param('numa_tune_mode') else None,
rng_device=otypes.RngDevice(
source=otypes.RngSource(self.param('rng_device')),
) if self.param('rng_device') else None,
custom_properties=[
otypes.CustomProperty(
name=cp.get('name'),
regexp=cp.get('regexp'),
value=str(cp.get('value')),
) for cp in self.param('custom_properties') if cp
] if self.param('custom_properties') is not None else None,
initialization=self.get_initialization() if self.param('cloud_init_persist') else None,
snapshots=[otypes.Snapshot(id=snapshot.id)] if snapshot is not None else None,
)
def _get_export_domain_service(self):
provider_name = self._module.params['export_domain']
export_sds_service = self._connection.system_service().storage_domains_service()
export_sd_id = get_id_by_name(export_sds_service, provider_name)
return export_sds_service.service(export_sd_id)
def post_export_action(self, entity):
self._service = self._get_export_domain_service().vms_service()
def update_check(self, entity):
res = self._update_check(entity)
if entity.next_run_configuration_exists:
res = res and self._update_check(self._service.service(entity.id).get(next_run=True))
return res
def _update_check(self, entity):
def check_cpu_pinning():
if self.param('cpu_pinning'):
current = []
if entity.cpu.cpu_tune:
current = [(str(pin.cpu_set), int(pin.vcpu)) for pin in entity.cpu.cpu_tune.vcpu_pins]
passed = [(str(pin['cpu']), int(pin['vcpu'])) for pin in self.param('cpu_pinning')]
return sorted(current) == sorted(passed)
return True
def check_custom_properties():
if self.param('custom_properties'):
current = []
if entity.custom_properties:
current = [(cp.name, cp.regexp, str(cp.value)) for cp in entity.custom_properties]
passed = [(cp.get('name'), cp.get('regexp'), str(cp.get('value'))) for cp in self.param('custom_properties') if cp]
return sorted(current) == sorted(passed)
return True
def check_host():
if self.param('host') is not None:
return self.param('host') in [self._connection.follow_link(host).name for host in getattr(entity.placement_policy, 'hosts', None) or []]
return True
def check_custom_compatibility_version():
if self.param('custom_compatibility_version') is not None:
return (self._get_minor(self.param('custom_compatibility_version')) == self._get_minor(entity.custom_compatibility_version) and
self._get_major(self.param('custom_compatibility_version')) == self._get_major(entity.custom_compatibility_version))
return True
cpu_mode = getattr(entity.cpu, 'mode')
vm_display = entity.display
return (
check_cpu_pinning() and
check_custom_properties() and
check_host() and
check_custom_compatibility_version() and
not self.param('cloud_init_persist') and
not self.param('kernel_params_persist') and
equal(self.param('cluster'), get_link_name(self._connection, entity.cluster)) and equal(convert_to_bytes(self.param('memory')), entity.memory) and
equal(convert_to_bytes(self.param('memory_guaranteed')), entity.memory_policy.guaranteed) and
equal(convert_to_bytes(self.param('memory_max')), entity.memory_policy.max) and
equal(self.param('cpu_cores'), entity.cpu.topology.cores) and
equal(self.param('cpu_sockets'), entity.cpu.topology.sockets) and
equal(self.param('cpu_threads'), entity.cpu.topology.threads) and
equal(self.param('cpu_mode'), str(cpu_mode) if cpu_mode else None) and
equal(self.param('type'), str(entity.type)) and
equal(self.param('name'), str(entity.name)) and
equal(self.param('operating_system'), str(entity.os.type)) and
equal(self.param('boot_menu'), entity.bios.boot_menu.enabled) and
equal(self.param('soundcard_enabled'), entity.soundcard_enabled) and
equal(self.param('smartcard_enabled'), getattr(vm_display, 'smartcard_enabled', False)) and
equal(self.param('io_threads'), entity.io.threads) and
equal(self.param('ballooning_enabled'), entity.memory_policy.ballooning) and
equal(self.param('serial_console'), getattr(entity.console, 'enabled', None)) and
equal(self.param('usb_support'), entity.usb.enabled) and
equal(self.param('sso'), True if entity.sso.methods else False) and
equal(self.param('quota_id'), getattr(entity.quota, 'id', None)) and
equal(self.param('high_availability'), entity.high_availability.enabled) and
equal(self.param('high_availability_priority'), entity.high_availability.priority) and
equal(self.param('lease'), get_link_name(self._connection, getattr(entity.lease, 'storage_domain', None))) and
equal(self.param('stateless'), entity.stateless) and
equal(self.param('cpu_shares'), entity.cpu_shares) and
equal(self.param('delete_protected'), entity.delete_protected) and
equal(self.param('use_latest_template_version'), entity.use_latest_template_version) and
equal(self.param('boot_devices'), [str(dev) for dev in getattr(entity.os.boot, 'devices', [])]) and
equal(self.param('instance_type'), get_link_name(self._connection, entity.instance_type), ignore_case=True) and
equal(self.param('description'), entity.description) and
equal(self.param('comment'), entity.comment) and
equal(self.param('timezone'), getattr(entity.time_zone, 'name', None)) and
equal(self.param('serial_policy'), str(getattr(entity.serial_number, 'policy', None))) and
equal(self.param('serial_policy_value'), getattr(entity.serial_number, 'value', None)) and
equal(self.param('placement_policy'), str(entity.placement_policy.affinity) if entity.placement_policy else None) and
equal(self.param('numa_tune_mode'), str(entity.numa_tune_mode)) and
equal(self.param('rng_device'), str(entity.rng_device.source) if entity.rng_device else None)
)
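The `equal` helper imported from `ansible.module_utils.ovirt` drives the comparisons above; roughly, a `None` parameter means "not specified by the user" and therefore never triggers an update. A standalone sketch of that behaviour (`equal_sketch` is an illustrative name, not the real implementation):

```python
def equal_sketch(param, current, ignore_case=False):
    """None means 'not specified by the user', so it never triggers an update."""
    if param is None:
        return True
    if ignore_case and isinstance(param, str) and isinstance(current, str):
        return param.lower() == current.lower()
    return param == current
```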
def pre_create(self, entity):
# Mark if entity exists before touching it:
if entity is None:
self._is_new = True
def post_update(self, entity):
self.post_present(entity.id)
def post_present(self, entity_id):
# After creation of the VM, attach disks and NICs:
entity = self._service.service(entity_id).get()
self.__attach_disks(entity)
self.__attach_nics(entity)
self._attach_cd(entity)
self.changed = self.__attach_numa_nodes(entity) or self.changed
self.changed = self.__attach_watchdog(entity) or self.changed
self.changed = self.__attach_graphical_console(entity) or self.changed
self.changed = self.__attach_host_devices(entity) or self.changed
def pre_remove(self, entity):
# Forcibly stop the VM, if it's not in DOWN state:
if entity.status != otypes.VmStatus.DOWN:
if not self._module.check_mode:
self.changed = self.action(
action='stop',
action_condition=lambda vm: vm.status != otypes.VmStatus.DOWN,
wait_condition=lambda vm: vm.status == otypes.VmStatus.DOWN,
)['changed']
def __suspend_shutdown_common(self, vm_service):
if vm_service.get().status in [
otypes.VmStatus.MIGRATING,
otypes.VmStatus.POWERING_UP,
otypes.VmStatus.REBOOT_IN_PROGRESS,
otypes.VmStatus.WAIT_FOR_LAUNCH,
otypes.VmStatus.UP,
otypes.VmStatus.RESTORING_STATE,
]:
self._wait_for_UP(vm_service)
def _pre_shutdown_action(self, entity):
vm_service = self._service.vm_service(entity.id)
self.__suspend_shutdown_common(vm_service)
if entity.status in [otypes.VmStatus.SUSPENDED, otypes.VmStatus.PAUSED]:
vm_service.start()
self._wait_for_UP(vm_service)
return vm_service.get()
def _pre_suspend_action(self, entity):
vm_service = self._service.vm_service(entity.id)
self.__suspend_shutdown_common(vm_service)
if entity.status in [otypes.VmStatus.PAUSED, otypes.VmStatus.DOWN]:
vm_service.start()
self._wait_for_UP(vm_service)
return vm_service.get()
def _post_start_action(self, entity):
vm_service = self._service.service(entity.id)
self._wait_for_UP(vm_service)
self._attach_cd(vm_service.get())
def _attach_cd(self, entity):
cd_iso = self.param('cd_iso')
if cd_iso is not None:
vm_service = self._service.service(entity.id)
current = vm_service.get().status == otypes.VmStatus.UP and self.param('state') == 'running'
cdroms_service = vm_service.cdroms_service()
cdrom_device = cdroms_service.list()[0]
cdrom_service = cdroms_service.cdrom_service(cdrom_device.id)
cdrom = cdrom_service.get(current=current)
if getattr(cdrom.file, 'id', '') != cd_iso:
if not self._module.check_mode:
cdrom_service.update(
cdrom=otypes.Cdrom(
file=otypes.File(id=cd_iso)
),
current=current,
)
self.changed = True
return entity
def _migrate_vm(self, entity):
vm_host = self.param('host')
vm_service = self._service.vm_service(entity.id)
# The VM can be migrated only while it is UP:
if entity.status == otypes.VmStatus.UP:
if vm_host is not None:
hosts_service = self._connection.system_service().hosts_service()
current_vm_host = hosts_service.host_service(entity.host.id).get().name
if vm_host != current_vm_host:
if not self._module.check_mode:
vm_service.migrate(host=otypes.Host(name=vm_host), force=self.param('force_migrate'))
self._wait_for_UP(vm_service)
self.changed = True
elif self.param('migrate'):
if not self._module.check_mode:
vm_service.migrate(force=self.param('force_migrate'))
self._wait_for_UP(vm_service)
self.changed = True
return entity
def _wait_for_UP(self, vm_service):
wait(
service=vm_service,
condition=lambda vm: vm.status == otypes.VmStatus.UP,
wait=self.param('wait'),
timeout=self.param('timeout'),
)
def _wait_for_vm_disks(self, vm_service):
disks_service = self._connection.system_service().disks_service()
for da in vm_service.disk_attachments_service().list():
disk_service = disks_service.disk_service(da.disk.id)
wait(
service=disk_service,
condition=lambda disk: disk.status == otypes.DiskStatus.OK if disk.storage_type == otypes.DiskStorageType.IMAGE else True,
wait=self.param('wait'),
timeout=self.param('timeout'),
)
def wait_for_down(self, vm):
"""
This function will first wait for the status DOWN of the VM.
Then it will find the active snapshot and wait until it's state is OK for
stateless VMs and statless snaphot is removed.
"""
vm_service = self._service.vm_service(vm.id)
wait(
service=vm_service,
condition=lambda vm: vm.status == otypes.VmStatus.DOWN,
wait=self.param('wait'),
timeout=self.param('timeout'),
)
if vm.stateless:
snapshots_service = vm_service.snapshots_service()
snapshots = snapshots_service.list()
snap_active = [
snap for snap in snapshots
if snap.snapshot_type == otypes.SnapshotType.ACTIVE
][0]
snap_stateless = [
snap for snap in snapshots
if snap.snapshot_type == otypes.SnapshotType.STATELESS
]
# The stateless snapshot may already be removed:
if snap_stateless:
"""
We need to wait for Active snapshot ID, to be removed as it's current
stateless snapshot. Then we need to wait for staless snapshot ID to
be read, for use, because it will become active snapshot.
"""
wait(
service=snapshots_service.snapshot_service(snap_active.id),
condition=lambda snap: snap is None,
wait=self.param('wait'),
timeout=self.param('timeout'),
)
wait(
service=snapshots_service.snapshot_service(snap_stateless[0].id),
condition=lambda snap: snap.snapshot_status == otypes.SnapshotStatus.OK,
wait=self.param('wait'),
timeout=self.param('timeout'),
)
return True
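The `wait` helper used throughout this module polls a service until a condition holds or a timeout expires. A simplified standalone sketch of that polling pattern (`wait_sketch` and its `fetch` callable are hypothetical, not the module_utils implementation):

```python
import time

def wait_sketch(fetch, condition, timeout=180, poll_interval=3):
    """Poll fetch() until condition(entity) is truthy or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        entity = fetch()
        if condition(entity):
            return entity
        time.sleep(poll_interval)
    raise TimeoutError('condition not met within %s seconds' % timeout)
```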
def __attach_graphical_console(self, entity):
graphical_console = self.param('graphical_console')
if not graphical_console:
return False
vm_service = self._service.service(entity.id)
gcs_service = vm_service.graphics_consoles_service()
graphical_consoles = gcs_service.list()
# Remove all graphical consoles if there are any:
if bool(graphical_console.get('headless_mode')):
if not self._module.check_mode:
for gc in graphical_consoles:
gcs_service.console_service(gc.id).remove()
return len(graphical_consoles) > 0
# If there are no graphical consoles, add the requested ones:
protocol = graphical_console.get('protocol')
if isinstance(protocol, str):
protocol = [protocol]
current_protocols = [str(gc.protocol) for gc in graphical_consoles]
if not current_protocols:
if not self._module.check_mode:
for p in protocol:
gcs_service.add(
otypes.GraphicsConsole(
protocol=otypes.GraphicsType(p),
)
)
return True
# Update consoles:
if sorted(protocol) != sorted(current_protocols):
if not self._module.check_mode:
for gc in graphical_consoles:
gcs_service.console_service(gc.id).remove()
for p in protocol:
gcs_service.add(
otypes.GraphicsConsole(
protocol=otypes.GraphicsType(p),
)
)
return True
def __attach_disks(self, entity):
if not self.param('disks'):
return
vm_service = self._service.service(entity.id)
disks_service = self._connection.system_service().disks_service()
disk_attachments_service = vm_service.disk_attachments_service()
self._wait_for_vm_disks(vm_service)
for disk in self.param('disks'):
# If disk ID is not specified, find disk by name:
disk_id = disk.get('id')
if disk_id is None:
disk_id = getattr(
search_by_name(
service=disks_service,
name=disk.get('name')
),
'id',
None
)
# Attach disk to VM:
disk_attachment = disk_attachments_service.attachment_service(disk_id)
if get_entity(disk_attachment) is None:
if not self._module.check_mode:
disk_attachments_service.add(
otypes.DiskAttachment(
disk=otypes.Disk(
id=disk_id,
),
active=disk.get('activate', True),
interface=otypes.DiskInterface(
disk.get('interface', 'virtio')
),
bootable=disk.get('bootable', False),
)
)
self.changed = True
def __get_vnic_profile_id(self, nic):
"""
Return VNIC profile ID looked up by it's name, because there can be
more VNIC profiles with same name, other criteria of filter is cluster.
"""
vnics_service = self._connection.system_service().vnic_profiles_service()
clusters_service = self._connection.system_service().clusters_service()
cluster = search_by_name(clusters_service, self.param('cluster'))
profiles = [
profile for profile in vnics_service.list()
if profile.name == nic.get('profile_name')
]
cluster_networks = [
net.id for net in self._connection.follow_link(cluster.networks)
]
try:
return next(
profile.id for profile in profiles
if profile.network.id in cluster_networks
)
except StopIteration:
raise Exception(
"Profile '%s' was not found in cluster '%s'" % (
nic.get('profile_name'),
self.param('cluster')
)
)
def __attach_numa_nodes(self, entity):
updated = False
numa_nodes_service = self._service.service(entity.id).numa_nodes_service()
if len(self.param('numa_nodes')) > 0:
# Remove all existing virtual numa nodes before adding new ones
existed_numa_nodes = numa_nodes_service.list()
existed_numa_nodes.sort(key=lambda node: node.index, reverse=True)
for current_numa_node in existed_numa_nodes:
numa_nodes_service.node_service(current_numa_node.id).remove()
updated = True
for numa_node in self.param('numa_nodes'):
if numa_node is None or numa_node.get('index') is None or numa_node.get('cores') is None or numa_node.get('memory') is None:
continue
numa_nodes_service.add(
otypes.VirtualNumaNode(
index=numa_node.get('index'),
memory=numa_node.get('memory'),
cpu=otypes.Cpu(
cores=[
otypes.Core(
index=core
) for core in numa_node.get('cores')
],
),
numa_node_pins=[
otypes.NumaNodePin(
index=pin
) for pin in numa_node.get('numa_node_pins')
] if numa_node.get('numa_node_pins') is not None else None,
)
)
updated = True
return updated
def __attach_watchdog(self, entity):
watchdogs_service = self._service.service(entity.id).watchdogs_service()
watchdog = self.param('watchdog')
if watchdog is not None:
current_watchdog = next(iter(watchdogs_service.list()), None)
if watchdog.get('model') is None and current_watchdog:
watchdogs_service.watchdog_service(current_watchdog.id).remove()
return True
elif watchdog.get('model') is not None and current_watchdog is None:
watchdogs_service.add(
otypes.Watchdog(
model=otypes.WatchdogModel(watchdog.get('model').lower()),
action=otypes.WatchdogAction(watchdog.get('action')),
)
)
return True
elif current_watchdog is not None:
if (
str(current_watchdog.model).lower() != watchdog.get('model').lower() or
str(current_watchdog.action).lower() != watchdog.get('action').lower()
):
watchdogs_service.watchdog_service(current_watchdog.id).update(
otypes.Watchdog(
model=otypes.WatchdogModel(watchdog.get('model')),
action=otypes.WatchdogAction(watchdog.get('action')),
)
)
return True
return False
def __attach_nics(self, entity):
# Attach NICs to VM, if specified:
nics_service = self._service.service(entity.id).nics_service()
for nic in self.param('nics'):
if search_by_name(nics_service, nic.get('name')) is None:
if not self._module.check_mode:
nics_service.add(
otypes.Nic(
name=nic.get('name'),
interface=otypes.NicInterface(
nic.get('interface', 'virtio')
),
vnic_profile=otypes.VnicProfile(
id=self.__get_vnic_profile_id(nic),
) if nic.get('profile_name') else None,
mac=otypes.Mac(
address=nic.get('mac_address')
) if nic.get('mac_address') else None,
)
)
self.changed = True
def get_initialization(self):
if self._initialization is not None:
return self._initialization
sysprep = self.param('sysprep')
cloud_init = self.param('cloud_init')
cloud_init_nics = self.param('cloud_init_nics') or []
if cloud_init is not None:
cloud_init_nics.append(cloud_init)
if cloud_init or cloud_init_nics:
self._initialization = otypes.Initialization(
nic_configurations=[
otypes.NicConfiguration(
boot_protocol=otypes.BootProtocol(
nic.pop('nic_boot_protocol').lower()
) if nic.get('nic_boot_protocol') else None,
ipv6_boot_protocol=otypes.BootProtocol(
nic.pop('nic_boot_protocol_v6').lower()
) if nic.get('nic_boot_protocol_v6') else None,
name=nic.pop('nic_name', None),
on_boot=nic.pop('nic_on_boot', None),
ip=otypes.Ip(
address=nic.pop('nic_ip_address', None),
netmask=nic.pop('nic_netmask', None),
gateway=nic.pop('nic_gateway', None),
version=otypes.IpVersion('v4')
) if (
nic.get('nic_gateway') is not None or
nic.get('nic_netmask') is not None or
nic.get('nic_ip_address') is not None
) else None,
ipv6=otypes.Ip(
address=nic.pop('nic_ip_address_v6', None),
netmask=nic.pop('nic_netmask_v6', None),
gateway=nic.pop('nic_gateway_v6', None),
version=otypes.IpVersion('v6')
) if (
nic.get('nic_gateway_v6') is not None or
nic.get('nic_netmask_v6') is not None or
nic.get('nic_ip_address_v6') is not None
) else None,
)
for nic in cloud_init_nics
if (
nic.get('nic_boot_protocol_v6') is not None or
nic.get('nic_ip_address_v6') is not None or
nic.get('nic_gateway_v6') is not None or
nic.get('nic_netmask_v6') is not None or
nic.get('nic_gateway') is not None or
nic.get('nic_netmask') is not None or
nic.get('nic_ip_address') is not None or
nic.get('nic_boot_protocol') is not None or
nic.get('nic_on_boot') is not None
)
] if cloud_init_nics else None,
**cloud_init
)
elif sysprep:
self._initialization = otypes.Initialization(
**sysprep
)
return self._initialization
def __attach_host_devices(self, entity):
vm_service = self._service.service(entity.id)
host_devices_service = vm_service.host_devices_service()
host_devices = self.param('host_devices')
updated = False
if host_devices:
device_names = [dev.name for dev in host_devices_service.list()]
for device in host_devices:
device_name = device.get('name')
state = device.get('state', 'present')
if state == 'absent' and device_name in device_names:
updated = True
if not self._module.check_mode:
device_id = get_id_by_name(host_devices_service, device.get('name'))
host_devices_service.device_service(device_id).remove()
elif state == 'present' and device_name not in device_names:
updated = True
if not self._module.check_mode:
host_devices_service.add(
otypes.HostDevice(
name=device.get('name'),
)
)
return updated
def _get_role_mappings(module):
roleMappings = list()
for roleMapping in module.params['role_mappings']:
roleMappings.append(
otypes.RegistrationRoleMapping(
from_=otypes.Role(
name=roleMapping['source_name'],
) if roleMapping['source_name'] else None,
to=otypes.Role(
name=roleMapping['dest_name'],
) if roleMapping['dest_name'] else None,
)
)
return roleMappings
def _get_affinity_group_mappings(module):
affinityGroupMappings = list()
for affinityGroupMapping in module.params['affinity_group_mappings']:
affinityGroupMappings.append(
otypes.RegistrationAffinityGroupMapping(
from_=otypes.AffinityGroup(
name=affinityGroupMapping['source_name'],
) if affinityGroupMapping['source_name'] else None,
to=otypes.AffinityGroup(
name=affinityGroupMapping['dest_name'],
) if affinityGroupMapping['dest_name'] else None,
)
)
return affinityGroupMappings
def _get_affinity_label_mappings(module):
affinityLabelMappings = list()
for affinityLabelMapping in module.params['affinity_label_mappings']:
affinityLabelMappings.append(
otypes.RegistrationAffinityLabelMapping(
from_=otypes.AffinityLabel(
name=affinityLabelMapping['source_name'],
) if affinityLabelMapping['source_name'] else None,
to=otypes.AffinityLabel(
name=affinityLabelMapping['dest_name'],
) if affinityLabelMapping['dest_name'] else None,
)
)
return affinityLabelMappings
def _get_domain_mappings(module):
domainMappings = list()
for domainMapping in module.params['domain_mappings']:
domainMappings.append(
otypes.RegistrationDomainMapping(
from_=otypes.Domain(
name=domainMapping['source_name'],
) if domainMapping['source_name'] else None,
to=otypes.Domain(
name=domainMapping['dest_name'],
) if domainMapping['dest_name'] else None,
)
)
return domainMappings
def _get_lun_mappings(module):
lunMappings = list()
for lunMapping in module.params['lun_mappings']:
lunMappings.append(
otypes.RegistrationLunMapping(
from_=otypes.Disk(
lun_storage=otypes.HostStorage(
type=otypes.StorageType(lunMapping['source_storage_type'])
if (lunMapping['source_storage_type'] in
['iscsi', 'fcp']) else None,
logical_units=[
otypes.LogicalUnit(
id=lunMapping['source_logical_unit_id'],
)
],
),
) if lunMapping['source_logical_unit_id'] else None,
to=otypes.Disk(
lun_storage=otypes.HostStorage(
type=otypes.StorageType(lunMapping['dest_storage_type'])
if (lunMapping['dest_storage_type'] in
['iscsi', 'fcp']) else None,
logical_units=[
otypes.LogicalUnit(
id=lunMapping['dest_logical_unit_id'],
port=lunMapping['dest_logical_unit_port'],
portal=lunMapping['dest_logical_unit_portal'],
address=lunMapping['dest_logical_unit_address'],
target=lunMapping['dest_logical_unit_target'],
password=lunMapping['dest_logical_unit_password'],
username=lunMapping['dest_logical_unit_username'],
)
],
),
) if lunMapping['dest_logical_unit_id'] else None,
)
)
return lunMappings
def _get_cluster_mappings(module):
clusterMappings = list()
for clusterMapping in module.params['cluster_mappings']:
clusterMappings.append(
otypes.RegistrationClusterMapping(
from_=otypes.Cluster(
name=clusterMapping['source_name'],
),
to=otypes.Cluster(
name=clusterMapping['dest_name'],
) if clusterMapping['dest_name'] else None,
)
)
return clusterMappings
def _get_vnic_profile_mappings(module):
vnicProfileMappings = list()
for vnicProfileMapping in module.params['vnic_profile_mappings']:
vnicProfileMappings.append(
otypes.VnicProfileMapping(
source_network_name=vnicProfileMapping['source_network_name'],
source_network_profile_name=vnicProfileMapping['source_profile_name'],
target_vnic_profile=otypes.VnicProfile(
id=vnicProfileMapping['target_profile_id'],
) if vnicProfileMapping['target_profile_id'] else None,
)
)
return vnicProfileMappings
def import_vm(module, connection):
vms_service = connection.system_service().vms_service()
if search_by_name(vms_service, module.params['name']) is not None:
return False
events_service = connection.system_service().events_service()
last_event = events_service.list(max=1)[0]
external_type = [
tmp for tmp in ['kvm', 'xen', 'vmware']
if module.params[tmp] is not None
][0]
external_vm = module.params[external_type]
imports_service = connection.system_service().external_vm_imports_service()
imported_vm = imports_service.add(
otypes.ExternalVmImport(
vm=otypes.Vm(
name=module.params['name']
),
name=external_vm.get('name'),
username=external_vm.get('username', 'test'),
password=external_vm.get('password', 'test'),
provider=otypes.ExternalVmProviderType(external_type),
url=external_vm.get('url'),
cluster=otypes.Cluster(
name=module.params['cluster'],
) if module.params['cluster'] else None,
storage_domain=otypes.StorageDomain(
name=external_vm.get('storage_domain'),
) if external_vm.get('storage_domain') else None,
sparse=external_vm.get('sparse', True),
host=otypes.Host(
name=module.params['host'],
) if module.params['host'] else None,
)
)
# Wait until an event with code 1152 appears for our VM:
vms_service = connection.system_service().vms_service()
wait(
service=vms_service.vm_service(imported_vm.vm.id),
condition=lambda vm: len([
event
for event in events_service.list(
from_=int(last_event.id),
search='type=1152 and vm.id=%s' % vm.id,
)
]) > 0 if vm is not None else False,
fail_condition=lambda vm: vm is None,
timeout=module.params['timeout'],
poll_interval=module.params['poll_interval'],
)
return True
def control_state(vm, vms_service, module):
if vm is None:
return
force = module.params['force']
state = module.params['state']
vm_service = vms_service.vm_service(vm.id)
if vm.status == otypes.VmStatus.IMAGE_LOCKED:
wait(
service=vm_service,
condition=lambda vm: vm.status == otypes.VmStatus.DOWN,
)
elif vm.status == otypes.VmStatus.SAVING_STATE:
# Result state is SUSPENDED, we should wait to be suspended:
wait(
service=vm_service,
condition=lambda vm: vm.status == otypes.VmStatus.SUSPENDED,
)
elif (
vm.status == otypes.VmStatus.UNASSIGNED or
vm.status == otypes.VmStatus.UNKNOWN
):
# Invalid states:
module.fail_json(msg="Not possible to control VM, if it's in '{0}' status".format(vm.status))
elif vm.status == otypes.VmStatus.POWERING_DOWN:
if (force and state == 'stopped') or state == 'absent':
vm_service.stop()
wait(
service=vm_service,
condition=lambda vm: vm.status == otypes.VmStatus.DOWN,
)
else:
# If VM is powering down, wait to be DOWN or UP.
# VM can end in UP state in case there is no GA
# or ACPI on the VM or shutdown operation crashed:
wait(
service=vm_service,
condition=lambda vm: vm.status in [otypes.VmStatus.DOWN, otypes.VmStatus.UP],
)
def main():
argument_spec = ovirt_full_argument_spec(
state=dict(type='str', default='present', choices=['absent', 'next_run', 'present', 'registered', 'running', 'stopped', 'suspended', 'exported']),
name=dict(type='str'),
id=dict(type='str'),
cluster=dict(type='str'),
allow_partial_import=dict(type='bool'),
template=dict(type='str'),
template_cluster=dict(type='str'),
template_version=dict(type='int'),
use_latest_template_version=dict(type='bool'),
storage_domain=dict(type='str'),
disk_format=dict(type='str', default='cow', choices=['cow', 'raw']),
disks=dict(type='list', default=[]),
memory=dict(type='str'),
memory_guaranteed=dict(type='str'),
memory_max=dict(type='str'),
cpu_sockets=dict(type='int'),
cpu_cores=dict(type='int'),
cpu_shares=dict(type='int'),
cpu_threads=dict(type='int'),
type=dict(type='str', choices=['server', 'desktop', 'high_performance']),
operating_system=dict(type='str'),
cd_iso=dict(type='str'),
boot_devices=dict(type='list', choices=['cdrom', 'hd', 'network']),
vnic_profile_mappings=dict(default=[], type='list'),
cluster_mappings=dict(default=[], type='list'),
role_mappings=dict(default=[], type='list'),
affinity_group_mappings=dict(default=[], type='list'),
affinity_label_mappings=dict(default=[], type='list'),
lun_mappings=dict(default=[], type='list'),
domain_mappings=dict(default=[], type='list'),
reassign_bad_macs=dict(default=None, type='bool'),
boot_menu=dict(type='bool'),
serial_console=dict(type='bool'),
usb_support=dict(type='bool'),
sso=dict(type='bool'),
quota_id=dict(type='str'),
high_availability=dict(type='bool'),
high_availability_priority=dict(type='int'),
lease=dict(type='str'),
stateless=dict(type='bool'),
delete_protected=dict(type='bool'),
force=dict(type='bool', default=False),
nics=dict(type='list', default=[]),
cloud_init=dict(type='dict'),
cloud_init_nics=dict(type='list', default=[]),
cloud_init_persist=dict(type='bool', default=False, aliases=['sysprep_persist']),
kernel_params_persist=dict(type='bool', default=False),
sysprep=dict(type='dict'),
host=dict(type='str'),
clone=dict(type='bool', default=False),
clone_permissions=dict(type='bool', default=False),
kernel_path=dict(type='str'),
initrd_path=dict(type='str'),
kernel_params=dict(type='str'),
instance_type=dict(type='str'),
description=dict(type='str'),
comment=dict(type='str'),
timezone=dict(type='str'),
serial_policy=dict(type='str', choices=['vm', 'host', 'custom']),
serial_policy_value=dict(type='str'),
vmware=dict(type='dict'),
xen=dict(type='dict'),
kvm=dict(type='dict'),
cpu_mode=dict(type='str'),
placement_policy=dict(type='str'),
custom_compatibility_version=dict(type='str'),
ticket=dict(type='bool', default=None),
cpu_pinning=dict(type='list'),
soundcard_enabled=dict(type='bool', default=None),
smartcard_enabled=dict(type='bool', default=None),
io_threads=dict(type='int', default=None),
ballooning_enabled=dict(type='bool', default=None),
rng_device=dict(type='str'),
numa_tune_mode=dict(type='str', choices=['interleave', 'preferred', 'strict']),
numa_nodes=dict(type='list', default=[]),
custom_properties=dict(type='list'),
watchdog=dict(type='dict'),
host_devices=dict(type='list'),
graphical_console=dict(type='dict'),
exclusive=dict(type='bool'),
export_domain=dict(default=None),
export_ova=dict(type='dict'),
force_migrate=dict(type='bool'),
migrate=dict(type='bool', default=None),
next_run=dict(type='bool'),
snapshot_name=dict(type='str'),
snapshot_vm=dict(type='str'),
)
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True,
required_one_of=[['id', 'name']],
required_if=[
('state', 'registered', ['storage_domain']),
],
required_together=[['snapshot_name', 'snapshot_vm']]
)
check_sdk(module)
check_params(module)
try:
state = module.params['state']
auth = module.params.pop('auth')
connection = create_connection(auth)
vms_service = connection.system_service().vms_service()
vms_module = VmsModule(
connection=connection,
module=module,
service=vms_service,
)
vm = vms_module.search_entity(list_params={'all_content': True})
# Boolean variable to mark if vm existed before module was executed
vm_existed = True if vm else False
control_state(vm, vms_service, module)
if state in ('present', 'running', 'next_run'):
if module.params['xen'] or module.params['kvm'] or module.params['vmware']:
vms_module.changed = import_vm(module, connection)
# In case of wait=false and state=running, wait for the VM to be created.
# In case the VM doesn't exist, wait for the VM to reach DOWN state,
# otherwise don't wait for any state, just update VM:
ret = vms_module.create(
entity=vm,
result_state=otypes.VmStatus.DOWN if vm is None else None,
update_params={'next_run': module.params['next_run']} if module.params['next_run'] is not None else None,
clone=module.params['clone'],
clone_permissions=module.params['clone_permissions'],
_wait=True if not module.params['wait'] and state == 'running' else module.params['wait'],
)
# If VM is going to be created and check_mode is on, return now:
if module.check_mode and ret.get('id') is None:
module.exit_json(**ret)
vms_module.post_present(ret['id'])
# Run the VM if it was just created, else don't run it:
if state == 'running':
def kernel_persist_check():
return (module.params.get('kernel_params') or
module.params.get('initrd_path') or
module.params.get('kernel_path')
and not module.params.get('cloud_init_persist'))
initialization = vms_module.get_initialization()
ret = vms_module.action(
action='start',
post_action=vms_module._post_start_action,
action_condition=lambda vm: (
vm.status not in [
otypes.VmStatus.MIGRATING,
otypes.VmStatus.POWERING_UP,
otypes.VmStatus.REBOOT_IN_PROGRESS,
otypes.VmStatus.WAIT_FOR_LAUNCH,
otypes.VmStatus.UP,
otypes.VmStatus.RESTORING_STATE,
]
),
wait_condition=lambda vm: vm.status == otypes.VmStatus.UP,
# Start action kwargs:
use_cloud_init=True if not module.params.get('cloud_init_persist') and module.params.get('cloud_init') else None,
use_sysprep=True if not module.params.get('cloud_init_persist') and module.params.get('sysprep') else None,
vm=otypes.Vm(
placement_policy=otypes.VmPlacementPolicy(
hosts=[otypes.Host(name=module.params['host'])]
) if module.params['host'] else None,
initialization=initialization,
os=otypes.OperatingSystem(
cmdline=module.params.get('kernel_params'),
initrd=module.params.get('initrd_path'),
kernel=module.params.get('kernel_path'),
) if (kernel_persist_check()) else None,
) if (
kernel_persist_check() or
module.params.get('host') or
initialization is not None
and not module.params.get('cloud_init_persist')
) else None,
)
if module.params['ticket']:
vm_service = vms_service.vm_service(ret['id'])
graphics_consoles_service = vm_service.graphics_consoles_service()
graphics_console = graphics_consoles_service.list()[0]
console_service = graphics_consoles_service.console_service(graphics_console.id)
ticket = console_service.remote_viewer_connection_file()
if ticket:
ret['vm']['remote_vv_file'] = ticket
if state == 'next_run':
# Apply next run configuration, if needed:
vm = vms_service.vm_service(ret['id']).get()
if vm.next_run_configuration_exists:
ret = vms_module.action(
action='reboot',
entity=vm,
action_condition=lambda vm: vm.status == otypes.VmStatus.UP,
wait_condition=lambda vm: vm.status == otypes.VmStatus.UP,
)
# Allow migrating the VM when state is present.
if vm_existed:
vms_module._migrate_vm(vm)
ret['changed'] = vms_module.changed
elif state == 'stopped':
if module.params['xen'] or module.params['kvm'] or module.params['vmware']:
vms_module.changed = import_vm(module, connection)
ret = vms_module.create(
entity=vm,
result_state=otypes.VmStatus.DOWN if vm is None else None,
clone=module.params['clone'],
clone_permissions=module.params['clone_permissions'],
)
if module.params['force']:
ret = vms_module.action(
action='stop',
action_condition=lambda vm: vm.status != otypes.VmStatus.DOWN,
wait_condition=vms_module.wait_for_down,
)
else:
ret = vms_module.action(
action='shutdown',
pre_action=vms_module._pre_shutdown_action,
action_condition=lambda vm: vm.status != otypes.VmStatus.DOWN,
wait_condition=vms_module.wait_for_down,
)
vms_module.post_present(ret['id'])
elif state == 'suspended':
ret = vms_module.create(
entity=vm,
result_state=otypes.VmStatus.DOWN if vm is None else None,
clone=module.params['clone'],
clone_permissions=module.params['clone_permissions'],
)
vms_module.post_present(ret['id'])
ret = vms_module.action(
action='suspend',
pre_action=vms_module._pre_suspend_action,
action_condition=lambda vm: vm.status != otypes.VmStatus.SUSPENDED,
wait_condition=lambda vm: vm.status == otypes.VmStatus.SUSPENDED,
)
elif state == 'absent':
ret = vms_module.remove()
elif state == 'registered':
storage_domains_service = connection.system_service().storage_domains_service()
# Find the storage domain with unregistered VM:
sd_id = get_id_by_name(storage_domains_service, module.params['storage_domain'])
storage_domain_service = storage_domains_service.storage_domain_service(sd_id)
vms_service = storage_domain_service.vms_service()
# Find the unregistered VM we want to register:
vms = vms_service.list(unregistered=True)
vm = next(
(vm for vm in vms if (vm.id == module.params['id'] or vm.name == module.params['name'])),
None
)
changed = False
if vm is None:
vm = vms_module.search_entity()
if vm is None:
raise ValueError(
"VM '%s(%s)' wasn't found." % (module.params['name'], module.params['id'])
)
else:
# Register the vm into the system:
changed = True
vm_service = vms_service.vm_service(vm.id)
vm_service.register(
allow_partial_import=module.params['allow_partial_import'],
cluster=otypes.Cluster(
name=module.params['cluster']
) if module.params['cluster'] else None,
vnic_profile_mappings=_get_vnic_profile_mappings(module)
if module.params['vnic_profile_mappings'] else None,
reassign_bad_macs=module.params['reassign_bad_macs']
if module.params['reassign_bad_macs'] is not None else None,
registration_configuration=otypes.RegistrationConfiguration(
cluster_mappings=_get_cluster_mappings(module),
role_mappings=_get_role_mappings(module),
domain_mappings=_get_domain_mappings(module),
lun_mappings=_get_lun_mappings(module),
affinity_group_mappings=_get_affinity_group_mappings(module),
affinity_label_mappings=_get_affinity_label_mappings(module),
) if (module.params['cluster_mappings']
or module.params['role_mappings']
or module.params['domain_mappings']
or module.params['lun_mappings']
or module.params['affinity_group_mappings']
or module.params['affinity_label_mappings']) else None
)
if module.params['wait']:
vm = vms_module.wait_for_import()
else:
# Fetch vm to initialize return.
vm = vm_service.get()
ret = {
'changed': changed,
'id': vm.id,
'vm': get_dict_of_struct(vm)
}
elif state == 'exported':
if module.params['export_domain']:
export_service = vms_module._get_export_domain_service()
export_vm = search_by_attributes(export_service.vms_service(), id=vm.id)
ret = vms_module.action(
entity=vm,
action='export',
action_condition=lambda t: export_vm is None or module.params['exclusive'],
wait_condition=lambda t: t is not None,
post_action=vms_module.post_export_action,
storage_domain=otypes.StorageDomain(id=export_service.get().id),
exclusive=module.params['exclusive'],
)
elif module.params['export_ova']:
export_vm = module.params['export_ova']
ret = vms_module.action(
entity=vm,
action='export_to_path_on_host',
host=otypes.Host(name=export_vm.get('host')),
directory=export_vm.get('directory'),
filename=export_vm.get('filename'),
)
module.exit_json(**ret)
except Exception as e:
module.fail_json(msg=str(e), exception=traceback.format_exc())
finally:
connection.close(logout=auth.get('token') is None)
if __name__ == "__main__":
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 60,399 |
vmware: avoid unnecessary copy call on internal objects
|
##### SUMMARY
Two vmware modules use `copy()` to duplicate an internal instance of a pyVmomi object, so that the object can be modified during iteration.
- https://github.com/ansible/ansible/blob/622a493ae03bd5e5cf517d336fc426e9d12208c7/lib/ansible/modules/cloud/vmware/vmware_guest.py#L788
- https://github.com/ansible/ansible/blob/622a493ae03bd5e5cf517d336fc426e9d12208c7/lib/ansible/modules/cloud/vmware/vmware_datastore_facts.py#L220
As explained here, https://github.com/ansible/ansible/pull/60196/files#r312643761, this is not the best approach and it may impact performance.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
devel
```
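The pattern being flagged can be reduced to a few lines. A minimal sketch with plain dicts (stand-ins for the pyVmomi objects — all names here are illustrative, not the modules' actual variables), showing why the `copy()` exists and the one-pass alternative:

```python
def make_objects():
    # Stand-in for a mapping of managed objects to their datacenter names.
    return {"vm1": "dc1", "vm2": "dc2", "vm3": "dc1"}

# Mutating a dict while iterating over it raises RuntimeError on Python 3,
# which is why the modules iterate over a duplicate instead.
objs = make_objects()
try:
    for key in objs:
        if objs[key] != "dc1":
            objs.pop(key)
except RuntimeError:
    pass  # "dictionary changed size during iteration"

# Current approach in the modules: copy() the whole mapping, pop from the copy.
objs = make_objects()
tmp = objs.copy()
for key, dc in objs.items():
    if dc != "dc1":
        tmp.pop(key, None)

# Alternative: build the filtered mapping in a single pass, no duplicate needed.
filtered = {key: dc for key, dc in make_objects().items() if dc == "dc1"}
```

The comprehension avoids duplicating the full mapping up front and then deleting from it.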
|
https://github.com/ansible/ansible/issues/60399
|
https://github.com/ansible/ansible/pull/60476
|
fa783c027bff9b5b9e25c8619faddb9a0fcc02fc
|
df2a09e998205df30306de33ca3ce1dd9cae1cb5
| 2019-08-12T08:30:37Z |
python
| 2019-08-13T13:27:22Z |
lib/ansible/modules/cloud/vmware/vmware_datastore_facts.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2017, Tim Rightnour <[email protected]>
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = '''
---
module: vmware_datastore_facts
short_description: Gather facts about datastores available in given vCenter
description:
- This module can be used to gather facts about datastores in VMware infrastructure.
- All values and VMware object names are case sensitive.
version_added: 2.5
author:
- Tim Rightnour (@garbled1)
notes:
- Tested on vSphere 5.5, 6.0 and 6.5
requirements:
- "python >= 2.6"
- PyVmomi
options:
name:
description:
- Name of the datastore to match.
- If set, facts of specific datastores are returned.
required: False
type: str
datacenter:
description:
- Datacenter to search for datastores.
- This parameter is required, if C(cluster) is not supplied.
required: False
aliases: ['datacenter_name']
type: str
cluster:
description:
- Cluster to search for datastores.
- If set, facts of datastores belonging to this cluster will be returned.
- This parameter is required, if C(datacenter) is not supplied.
required: False
type: str
gather_nfs_mount_info:
description:
- Gather mount information of NFS datastores.
- Disabled by default because this slows down execution if you have a lot of datastores.
type: bool
default: false
version_added: 2.8
gather_vmfs_mount_info:
description:
- Gather mount information of VMFS datastores.
- Disabled by default because this slows down execution if you have a lot of datastores.
type: bool
default: false
version_added: 2.8
extends_documentation_fragment: vmware.documentation
'''
EXAMPLES = '''
- name: Gather facts from standalone ESXi server having datacenter as 'ha-datacenter'
vmware_datastore_facts:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
datacenter_name: '{{ datacenter_name }}'
validate_certs: no
delegate_to: localhost
register: facts
- name: Gather facts from datacenter about specific datastore
vmware_datastore_facts:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
datacenter_name: '{{ datacenter_name }}'
name: datastore1
delegate_to: localhost
register: facts
'''
RETURN = """
datastores:
description: metadata about the available datastores
returned: always
type: list
sample: [
{
"accessible": false,
"capacity": 42681237504,
"datastore_cluster": "datacluster0",
"freeSpace": 39638269952,
"maintenanceMode": "normal",
"multipleHostAccess": false,
"name": "datastore2",
"provisioned": 12289211488,
"type": "VMFS",
"uncommitted": 9246243936,
"url": "ds:///vmfs/volumes/5a69b18a-c03cd88c-36ae-5254001249ce/",
"vmfs_blockSize": 1024,
"vmfs_uuid": "5a69b18a-c03cd88c-36ae-5254001249ce",
"vmfs_version": "6.81"
},
{
"accessible": true,
"capacity": 5497558138880,
"datastore_cluster": "datacluster0",
"freeSpace": 4279000641536,
"maintenanceMode": "normal",
"multipleHostAccess": true,
"name": "datastore3",
"nfs_path": "/vol/datastore3",
"nfs_server": "nfs_server1",
"provisioned": 1708109410304,
"type": "NFS",
"uncommitted": 489551912960,
"url": "ds:///vmfs/volumes/420b3e73-67070776/"
},
]
"""
try:
from pyVmomi import vim
except ImportError:
pass
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.vmware import (PyVmomi, vmware_argument_spec, get_all_objs,
find_cluster_by_name, get_parent_datacenter)
class VMwareHostDatastore(PyVmomi):
""" This class populates the datastore list """
def __init__(self, module):
super(VMwareHostDatastore, self).__init__(module)
self.gather_nfs_mount_info = self.module.params['gather_nfs_mount_info']
self.gather_vmfs_mount_info = self.module.params['gather_vmfs_mount_info']
def check_datastore_host(self, esxi_host, datastore):
""" Get all datastores of specified ESXi host """
esxi = self.find_hostsystem_by_name(esxi_host)
if esxi is None:
self.module.fail_json(msg="Failed to find ESXi hostname %s " % esxi_host)
storage_system = esxi.configManager.storageSystem
host_file_sys_vol_mount_info = storage_system.fileSystemVolumeInfo.mountInfo
for host_mount_info in host_file_sys_vol_mount_info:
if host_mount_info.volume.name == datastore:
return host_mount_info
return None
def build_datastore_list(self, datastore_list):
""" Build list with datastores """
datastores = list()
for datastore in datastore_list:
summary = datastore.summary
datastore_summary = dict()
datastore_summary['accessible'] = summary.accessible
datastore_summary['capacity'] = summary.capacity
datastore_summary['name'] = summary.name
datastore_summary['freeSpace'] = summary.freeSpace
datastore_summary['maintenanceMode'] = summary.maintenanceMode
datastore_summary['multipleHostAccess'] = summary.multipleHostAccess
datastore_summary['type'] = summary.type
if self.gather_nfs_mount_info or self.gather_vmfs_mount_info:
if self.gather_nfs_mount_info and summary.type.startswith("NFS"):
# get mount info from the first ESXi host attached to this NFS datastore
host_mount_info = self.check_datastore_host(summary.datastore.host[0].key.name, summary.name)
datastore_summary['nfs_server'] = host_mount_info.volume.remoteHost
datastore_summary['nfs_path'] = host_mount_info.volume.remotePath
if self.gather_vmfs_mount_info and summary.type == "VMFS":
# get mount info from the first ESXi host attached to this VMFS datastore
host_mount_info = self.check_datastore_host(summary.datastore.host[0].key.name, summary.name)
datastore_summary['vmfs_blockSize'] = host_mount_info.volume.blockSize
datastore_summary['vmfs_version'] = host_mount_info.volume.version
datastore_summary['vmfs_uuid'] = host_mount_info.volume.uuid
# vcsim does not return uncommitted
if not summary.uncommitted:
summary.uncommitted = 0
datastore_summary['uncommitted'] = summary.uncommitted
datastore_summary['url'] = summary.url
# Calculated values
datastore_summary['provisioned'] = summary.capacity - summary.freeSpace + summary.uncommitted
datastore_summary['datastore_cluster'] = 'N/A'
if isinstance(datastore.parent, vim.StoragePod):
datastore_summary['datastore_cluster'] = datastore.parent.name
if self.module.params['name']:
if datastore_summary['name'] == self.module.params['name']:
datastores.extend([datastore_summary])
else:
datastores.extend([datastore_summary])
return datastores
class PyVmomiCache(object):
""" This class caches references to objects which are requested multiples times but not modified """
def __init__(self, content, dc_name=None):
self.content = content
self.dc_name = dc_name
self.clusters = {}
self.parent_datacenters = {}
def get_all_objs(self, content, types, confine_to_datacenter=True):
""" Wrapper around get_all_objs to set datacenter context """
objects = get_all_objs(content, types)
if confine_to_datacenter:
if hasattr(objects, 'items'):
# resource pools come back as a dictionary
tmpobjs = objects.copy()
for k, v in objects.items():
parent_dc = get_parent_datacenter(k)
if parent_dc.name != self.dc_name:
tmpobjs.pop(k, None)
objects = tmpobjs
else:
# everything else should be a list
objects = [x for x in objects if get_parent_datacenter(x).name == self.dc_name]
return objects
class PyVmomiHelper(PyVmomi):
""" This class gets datastores """
def __init__(self, module):
super(PyVmomiHelper, self).__init__(module)
self.cache = PyVmomiCache(self.content, dc_name=self.params['datacenter'])
def lookup_datastore(self, confine_to_datacenter):
""" Get datastore(s) per ESXi host or vCenter server """
datastores = self.cache.get_all_objs(self.content, [vim.Datastore], confine_to_datacenter)
return datastores
def lookup_datastore_by_cluster(self):
""" Get datastore(s) per cluster """
cluster = find_cluster_by_name(self.content, self.params['cluster'])
if not cluster:
self.module.fail_json(msg='Failed to find cluster "%(cluster)s"' % self.params)
c_dc = cluster.datastore
return c_dc
def main():
""" Main """
argument_spec = vmware_argument_spec()
argument_spec.update(
name=dict(type='str'),
datacenter=dict(type='str', aliases=['datacenter_name']),
cluster=dict(type='str'),
gather_nfs_mount_info=dict(type='bool', default=False),
gather_vmfs_mount_info=dict(type='bool', default=False)
)
module = AnsibleModule(argument_spec=argument_spec,
supports_check_mode=True
)
result = dict(changed=False)
pyv = PyVmomiHelper(module)
if module.params['cluster']:
dxs = pyv.lookup_datastore_by_cluster()
elif module.params['datacenter']:
dxs = pyv.lookup_datastore(confine_to_datacenter=True)
else:
dxs = pyv.lookup_datastore(confine_to_datacenter=False)
vmware_host_datastore = VMwareHostDatastore(module)
datastores = vmware_host_datastore.build_datastore_list(dxs)
result['datastores'] = datastores
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 60,399 |
vmware: avoid unnecessary copy call on internal objects
|
##### SUMMARY
Two vmware modules use `copy()` to duplicate an internal instance of a pyVmomi object, so that the object can be modified during iteration.
- https://github.com/ansible/ansible/blob/622a493ae03bd5e5cf517d336fc426e9d12208c7/lib/ansible/modules/cloud/vmware/vmware_guest.py#L788
- https://github.com/ansible/ansible/blob/622a493ae03bd5e5cf517d336fc426e9d12208c7/lib/ansible/modules/cloud/vmware/vmware_datastore_facts.py#L220
As explained here, https://github.com/ansible/ansible/pull/60196/files#r312643761, this is not the best approach and it may impact performance.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
devel
```
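For the `vmware_guest` occurrence the same idea applies to sequences (the code at the linked line is not reproduced in this record, so the data and field names below are purely illustrative):

```python
# Hypothetical data; "keep" is an illustrative flag, not a real module field.
nics = [{"name": "nic1", "keep": True}, {"name": "nic2", "keep": False}]

# Copy-then-remove pattern the issue flags: duplicate the list so items can
# be removed while the original is being iterated.
kept = list(nics)
for nic in nics:
    if not nic["keep"]:
        kept.remove(nic)

# One-pass alternative: a comprehension needs no up-front duplicate.
kept2 = [nic for nic in nics if nic["keep"]]
```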
|
https://github.com/ansible/ansible/issues/60399
|
https://github.com/ansible/ansible/pull/60476
|
fa783c027bff9b5b9e25c8619faddb9a0fcc02fc
|
df2a09e998205df30306de33ca3ce1dd9cae1cb5
| 2019-08-12T08:30:37Z |
python
| 2019-08-13T13:27:22Z |
lib/ansible/modules/cloud/vmware/vmware_guest.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# This module is also sponsored by E.T.A.I. (www.etai.fr)
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: vmware_guest
short_description: Manages virtual machines in vCenter
description: >
This module can be used to create new virtual machines from templates or other virtual machines,
manage power state of virtual machine such as power on, power off, suspend, shutdown, reboot, restart etc.,
modify various virtual machine components like network, disk, customization etc.,
rename a virtual machine and remove a virtual machine with associated components.
version_added: '2.2'
author:
- Loic Blot (@nerzhul) <[email protected]>
- Philippe Dellaert (@pdellaert) <[email protected]>
- Abhijeet Kasurde (@Akasurde) <[email protected]>
requirements:
- python >= 2.6
- PyVmomi
notes:
- Please make sure that the user used for vmware_guest has the correct level of privileges.
- For example, following is the list of minimum privileges required by users to create virtual machines.
- " DataStore > Allocate Space"
- " Virtual Machine > Configuration > Add New Disk"
- " Virtual Machine > Configuration > Add or Remove Device"
- " Virtual Machine > Inventory > Create New"
- " Network > Assign Network"
- " Resource > Assign Virtual Machine to Resource Pool"
- "Module may require additional privileges as well, which may be required for gathering facts - e.g. ESXi configurations."
- Tested on vSphere 5.5, 6.0, 6.5 and 6.7
- Use SCSI disks instead of IDE when you want to expand online disks by specifying a SCSI controller
- "For additional information please visit Ansible VMware community wiki - U(https://github.com/ansible/community/wiki/VMware)."
options:
state:
description:
- Specify the state the virtual machine should be in.
- 'If C(state) is set to C(present) and virtual machine exists, ensure the virtual machine
configurations conforms to task arguments.'
- 'If C(state) is set to C(absent) and virtual machine exists, then the specified virtual machine
is removed with its associated components.'
- 'If C(state) is set to one of the following C(poweredon), C(poweredoff), C(present), C(restarted), C(suspended)
and the virtual machine does not exist, then the virtual machine is deployed with the given parameters.'
- 'If C(state) is set to C(poweredon) and virtual machine exists with powerstate other than powered on,
then the specified virtual machine is powered on.'
- 'If C(state) is set to C(poweredoff) and virtual machine exists with powerstate other than powered off,
then the specified virtual machine is powered off.'
- 'If C(state) is set to C(restarted) and virtual machine exists, then the virtual machine is restarted.'
- 'If C(state) is set to C(suspended) and virtual machine exists, then the virtual machine is set to suspended mode.'
- 'If C(state) is set to C(shutdownguest) and virtual machine exists, then the virtual machine is shutdown.'
- 'If C(state) is set to C(rebootguest) and virtual machine exists, then the virtual machine is rebooted.'
default: present
choices: [ present, absent, poweredon, poweredoff, restarted, suspended, shutdownguest, rebootguest ]
name:
description:
- Name of the virtual machine to work with.
- Virtual machine names in vCenter are not necessarily unique, which may be problematic, see C(name_match).
- 'If multiple virtual machines with same name exists, then C(folder) is required parameter to
identify uniqueness of the virtual machine.'
- This parameter is required, if C(state) is set to C(poweredon), C(poweredoff), C(present), C(restarted), C(suspended)
and the virtual machine does not exist.
- This parameter is case sensitive.
required: yes
name_match:
description:
- If multiple virtual machines match the name, use the first or last one found.
default: 'first'
choices: [ first, last ]
uuid:
description:
- UUID of the virtual machine to manage if known, this is VMware's unique identifier.
- This is required if C(name) is not supplied.
- If the virtual machine does not exist, this parameter is ignored.
- Please note that a supplied UUID will be ignored on virtual machine creation, as VMware creates the UUID internally.
use_instance_uuid:
description:
- Whether to use the VMware instance UUID rather than the BIOS UUID.
default: no
type: bool
version_added: '2.8'
template:
description:
- Template or existing virtual machine used to create new virtual machine.
- If this value is not set, virtual machine is created without using a template.
- If the virtual machine already exists, this parameter will be ignored.
- This parameter is case sensitive.
- You can also specify template or VM UUID for identifying source. version_added 2.8. Use C(hw_product_uuid) from M(vmware_guest_facts) as UUID value.
- From version 2.8 onwards, absolute path to virtual machine or template can be used.
aliases: [ 'template_src' ]
is_template:
description:
- Flag the instance as a template.
- This will mark the given virtual machine as template.
default: 'no'
type: bool
version_added: '2.3'
folder:
description:
- Destination folder, absolute path to find an existing guest or create the new guest.
- The folder should include the datacenter. ESX's datacenter is ha-datacenter.
- This parameter is case sensitive.
- This parameter is required, while deploying new virtual machine. version_added 2.5.
- 'If multiple machines are found with same name, this parameter is used to identify
uniqueness of the virtual machine. version_added 2.5'
- 'Examples:'
- ' folder: /ha-datacenter/vm'
- ' folder: ha-datacenter/vm'
- ' folder: /datacenter1/vm'
- ' folder: datacenter1/vm'
- ' folder: /datacenter1/vm/folder1'
- ' folder: datacenter1/vm/folder1'
- ' folder: /folder1/datacenter1/vm'
- ' folder: folder1/datacenter1/vm'
- ' folder: /folder1/datacenter1/vm/folder2'
hardware:
description:
- Manage virtual machine's hardware attributes.
- All parameters are case sensitive.
- 'Valid attributes are:'
- ' - C(hotadd_cpu) (boolean): Allow virtual CPUs to be added while the virtual machine is running.'
- ' - C(hotremove_cpu) (boolean): Allow virtual CPUs to be removed while the virtual machine is running.
version_added: 2.5'
- ' - C(hotadd_memory) (boolean): Allow memory to be added while the virtual machine is running.'
- ' - C(memory_mb) (integer): Amount of memory in MB.'
- ' - C(nested_virt) (bool): Enable nested virtualization. version_added: 2.5'
- ' - C(num_cpus) (integer): Number of CPUs.'
- ' - C(num_cpu_cores_per_socket) (integer): Number of Cores Per Socket.'
- " C(num_cpus) must be a multiple of C(num_cpu_cores_per_socket).
For example to create a VM with 2 sockets of 4 cores, specify C(num_cpus): 8 and C(num_cpu_cores_per_socket): 4"
- ' - C(scsi) (string): Valid values are C(buslogic), C(lsilogic), C(lsilogicsas) and C(paravirtual) (default).'
- " - C(memory_reservation_lock) (boolean): If set true, memory resource reservation for the virtual machine
will always be equal to the virtual machine's memory size. version_added: 2.5"
- ' - C(max_connections) (integer): Maximum number of active remote display connections for the virtual machine.
version_added: 2.5.'
- ' - C(mem_limit) (integer): The memory utilization of a virtual machine will not exceed this limit. Unit is MB.
version_added: 2.5'
- ' - C(mem_reservation) (integer): The amount of memory resource that is guaranteed available to the virtual
machine. Unit is MB. C(memory_reservation) is alias to this. version_added: 2.5'
- ' - C(cpu_limit) (integer): The CPU utilization of a virtual machine will not exceed this limit. Unit is MHz.
version_added: 2.5'
- ' - C(cpu_reservation) (integer): The amount of CPU resource that is guaranteed available to the virtual machine.
Unit is MHz. version_added: 2.5'
- ' - C(version) (integer): The Virtual machine hardware versions. Default is 10 (ESXi 5.5 and onwards).
Please check VMware documentation for correct virtual machine hardware version.
Incorrect hardware version may lead to failure in deployment. If hardware version is already equal to the given
version then no action is taken. version_added: 2.6'
- ' - C(boot_firmware) (string): Choose which firmware should be used to boot the virtual machine.
Allowed values are "bios" and "efi". version_added: 2.7'
- ' - C(virt_based_security) (bool): Enable Virtualization Based Security feature for Windows 10.
(Support from Virtual machine hardware version 14, Guest OS Windows 10 64 bit, Windows Server 2016)'
guest_id:
description:
- Set the guest ID.
- This parameter is case sensitive.
- 'Examples:'
- " virtual machine with RHEL7 64 bit, will be 'rhel7_64Guest'"
- " virtual machine with CentOS 64 bit, will be 'centos64Guest'"
- " virtual machine with Ubuntu 64 bit, will be 'ubuntu64Guest'"
- This field is required when creating a virtual machine, and not required when cloning from a template.
- >
Valid values are referenced here:
U(https://code.vmware.com/apis/358/vsphere#/doc/vim.vm.GuestOsDescriptor.GuestOsIdentifier.html)
version_added: '2.3'
disk:
description:
- A list of disks to add.
- This parameter is case sensitive.
- Shrinking disks is not supported.
- Removing existing disks of the virtual machine is not supported.
- 'Valid attributes are:'
- ' - C(size_[tb,gb,mb,kb]) (integer): Disk storage size in specified unit.'
- ' - C(type) (string): Valid values are:'
- ' - C(thin) thin disk'
- ' - C(eagerzeroedthick) eagerzeroedthick disk, added in version 2.5'
- ' Default: C(None) thick disk, no eagerzero.'
- ' - C(datastore) (string): The name of datastore which will be used for the disk. If C(autoselect_datastore) is set to True,
then the least used datastore whose name contains this "disk.datastore" string will be selected.'
- ' - C(filename) (string): Existing disk image to be used. Filename must already exist on the datastore.'
- ' Specify filename string in C([datastore_name] path/to/file.vmdk) format. Added in version 2.8.'
- ' - C(autoselect_datastore) (bool): Select the least used datastore. "disk.datastore" and "disk.autoselect_datastore"
will not be used if C(datastore) is specified outside this C(disk) configuration.'
- ' - C(disk_mode) (string): Type of disk mode. Added in version 2.6'
- ' - Available options are:'
- ' - C(persistent): Changes are immediately and permanently written to the virtual disk. This is default.'
- ' - C(independent_persistent): Same as persistent, but not affected by snapshots.'
- ' - C(independent_nonpersistent): Changes to virtual disk are made to a redo log and discarded at power off, but not affected by snapshots.'
cdrom:
description:
- A CD-ROM configuration for the virtual machine.
- 'Valid attributes are:'
- ' - C(type) (string): The type of CD-ROM, valid options are C(none), C(client) or C(iso). With C(none) the CD-ROM will be disconnected but present.'
- ' - C(iso_path) (string): The datastore path to the ISO file to use, in the form of C([datastore1] path/to/file.iso). Required if type is set C(iso).'
version_added: '2.5'
resource_pool:
description:
- Use the given resource pool for virtual machine operation.
- This parameter is case sensitive.
- Resource pool should be child of the selected host parent.
version_added: '2.3'
wait_for_ip_address:
description:
- Wait until vCenter detects an IP address for the virtual machine.
- This requires vmware-tools (vmtoolsd) to properly work after creation.
- "vmware-tools needs to be installed on the given virtual machine in order to work with this parameter."
default: 'no'
type: bool
wait_for_customization:
description:
- Wait until vCenter detects all guest customizations as successfully completed.
- When enabled, the VM will automatically be powered on.
default: 'no'
type: bool
version_added: '2.8'
state_change_timeout:
description:
- If the C(state) is set to C(shutdownguest), by default the module will return immediately after sending the shutdown signal.
- If this argument is set to a positive integer, the module will instead wait for the virtual machine to reach the poweredoff state.
- The value sets a timeout in seconds for the module to wait for the state change.
default: 0
version_added: '2.6'
snapshot_src:
description:
- Name of the existing snapshot to use to create a clone of a virtual machine.
- This parameter is case sensitive.
- When creating a linked clone using the C(linked_clone) parameter, this parameter is required.
version_added: '2.4'
linked_clone:
description:
- Whether to create a linked clone from the snapshot specified.
- If specified, then C(snapshot_src) is a required parameter.
default: 'no'
type: bool
version_added: '2.4'
force:
description:
- Ignore warnings and complete the actions.
- This parameter is useful while removing a virtual machine which is in powered on state.
- 'This module reflects the VMware vCenter API and UI workflow, as such, in some cases the `force` flag will
be mandatory to perform the action to ensure you are certain the action has to be taken, no matter what the consequence.
This is specifically the case for removing a powered on virtual machine when C(state) is set to C(absent).'
default: 'no'
type: bool
datacenter:
description:
- Destination datacenter for the deploy operation.
- This parameter is case sensitive.
default: ha-datacenter
cluster:
description:
- The cluster name where the virtual machine will run.
- This is a required parameter, if C(esxi_hostname) is not set.
- C(esxi_hostname) and C(cluster) are mutually exclusive parameters.
- This parameter is case sensitive.
version_added: '2.3'
esxi_hostname:
description:
- The ESXi hostname where the virtual machine will run.
- This is a required parameter, if C(cluster) is not set.
- C(esxi_hostname) and C(cluster) are mutually exclusive parameters.
- This parameter is case sensitive.
annotation:
description:
- A note or annotation to include in the virtual machine.
version_added: '2.3'
customvalues:
description:
- Define a list of custom values to set on virtual machine.
- A custom value object takes two fields C(key) and C(value).
- Incorrect keys and values will be ignored.
version_added: '2.3'
networks:
description:
- A list of networks (in the order of the NICs).
- Removing NICs is not allowed while reconfiguring the virtual machine.
- All parameters and VMware object names are case sensitive.
- 'One of the below parameters is required per entry:'
- ' - C(name) (string): Name of the portgroup or distributed virtual portgroup for this interface.
When specifying distributed virtual portgroup make sure given C(esxi_hostname) or C(cluster) is associated with it.'
- ' - C(vlan) (integer): VLAN number for this interface.'
- 'Optional parameters per entry (used for virtual hardware):'
- ' - C(device_type) (string): Virtual network device (one of C(e1000), C(e1000e), C(pcnet32), C(vmxnet2), C(vmxnet3) (default), C(sriov)).'
- ' - C(mac) (string): Customize MAC address.'
- ' - C(dvswitch_name) (string): Name of the distributed vSwitch.
This value is required if multiple distributed portgroups exists with the same name. version_added 2.7'
- ' - C(start_connected) (bool): Specifies whether the virtual network adapter starts connected when the associated virtual machine powers on. version_added: 2.5'
- 'Optional parameters per entry (used for OS customization):'
- ' - C(type) (string): Type of IP assignment (either C(dhcp) or C(static)). C(dhcp) is default.'
- ' - C(ip) (string): Static IP address (implies C(type: static)).'
- ' - C(netmask) (string): Static netmask required for C(ip).'
- ' - C(gateway) (string): Static gateway.'
- ' - C(dns_servers) (string): DNS servers for this network interface (Windows).'
- ' - C(domain) (string): Domain name for this network interface (Windows).'
- ' - C(wake_on_lan) (bool): Indicates if wake-on-LAN is enabled on this virtual network adapter. version_added: 2.5'
- ' - C(allow_guest_control) (bool): Enables guest control over whether the connectable device is connected. version_added: 2.5'
version_added: '2.3'
customization:
description:
- Parameters for OS customization when cloning from the template or the virtual machine, or apply to the existing virtual machine directly.
- Not all operating systems are supported for customization with respective vCenter version,
please check VMware documentation for respective OS customization.
- For supported customization operating system matrix, (see U(http://partnerweb.vmware.com/programs/guestOS/guest-os-customization-matrix.pdf))
- All parameters and VMware object names are case sensitive.
- Linux based OSes require the Perl package to be installed for OS customizations.
- 'Common parameters (Linux/Windows):'
- ' - C(existing_vm) (bool): If set to C(True), do OS customization on the specified virtual machine directly.
If set to C(False) or not specified, do OS customization when cloning from the template or the virtual machine. version_added: 2.8'
- ' - C(dns_servers) (list): List of DNS servers to configure.'
- ' - C(dns_suffix) (list): List of domain suffixes, also known as DNS search path (default: C(domain) parameter).'
- ' - C(domain) (string): DNS domain name to use.'
- ' - C(hostname) (string): Computer hostname (default: shortened C(name) parameter). Allowed characters are alphanumeric (uppercase and lowercase)
and minus, rest of the characters are dropped as per RFC 952.'
- 'Parameters related to Linux customization:'
- ' - C(timezone) (string): Timezone (See List of supported time zones for different vSphere versions in Linux/Unix
systems (2145518) U(https://kb.vmware.com/s/article/2145518)). version_added: 2.9'
- ' - C(hwclockUTC) (bool): Specifies whether the hardware clock is in UTC or local time.
True when the hardware clock is in UTC, False when the hardware clock is in local time. version_added: 2.9'
- 'Parameters related to Windows customization:'
- ' - C(autologon) (bool): Auto logon after virtual machine customization (default: False).'
- ' - C(autologoncount) (int): Number of autologon after reboot (default: 1).'
- ' - C(domainadmin) (string): User used to join in AD domain (mandatory with C(joindomain)).'
- ' - C(domainadminpassword) (string): Password used to join in AD domain (mandatory with C(joindomain)).'
- ' - C(fullname) (string): Server owner name (default: Administrator).'
- ' - C(joindomain) (string): AD domain to join (Not compatible with C(joinworkgroup)).'
- ' - C(joinworkgroup) (string): Workgroup to join (Not compatible with C(joindomain), default: WORKGROUP).'
- ' - C(orgname) (string): Organisation name (default: ACME).'
- ' - C(password) (string): Local administrator password.'
- ' - C(productid) (string): Product ID.'
- ' - C(runonce) (list): List of commands to run at first user logon.'
- ' - C(timezone) (int): Timezone (See U(https://msdn.microsoft.com/en-us/library/ms912391.aspx)).'
version_added: '2.3'
vapp_properties:
description:
- A list of vApp properties.
- 'For full list of attributes and types refer to: U(https://github.com/vmware/pyvmomi/blob/master/docs/vim/vApp/PropertyInfo.rst)'
- 'Basic attributes are:'
- ' - C(id) (string): Property id - required.'
- ' - C(value) (string): Property value.'
- ' - C(type) (string): Value type, string type by default.'
- ' - C(operation): C(remove): This attribute is required only when removing properties.'
version_added: '2.6'
customization_spec:
description:
- Unique name identifying the requested customization specification.
- This parameter is case sensitive.
- If set, then overrides C(customization) parameter values.
version_added: '2.6'
datastore:
description:
- Specify datastore or datastore cluster to provision virtual machine.
- 'This parameter takes precedence over "disk.datastore" parameter.'
- 'This parameter can be used to override datastore or datastore cluster setting of the virtual machine when deployed
from the template.'
- Please see the examples for more usage.
version_added: '2.7'
convert:
description:
- Specify the disk type to convert to while cloning a template or virtual machine.
choices: [ thin, thick, eagerzeroedthick ]
version_added: '2.8'
extends_documentation_fragment: vmware.documentation
'''
EXAMPLES = r'''
- name: Create a virtual machine on given ESXi hostname
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
folder: /DC1/vm/
name: test_vm_0001
state: poweredon
guest_id: centos64Guest
# This is hostname of particular ESXi server on which user wants VM to be deployed
esxi_hostname: "{{ esxi_hostname }}"
disk:
- size_gb: 10
type: thin
datastore: datastore1
hardware:
memory_mb: 512
num_cpus: 4
scsi: paravirtual
networks:
- name: VM Network
mac: aa:bb:dd:aa:00:14
ip: 10.10.10.100
netmask: 255.255.255.0
device_type: vmxnet3
wait_for_ip_address: yes
delegate_to: localhost
register: deploy_vm
- name: Create a virtual machine from a template
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
folder: /testvms
name: testvm_2
state: poweredon
template: template_el7
disk:
- size_gb: 10
type: thin
datastore: g73_datastore
hardware:
memory_mb: 512
num_cpus: 6
num_cpu_cores_per_socket: 3
scsi: paravirtual
memory_reservation_lock: True
mem_limit: 8096
mem_reservation: 4096
cpu_limit: 8096
cpu_reservation: 4096
max_connections: 5
hotadd_cpu: True
hotremove_cpu: True
hotadd_memory: False
version: 12 # Hardware version of virtual machine
boot_firmware: "efi"
cdrom:
type: iso
iso_path: "[datastore1] livecd.iso"
networks:
- name: VM Network
mac: aa:bb:dd:aa:00:14
wait_for_ip_address: yes
delegate_to: localhost
register: deploy
- name: Clone a virtual machine from Windows template and customize
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
datacenter: datacenter1
cluster: cluster
name: testvm-2
template: template_windows
networks:
- name: VM Network
ip: 192.168.1.100
netmask: 255.255.255.0
gateway: 192.168.1.1
mac: aa:bb:dd:aa:00:14
domain: my_domain
dns_servers:
- 192.168.1.1
- 192.168.1.2
- vlan: 1234
type: dhcp
customization:
autologon: yes
dns_servers:
- 192.168.1.1
- 192.168.1.2
domain: my_domain
password: new_vm_password
runonce:
- powershell.exe -ExecutionPolicy Unrestricted -File C:\Windows\Temp\ConfigureRemotingForAnsible.ps1 -ForceNewSSLCert -EnableCredSSP
delegate_to: localhost
- name: Clone a virtual machine from Linux template and customize
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
datacenter: "{{ datacenter }}"
state: present
folder: /DC1/vm
template: "{{ template }}"
name: "{{ vm_name }}"
cluster: DC1_C1
networks:
- name: VM Network
ip: 192.168.10.11
netmask: 255.255.255.0
wait_for_ip_address: True
customization:
domain: "{{ guest_domain }}"
dns_servers:
- 8.9.9.9
- 7.8.8.9
dns_suffix:
- example.com
- example2.com
delegate_to: localhost
- name: Rename a virtual machine (requires the virtual machine's uuid)
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
uuid: "{{ vm_uuid }}"
name: new_name
state: present
delegate_to: localhost
- name: Remove a virtual machine by uuid
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
uuid: "{{ vm_uuid }}"
state: absent
delegate_to: localhost
- name: Manipulate vApp properties
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
name: vm_name
state: present
vapp_properties:
- id: remoteIP
category: Backup
label: Backup server IP
type: str
value: 10.10.10.1
- id: old_property
operation: remove
delegate_to: localhost
- name: Set powerstate of a virtual machine to poweroff by using UUID
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
uuid: "{{ vm_uuid }}"
state: poweredoff
delegate_to: localhost
- name: Deploy a virtual machine in a datastore different from the datastore of the template
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: "{{ vm_name }}"
state: present
template: "{{ template_name }}"
# Here datastore can be different which holds template
datastore: "{{ virtual_machine_datastore }}"
hardware:
memory_mb: 512
num_cpus: 2
scsi: paravirtual
delegate_to: localhost
'''
RETURN = r'''
instance:
description: metadata about the new virtual machine
returned: always
type: dict
sample: None
'''
import re
import time
import string
HAS_PYVMOMI = False
try:
from pyVmomi import vim, vmodl, VmomiSupport
HAS_PYVMOMI = True
except ImportError:
pass
from random import randint
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.network import is_mac
from ansible.module_utils._text import to_text, to_native
from ansible.module_utils.vmware import (find_obj, gather_vm_facts, get_all_objs,
compile_folder_path_for_object, serialize_spec,
vmware_argument_spec, set_vm_power_state, PyVmomi,
find_dvs_by_name, find_dvspg_by_name, wait_for_vm_ip,
wait_for_task, TaskError)
class PyVmomiDeviceHelper(object):
""" This class is a helper to easily create VMware objects for PyVmomiHelper """
def __init__(self, module):
self.module = module
self.next_disk_unit_number = 0
self.scsi_device_type = {
'lsilogic': vim.vm.device.VirtualLsiLogicController,
'paravirtual': vim.vm.device.ParaVirtualSCSIController,
'buslogic': vim.vm.device.VirtualBusLogicController,
'lsilogicsas': vim.vm.device.VirtualLsiLogicSASController,
}
def create_scsi_controller(self, scsi_type):
scsi_ctl = vim.vm.device.VirtualDeviceSpec()
scsi_ctl.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
scsi_device = self.scsi_device_type.get(scsi_type, vim.vm.device.ParaVirtualSCSIController)
scsi_ctl.device = scsi_device()
scsi_ctl.device.busNumber = 0
# While creating a new SCSI controller, temporary key value
# should be unique negative integers
scsi_ctl.device.key = -randint(1000, 9999)
scsi_ctl.device.hotAddRemove = True
scsi_ctl.device.sharedBus = 'noSharing'
scsi_ctl.device.scsiCtlrUnitNumber = 7
return scsi_ctl
def is_scsi_controller(self, device):
return isinstance(device, tuple(self.scsi_device_type.values()))
@staticmethod
def create_ide_controller():
ide_ctl = vim.vm.device.VirtualDeviceSpec()
ide_ctl.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
ide_ctl.device = vim.vm.device.VirtualIDEController()
ide_ctl.device.deviceInfo = vim.Description()
# While creating a new IDE controller, temporary key value
# should be unique negative integers
ide_ctl.device.key = -randint(200, 299)
ide_ctl.device.busNumber = 0
return ide_ctl
@staticmethod
def create_cdrom(ide_ctl, cdrom_type, iso_path=None):
cdrom_spec = vim.vm.device.VirtualDeviceSpec()
cdrom_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
cdrom_spec.device = vim.vm.device.VirtualCdrom()
cdrom_spec.device.controllerKey = ide_ctl.device.key
cdrom_spec.device.key = -1
cdrom_spec.device.connectable = vim.vm.device.VirtualDevice.ConnectInfo()
cdrom_spec.device.connectable.allowGuestControl = True
cdrom_spec.device.connectable.startConnected = (cdrom_type != "none")
if cdrom_type in ["none", "client"]:
cdrom_spec.device.backing = vim.vm.device.VirtualCdrom.RemotePassthroughBackingInfo()
elif cdrom_type == "iso":
cdrom_spec.device.backing = vim.vm.device.VirtualCdrom.IsoBackingInfo(fileName=iso_path)
return cdrom_spec
@staticmethod
def is_equal_cdrom(vm_obj, cdrom_device, cdrom_type, iso_path):
if cdrom_type == "none":
return (isinstance(cdrom_device.backing, vim.vm.device.VirtualCdrom.RemotePassthroughBackingInfo) and
cdrom_device.connectable.allowGuestControl and
not cdrom_device.connectable.startConnected and
(vm_obj.runtime.powerState != vim.VirtualMachinePowerState.poweredOn or not cdrom_device.connectable.connected))
elif cdrom_type == "client":
return (isinstance(cdrom_device.backing, vim.vm.device.VirtualCdrom.RemotePassthroughBackingInfo) and
cdrom_device.connectable.allowGuestControl and
cdrom_device.connectable.startConnected and
(vm_obj.runtime.powerState != vim.VirtualMachinePowerState.poweredOn or cdrom_device.connectable.connected))
elif cdrom_type == "iso":
return (isinstance(cdrom_device.backing, vim.vm.device.VirtualCdrom.IsoBackingInfo) and
cdrom_device.backing.fileName == iso_path and
cdrom_device.connectable.allowGuestControl and
cdrom_device.connectable.startConnected and
(vm_obj.runtime.powerState != vim.VirtualMachinePowerState.poweredOn or cdrom_device.connectable.connected))
def create_scsi_disk(self, scsi_ctl, disk_index=None):
diskspec = vim.vm.device.VirtualDeviceSpec()
diskspec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
diskspec.device = vim.vm.device.VirtualDisk()
diskspec.device.backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
diskspec.device.controllerKey = scsi_ctl.device.key
if self.next_disk_unit_number == 7:
raise AssertionError()
if disk_index == 7:
raise AssertionError()
# Configure the disk unit number.
if disk_index is not None:
diskspec.device.unitNumber = disk_index
self.next_disk_unit_number = disk_index + 1
else:
diskspec.device.unitNumber = self.next_disk_unit_number
self.next_disk_unit_number += 1
# unit number 7 is reserved for the SCSI controller, increase next index
if self.next_disk_unit_number == 7:
self.next_disk_unit_number += 1
return diskspec
def get_device(self, device_type, name):
nic_dict = dict(pcnet32=vim.vm.device.VirtualPCNet32(),
vmxnet2=vim.vm.device.VirtualVmxnet2(),
vmxnet3=vim.vm.device.VirtualVmxnet3(),
e1000=vim.vm.device.VirtualE1000(),
e1000e=vim.vm.device.VirtualE1000e(),
sriov=vim.vm.device.VirtualSriovEthernetCard(),
)
if device_type in nic_dict:
return nic_dict[device_type]
else:
self.module.fail_json(msg='Invalid device_type "%s"'
' for network "%s"' % (device_type, name))
def create_nic(self, device_type, device_label, device_infos):
nic = vim.vm.device.VirtualDeviceSpec()
nic.device = self.get_device(device_type, device_infos['name'])
nic.device.wakeOnLanEnabled = bool(device_infos.get('wake_on_lan', True))
nic.device.deviceInfo = vim.Description()
nic.device.deviceInfo.label = device_label
nic.device.deviceInfo.summary = device_infos['name']
nic.device.connectable = vim.vm.device.VirtualDevice.ConnectInfo()
nic.device.connectable.startConnected = bool(device_infos.get('start_connected', True))
nic.device.connectable.allowGuestControl = bool(device_infos.get('allow_guest_control', True))
nic.device.connectable.connected = True
if 'mac' in device_infos and is_mac(device_infos['mac']):
nic.device.addressType = 'manual'
nic.device.macAddress = device_infos['mac']
else:
nic.device.addressType = 'generated'
return nic
def integer_value(self, input_value, name):
"""
Function to return the int value for the given input, else fail with an error
Args:
input_value: Input value to retrieve the int value from
name: Name of the input value (used to build the error message)
Returns: (int) if an integer value can be obtained, otherwise the module fails with an error message.
"""
if isinstance(input_value, int):
return input_value
elif isinstance(input_value, str) and input_value.isdigit():
return int(input_value)
else:
self.module.fail_json(msg='"%s" attribute should be an'
' integer value.' % name)
class PyVmomiCache(object):
""" This class caches references to objects which are requested multiple times but not modified """
def __init__(self, content, dc_name=None):
self.content = content
self.dc_name = dc_name
self.networks = {}
self.clusters = {}
self.esx_hosts = {}
self.parent_datacenters = {}
def find_obj(self, content, types, name, confine_to_datacenter=True):
""" Wrapper around find_obj to set datacenter context """
result = find_obj(content, types, name)
if result and confine_to_datacenter:
if to_text(self.get_parent_datacenter(result).name) != to_text(self.dc_name):
result = None
objects = self.get_all_objs(content, types, confine_to_datacenter=True)
for obj in objects:
if name is None or to_text(obj.name) == to_text(name):
return obj
return result
def get_all_objs(self, content, types, confine_to_datacenter=True):
""" Wrapper around get_all_objs to set datacenter context """
objects = get_all_objs(content, types)
if confine_to_datacenter:
if hasattr(objects, 'items'):
# resource pools come back as a dictionary
# make a copy
tmpobjs = objects.copy()
for k, v in objects.items():
parent_dc = self.get_parent_datacenter(k)
if parent_dc.name != self.dc_name:
tmpobjs.pop(k, None)
objects = tmpobjs
else:
# everything else should be a list
objects = [x for x in objects if self.get_parent_datacenter(x).name == self.dc_name]
return objects
def get_network(self, network):
if network not in self.networks:
self.networks[network] = self.find_obj(self.content, [vim.Network], network)
return self.networks[network]
def get_cluster(self, cluster):
if cluster not in self.clusters:
self.clusters[cluster] = self.find_obj(self.content, [vim.ClusterComputeResource], cluster)
return self.clusters[cluster]
def get_esx_host(self, host):
if host not in self.esx_hosts:
self.esx_hosts[host] = self.find_obj(self.content, [vim.HostSystem], host)
return self.esx_hosts[host]
def get_parent_datacenter(self, obj):
""" Walk the parent tree to find the object's datacenter """
if isinstance(obj, vim.Datacenter):
return obj
if obj in self.parent_datacenters:
return self.parent_datacenters[obj]
datacenter = None
while True:
if not hasattr(obj, 'parent'):
break
obj = obj.parent
if isinstance(obj, vim.Datacenter):
datacenter = obj
break
self.parent_datacenters[obj] = datacenter
return datacenter
class PyVmomiHelper(PyVmomi):
def __init__(self, module):
super(PyVmomiHelper, self).__init__(module)
self.device_helper = PyVmomiDeviceHelper(self.module)
self.configspec = None
self.relospec = None
self.change_detected = False # a change was detected and needs to be applied through reconfiguration
self.change_applied = False # a change was applied meaning at least one task succeeded
self.customspec = None
self.cache = PyVmomiCache(self.content, dc_name=self.params['datacenter'])
def gather_facts(self, vm):
return gather_vm_facts(self.content, vm)
def remove_vm(self, vm):
# https://www.vmware.com/support/developer/converter-sdk/conv60_apireference/vim.ManagedEntity.html#destroy
if vm.summary.runtime.powerState.lower() == 'poweredon':
self.module.fail_json(msg="Virtual machine %s found in 'powered on' state, "
"please use 'force' parameter to remove or poweroff VM "
"and try removing VM again." % vm.name)
task = vm.Destroy()
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'destroy'}
else:
return {'changed': self.change_applied, 'failed': False}
def configure_guestid(self, vm_obj, vm_creation=False):
# guest_id is not required when using templates
if self.params['template']:
return
# guest_id is only mandatory on VM creation
if vm_creation and self.params['guest_id'] is None:
self.module.fail_json(msg="guest_id attribute is mandatory for VM creation")
if self.params['guest_id'] and \
(vm_obj is None or self.params['guest_id'].lower() != vm_obj.summary.config.guestId.lower()):
self.change_detected = True
self.configspec.guestId = self.params['guest_id']
def configure_resource_alloc_info(self, vm_obj):
"""
Function to configure resource allocation information about the virtual machine
:param vm_obj: VM object in case of reconfigure, None in case of deploy
:return: None
"""
rai_change_detected = False
memory_allocation = vim.ResourceAllocationInfo()
cpu_allocation = vim.ResourceAllocationInfo()
if 'hardware' in self.params:
if 'mem_limit' in self.params['hardware']:
mem_limit = None
try:
mem_limit = int(self.params['hardware'].get('mem_limit'))
except ValueError:
self.module.fail_json(msg="hardware.mem_limit attribute should be an integer value.")
memory_allocation.limit = mem_limit
if vm_obj is None or memory_allocation.limit != vm_obj.config.memoryAllocation.limit:
rai_change_detected = True
if 'mem_reservation' in self.params['hardware'] or 'memory_reservation' in self.params['hardware']:
mem_reservation = self.params['hardware'].get('mem_reservation')
if mem_reservation is None:
mem_reservation = self.params['hardware'].get('memory_reservation')
try:
mem_reservation = int(mem_reservation)
except ValueError:
self.module.fail_json(msg="hardware.mem_reservation or hardware.memory_reservation should be an integer value.")
memory_allocation.reservation = mem_reservation
if vm_obj is None or \
memory_allocation.reservation != vm_obj.config.memoryAllocation.reservation:
rai_change_detected = True
if 'cpu_limit' in self.params['hardware']:
cpu_limit = None
try:
cpu_limit = int(self.params['hardware'].get('cpu_limit'))
except ValueError:
self.module.fail_json(msg="hardware.cpu_limit attribute should be an integer value.")
cpu_allocation.limit = cpu_limit
if vm_obj is None or cpu_allocation.limit != vm_obj.config.cpuAllocation.limit:
rai_change_detected = True
if 'cpu_reservation' in self.params['hardware']:
cpu_reservation = None
try:
cpu_reservation = int(self.params['hardware'].get('cpu_reservation'))
except ValueError:
self.module.fail_json(msg="hardware.cpu_reservation should be an integer value.")
cpu_allocation.reservation = cpu_reservation
if vm_obj is None or \
cpu_allocation.reservation != vm_obj.config.cpuAllocation.reservation:
rai_change_detected = True
if rai_change_detected:
self.configspec.memoryAllocation = memory_allocation
self.configspec.cpuAllocation = cpu_allocation
self.change_detected = True
def configure_cpu_and_memory(self, vm_obj, vm_creation=False):
# set cpu/memory/etc
if 'hardware' in self.params:
if 'num_cpus' in self.params['hardware']:
try:
num_cpus = int(self.params['hardware']['num_cpus'])
except ValueError:
self.module.fail_json(msg="hardware.num_cpus attribute should be an integer value.")
# check VM power state and cpu hot-add/hot-remove state before re-config VM
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
if not vm_obj.config.cpuHotRemoveEnabled and num_cpus < vm_obj.config.hardware.numCPU:
self.module.fail_json(msg="Configured cpu number is less than the cpu number of the VM, "
"cpuHotRemove is not enabled")
if not vm_obj.config.cpuHotAddEnabled and num_cpus > vm_obj.config.hardware.numCPU:
self.module.fail_json(msg="Configured cpu number is more than the cpu number of the VM, "
"cpuHotAdd is not enabled")
if 'num_cpu_cores_per_socket' in self.params['hardware']:
try:
num_cpu_cores_per_socket = int(self.params['hardware']['num_cpu_cores_per_socket'])
except ValueError:
self.module.fail_json(msg="hardware.num_cpu_cores_per_socket attribute "
"should be an integer value.")
if num_cpus % num_cpu_cores_per_socket != 0:
self.module.fail_json(msg="hardware.num_cpus attribute should be a multiple "
"of hardware.num_cpu_cores_per_socket")
self.configspec.numCoresPerSocket = num_cpu_cores_per_socket
if vm_obj is None or self.configspec.numCoresPerSocket != vm_obj.config.hardware.numCoresPerSocket:
self.change_detected = True
self.configspec.numCPUs = num_cpus
if vm_obj is None or self.configspec.numCPUs != vm_obj.config.hardware.numCPU:
self.change_detected = True
# num_cpu is mandatory for VM creation
elif vm_creation and not self.params['template']:
self.module.fail_json(msg="hardware.num_cpus attribute is mandatory for VM creation")
if 'memory_mb' in self.params['hardware']:
try:
memory_mb = int(self.params['hardware']['memory_mb'])
except ValueError:
self.module.fail_json(msg="Failed to parse hardware.memory_mb value."
" Please refer to the documentation and provide"
" a correct value.")
# check VM power state and memory hotadd state before re-config VM
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
if vm_obj.config.memoryHotAddEnabled and memory_mb < vm_obj.config.hardware.memoryMB:
self.module.fail_json(msg="Configured memory is less than memory size of the VM, "
"operation is not supported")
elif not vm_obj.config.memoryHotAddEnabled and memory_mb != vm_obj.config.hardware.memoryMB:
self.module.fail_json(msg="memoryHotAdd is not enabled")
self.configspec.memoryMB = memory_mb
if vm_obj is None or self.configspec.memoryMB != vm_obj.config.hardware.memoryMB:
self.change_detected = True
# memory_mb is mandatory for VM creation
elif vm_creation and not self.params['template']:
self.module.fail_json(msg="hardware.memory_mb attribute is mandatory for VM creation")
if 'hotadd_memory' in self.params['hardware']:
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn and \
vm_obj.config.memoryHotAddEnabled != bool(self.params['hardware']['hotadd_memory']):
self.module.fail_json(msg="Configuring hot-add memory is not supported while the VM is powered on")
self.configspec.memoryHotAddEnabled = bool(self.params['hardware']['hotadd_memory'])
if vm_obj is None or self.configspec.memoryHotAddEnabled != vm_obj.config.memoryHotAddEnabled:
self.change_detected = True
if 'hotadd_cpu' in self.params['hardware']:
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn and \
vm_obj.config.cpuHotAddEnabled != bool(self.params['hardware']['hotadd_cpu']):
self.module.fail_json(msg="Configuring hot-add CPU is not supported while the VM is powered on")
self.configspec.cpuHotAddEnabled = bool(self.params['hardware']['hotadd_cpu'])
if vm_obj is None or self.configspec.cpuHotAddEnabled != vm_obj.config.cpuHotAddEnabled:
self.change_detected = True
if 'hotremove_cpu' in self.params['hardware']:
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn and \
vm_obj.config.cpuHotRemoveEnabled != bool(self.params['hardware']['hotremove_cpu']):
self.module.fail_json(msg="Configuring hot-remove CPU is not supported while the VM is powered on")
self.configspec.cpuHotRemoveEnabled = bool(self.params['hardware']['hotremove_cpu'])
if vm_obj is None or self.configspec.cpuHotRemoveEnabled != vm_obj.config.cpuHotRemoveEnabled:
self.change_detected = True
if 'memory_reservation_lock' in self.params['hardware']:
self.configspec.memoryReservationLockedToMax = bool(self.params['hardware']['memory_reservation_lock'])
if vm_obj is None or self.configspec.memoryReservationLockedToMax != vm_obj.config.memoryReservationLockedToMax:
self.change_detected = True
if 'boot_firmware' in self.params['hardware']:
# boot firmware re-config can cause boot issue
if vm_obj is not None:
return
boot_firmware = self.params['hardware']['boot_firmware'].lower()
if boot_firmware not in ('bios', 'efi'):
self.module.fail_json(msg="hardware.boot_firmware value is invalid [%s]."
" Valid values are ['bios', 'efi']." % boot_firmware)
self.configspec.firmware = boot_firmware
self.change_detected = True
def configure_cdrom(self, vm_obj):
# Configure the VM CD-ROM
if "cdrom" in self.params and self.params["cdrom"]:
if "type" not in self.params["cdrom"] or self.params["cdrom"]["type"] not in ["none", "client", "iso"]:
self.module.fail_json(msg="cdrom.type is mandatory")
if self.params["cdrom"]["type"] == "iso" and ("iso_path" not in self.params["cdrom"] or not self.params["cdrom"]["iso_path"]):
self.module.fail_json(msg="cdrom.iso_path is mandatory in case cdrom.type is iso")
if vm_obj and vm_obj.config.template:
# Changing CD-ROM settings on a template is not supported
return
cdrom_spec = None
cdrom_device = self.get_vm_cdrom_device(vm=vm_obj)
iso_path = self.params["cdrom"]["iso_path"] if "iso_path" in self.params["cdrom"] else None
if cdrom_device is None:
# Creating new CD-ROM
ide_device = self.get_vm_ide_device(vm=vm_obj)
if ide_device is None:
# Creating new IDE device
ide_device = self.device_helper.create_ide_controller()
self.change_detected = True
self.configspec.deviceChange.append(ide_device)
elif len(ide_device.device) > 3:
self.module.fail_json(msg="hardware.cdrom specified for a VM or template which already has 4 IDE devices, none of which is a CD-ROM")
cdrom_spec = self.device_helper.create_cdrom(ide_ctl=ide_device, cdrom_type=self.params["cdrom"]["type"], iso_path=iso_path)
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
cdrom_spec.device.connectable.connected = (self.params["cdrom"]["type"] != "none")
elif not self.device_helper.is_equal_cdrom(vm_obj=vm_obj, cdrom_device=cdrom_device, cdrom_type=self.params["cdrom"]["type"], iso_path=iso_path):
# Updating an existing CD-ROM
if self.params["cdrom"]["type"] in ["client", "none"]:
cdrom_device.backing = vim.vm.device.VirtualCdrom.RemotePassthroughBackingInfo()
elif self.params["cdrom"]["type"] == "iso":
cdrom_device.backing = vim.vm.device.VirtualCdrom.IsoBackingInfo(fileName=iso_path)
cdrom_device.connectable = vim.vm.device.VirtualDevice.ConnectInfo()
cdrom_device.connectable.allowGuestControl = True
cdrom_device.connectable.startConnected = (self.params["cdrom"]["type"] != "none")
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
cdrom_device.connectable.connected = (self.params["cdrom"]["type"] != "none")
cdrom_spec = vim.vm.device.VirtualDeviceSpec()
cdrom_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
cdrom_spec.device = cdrom_device
if cdrom_spec:
self.change_detected = True
self.configspec.deviceChange.append(cdrom_spec)
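The CD-ROM validation rules above reduce to two checks plus a connection flag. A hedged stdlib-only sketch (`validate_cdrom` is an illustrative helper; the real code reports errors through `self.module.fail_json` and sets the flag on `connectable.connected`/`startConnected`):

```python
def validate_cdrom(cdrom):
    """Validate a cdrom parameter dict and return whether the device
    should be connected (everything except type 'none')."""
    if cdrom.get("type") not in ("none", "client", "iso"):
        raise ValueError("cdrom.type is mandatory")
    if cdrom["type"] == "iso" and not cdrom.get("iso_path"):
        raise ValueError("cdrom.iso_path is mandatory in case cdrom.type is iso")
    return cdrom["type"] != "none"
```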
def configure_hardware_params(self, vm_obj):
"""
Function to configure hardware related configuration of virtual machine
Args:
vm_obj: virtual machine object
"""
if 'hardware' in self.params:
if 'max_connections' in self.params['hardware']:
# maxMksConnections == max_connections
self.configspec.maxMksConnections = int(self.params['hardware']['max_connections'])
if vm_obj is None or self.configspec.maxMksConnections != vm_obj.config.maxMksConnections:
self.change_detected = True
if 'nested_virt' in self.params['hardware']:
self.configspec.nestedHVEnabled = bool(self.params['hardware']['nested_virt'])
if vm_obj is None or self.configspec.nestedHVEnabled != bool(vm_obj.config.nestedHVEnabled):
self.change_detected = True
if 'version' in self.params['hardware']:
hw_version_check_failed = False
temp_version = self.params['hardware'].get('version', 10)
try:
temp_version = int(temp_version)
except ValueError:
hw_version_check_failed = True
if temp_version not in range(3, 15):
hw_version_check_failed = True
if hw_version_check_failed:
self.module.fail_json(msg="Invalid hardware.version value '%s'; valid"
" values range from 3 (ESX 2.x) to 14 (ESXi 6.5 and greater)." % temp_version)
# Hardware version is denoted as "vmx-10"
version = "vmx-%02d" % temp_version
self.configspec.version = version
if vm_obj is None or self.configspec.version != vm_obj.config.version:
self.change_detected = True
if vm_obj is not None:
# VM exists and we need to update the hardware version
current_version = vm_obj.config.version
# current_version = "vmx-10"
version_digit = int(current_version.split("-", 1)[-1])
if temp_version < version_digit:
self.module.fail_json(msg="Current hardware version '%d' is greater than the specified"
" version '%d'. Downgrading the hardware version is"
" not supported. Please specify a version equal to or"
" greater than the current version." % (version_digit,
temp_version))
new_version = "vmx-%02d" % temp_version
try:
task = vm_obj.UpgradeVM_Task(new_version)
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'upgrade'}
except vim.fault.AlreadyUpgraded:
# Don't fail if VM is already upgraded.
pass
if 'virt_based_security' in self.params['hardware']:
host_version = self.select_host().summary.config.product.version
if int(host_version.split('.')[0]) < 6 or (int(host_version.split('.')[0]) == 6 and int(host_version.split('.')[1]) < 7):
self.module.fail_json(msg="ESXi version %s does not support VBS." % host_version)
guest_ids = ['windows9_64Guest', 'windows9Server64Guest']
if vm_obj is None:
guestid = self.configspec.guestId
else:
guestid = vm_obj.summary.config.guestId
if guestid not in guest_ids:
self.module.fail_json(msg="Guest OS '%s' does not support VBS." % guestid)
if (vm_obj is None and int(self.configspec.version.split('-')[1]) >= 14) or \
(vm_obj and int(vm_obj.config.version.split('-')[1]) >= 14 and (vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOff)):
self.configspec.flags = vim.vm.FlagInfo()
self.configspec.flags.vbsEnabled = bool(self.params['hardware']['virt_based_security'])
if bool(self.params['hardware']['virt_based_security']):
self.configspec.flags.vvtdEnabled = True
self.configspec.nestedHVEnabled = True
if (vm_obj is None and self.configspec.firmware == 'efi') or \
(vm_obj and vm_obj.config.firmware == 'efi'):
self.configspec.bootOptions = vim.vm.BootOptions()
self.configspec.bootOptions.efiSecureBootEnabled = True
else:
self.module.fail_json(msg="VBS is not supported when firmware is BIOS.")
if vm_obj is None or self.configspec.flags.vbsEnabled != vm_obj.config.flags.vbsEnabled:
self.change_detected = True
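The hardware-version handling above renders an integer as a `"vmx-NN"` string and rejects downgrades. A stand-alone sketch of both rules (`format_hw_version` and `is_downgrade` are hypothetical helpers, assuming version strings always look like `"vmx-NN"` as in `vm_obj.config.version`):

```python
def format_hw_version(version):
    """Render an integer hardware version the way vSphere expects ("vmx-09")."""
    if not 3 <= version <= 14:
        raise ValueError("valid values range from 3 (ESX 2.x) to 14 (ESXi 6.5 and greater)")
    return "vmx-%02d" % version


def is_downgrade(current, requested):
    """True when the requested version is lower than the VM's current one."""
    current_digit = int(current.split("-", 1)[-1])
    return requested < current_digit
```

Note that an equal version is not a downgrade, which matches the `temp_version < version_digit` check above.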
def get_device_by_type(self, vm=None, type=None):
if vm is None or type is None:
return None
for device in vm.config.hardware.device:
if isinstance(device, type):
return device
return None
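The device scan above is a plain "first instance of a class" search over `vm.config.hardware.device`. A stdlib-only equivalent, using illustrative built-in types instead of pyVmomi device classes:

```python
def first_of_type(devices, device_type):
    """Return the first device that is an instance of device_type, else None."""
    for device in devices:
        if isinstance(device, device_type):
            return device
    return None
```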
def get_vm_cdrom_device(self, vm=None):
return self.get_device_by_type(vm=vm, type=vim.vm.device.VirtualCdrom)
def get_vm_ide_device(self, vm=None):
return self.get_device_by_type(vm=vm, type=vim.vm.device.VirtualIDEController)
def get_vm_network_interfaces(self, vm=None):
device_list = []
if vm is None:
return device_list
nw_device_types = (vim.vm.device.VirtualPCNet32, vim.vm.device.VirtualVmxnet2,
vim.vm.device.VirtualVmxnet3, vim.vm.device.VirtualE1000,
vim.vm.device.VirtualE1000e, vim.vm.device.VirtualSriovEthernetCard)
for device in vm.config.hardware.device:
if isinstance(device, nw_device_types):
device_list.append(device)
return device_list
def sanitize_network_params(self):
"""
Sanitize user provided network provided params
Returns: A sanitized list of network params, else fails
"""
network_devices = list()
# Clean up user data here
for network in self.params['networks']:
if 'name' not in network and 'vlan' not in network:
self.module.fail_json(msg="Please specify at least a network name or"
" a VLAN name under VM network list.")
if 'name' in network and self.cache.get_network(network['name']) is None:
self.module.fail_json(msg="Network '%(name)s' does not exist." % network)
elif 'vlan' in network:
dvps = self.cache.get_all_objs(self.content, [vim.dvs.DistributedVirtualPortgroup])
for dvp in dvps:
if hasattr(dvp.config.defaultPortConfig, 'vlan') and \
isinstance(dvp.config.defaultPortConfig.vlan.vlanId, int) and \
str(dvp.config.defaultPortConfig.vlan.vlanId) == str(network['vlan']):
network['name'] = dvp.config.name
break
if 'dvswitch_name' in network and \
dvp.config.distributedVirtualSwitch.name == network['dvswitch_name'] and \
dvp.config.name == network['vlan']:
network['name'] = dvp.config.name
break
if dvp.config.name == network['vlan']:
network['name'] = dvp.config.name
break
else:
self.module.fail_json(msg="VLAN '%(vlan)s' does not exist." % network)
if 'type' in network:
if network['type'] not in ['dhcp', 'static']:
self.module.fail_json(msg="Network type '%(type)s' is not valid."
" Valid types are ['dhcp', 'static']." % network)
if network['type'] != 'static' and ('ip' in network or 'netmask' in network):
self.module.fail_json(msg='Static IP information provided for network "%(name)s",'
' but "type" is set to "%(type)s".' % network)
else:
# Type is optional parameter, if user provided IP or Subnet assume
# network type as 'static'
if 'ip' in network or 'netmask' in network:
network['type'] = 'static'
else:
# User wants network type as 'dhcp'
network['type'] = 'dhcp'
if network.get('type') == 'static':
if 'ip' in network and 'netmask' not in network:
self.module.fail_json(msg="'netmask' is required if 'ip' is"
" specified under VM network list.")
if 'ip' not in network and 'netmask' in network:
self.module.fail_json(msg="'ip' is required if 'netmask' is"
" specified under VM network list.")
validate_device_types = ['pcnet32', 'vmxnet2', 'vmxnet3', 'e1000', 'e1000e', 'sriov']
if 'device_type' in network and network['device_type'] not in validate_device_types:
self.module.fail_json(msg="Device type specified '%s' is not valid."
" Please specify correct device"
" type from ['%s']." % (network['device_type'],
"', '".join(validate_device_types)))
if 'mac' in network and not is_mac(network['mac']):
self.module.fail_json(msg="Device MAC address '%s' is invalid."
" Please provide correct MAC address." % network['mac'])
network_devices.append(network)
return network_devices
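The type-inference and static-address rules above can be sketched on their own: a missing `type` is inferred from the presence of `ip`/`netmask`, and a static entry must carry both. These are hypothetical stdlib helpers; the real method additionally validates network/VLAN existence, device types, and MAC format, and fails through `self.module.fail_json`:

```python
def infer_network_type(network):
    """Return the network type, inferring 'static' when ip/netmask is given."""
    if 'type' in network:
        if network['type'] not in ('dhcp', 'static'):
            raise ValueError("Network type '%s' is not valid." % network['type'])
        return network['type']
    return 'static' if ('ip' in network or 'netmask' in network) else 'dhcp'


def validate_static(network):
    """A static entry must carry both 'ip' and 'netmask', never just one."""
    if ('ip' in network) != ('netmask' in network):
        raise ValueError("'ip' and 'netmask' must be specified together.")
```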
def configure_network(self, vm_obj):
# Ignore an empty network list; this keeps the existing networks when deploying from a template or cloning a VM
if len(self.params['networks']) == 0:
return
network_devices = self.sanitize_network_params()
# List current device for Clone or Idempotency
current_net_devices = self.get_vm_network_interfaces(vm=vm_obj)
if len(network_devices) < len(current_net_devices):
self.module.fail_json(msg="Given network device list is smaller than the current VM device list (%d < %d). "
"Removing interfaces is not allowed"
% (len(network_devices), len(current_net_devices)))
for key in range(0, len(network_devices)):
nic_change_detected = False
network_name = network_devices[key]['name']
if key < len(current_net_devices) and (vm_obj or self.params['template']):
# We are editing existing network devices; this happens when
# cloning from a VM or template
nic = vim.vm.device.VirtualDeviceSpec()
nic.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
nic.device = current_net_devices[key]
if ('wake_on_lan' in network_devices[key] and
nic.device.wakeOnLanEnabled != network_devices[key].get('wake_on_lan')):
nic.device.wakeOnLanEnabled = network_devices[key].get('wake_on_lan')
nic_change_detected = True
if ('start_connected' in network_devices[key] and
nic.device.connectable.startConnected != network_devices[key].get('start_connected')):
nic.device.connectable.startConnected = network_devices[key].get('start_connected')
nic_change_detected = True
if ('allow_guest_control' in network_devices[key] and
nic.device.connectable.allowGuestControl != network_devices[key].get('allow_guest_control')):
nic.device.connectable.allowGuestControl = network_devices[key].get('allow_guest_control')
nic_change_detected = True
if nic.device.deviceInfo.summary != network_name:
nic.device.deviceInfo.summary = network_name
nic_change_detected = True
if 'device_type' in network_devices[key]:
device = self.device_helper.get_device(network_devices[key]['device_type'], network_name)
device_class = type(device)
if not isinstance(nic.device, device_class):
self.module.fail_json(msg="Changing the device type is not possible when interface is already present. "
"The failing device type is %s" % network_devices[key]['device_type'])
# Changing mac address has no effect when editing interface
if 'mac' in network_devices[key] and network_devices[key]['mac'] != nic.device.macAddress:
self.module.fail_json(msg="Changing the MAC address has no effect when the interface is already present. "
"The failing new MAC address is %s" % network_devices[key]['mac'])
else:
# Default device type is vmxnet3, VMware best practice
device_type = network_devices[key].get('device_type', 'vmxnet3')
nic = self.device_helper.create_nic(device_type,
'Network Adapter %s' % (key + 1),
network_devices[key])
nic.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
nic_change_detected = True
if hasattr(self.cache.get_network(network_name), 'portKeys'):
# VDS switch
pg_obj = None
if 'dvswitch_name' in network_devices[key]:
dvs_name = network_devices[key]['dvswitch_name']
dvs_obj = find_dvs_by_name(self.content, dvs_name)
if dvs_obj is None:
self.module.fail_json(msg="Unable to find distributed virtual switch %s" % dvs_name)
pg_obj = find_dvspg_by_name(dvs_obj, network_name)
if pg_obj is None:
self.module.fail_json(msg="Unable to find distributed port group %s" % network_name)
else:
pg_obj = self.cache.find_obj(self.content, [vim.dvs.DistributedVirtualPortgroup], network_name)
# TODO: (akasurde) There is no way to find association between resource pool and distributed virtual portgroup
# For now, check if we are able to find distributed virtual switch
if not pg_obj.config.distributedVirtualSwitch:
self.module.fail_json(msg="Failed to find distributed virtual switch which is associated with"
" distributed virtual portgroup '%s'. Make sure hostsystem is associated with"
" the given distributed virtual portgroup. Also, check if user has correct"
" permission to access distributed virtual switch in the given portgroup." % pg_obj.name)
if (nic.device.backing and
(not hasattr(nic.device.backing, 'port') or
(nic.device.backing.port.portgroupKey != pg_obj.key or
nic.device.backing.port.switchUuid != pg_obj.config.distributedVirtualSwitch.uuid))):
nic_change_detected = True
dvs_port_connection = vim.dvs.PortConnection()
dvs_port_connection.portgroupKey = pg_obj.key
# If the user specifies a distributed port group without associating it with the host system on
# which the virtual machine is going to be deployed, we get an error. We can infer that there is
# no association between the given distributed port group and the host system.
host_system = self.params.get('esxi_hostname')
if host_system and host_system not in [host.config.host.name for host in pg_obj.config.distributedVirtualSwitch.config.host]:
self.module.fail_json(msg="It seems that host system '%s' is not associated with distributed"
" virtual portgroup '%s'. Please make sure the host system is associated"
" with the given distributed virtual portgroup." % (host_system, pg_obj.name))
dvs_port_connection.switchUuid = pg_obj.config.distributedVirtualSwitch.uuid
nic.device.backing = vim.vm.device.VirtualEthernetCard.DistributedVirtualPortBackingInfo()
nic.device.backing.port = dvs_port_connection
elif isinstance(self.cache.get_network(network_name), vim.OpaqueNetwork):
# NSX-T Logical Switch
nic.device.backing = vim.vm.device.VirtualEthernetCard.OpaqueNetworkBackingInfo()
network_id = self.cache.get_network(network_name).summary.opaqueNetworkId
nic.device.backing.opaqueNetworkType = 'nsx.LogicalSwitch'
nic.device.backing.opaqueNetworkId = network_id
nic.device.deviceInfo.summary = 'nsx.LogicalSwitch: %s' % network_id
nic_change_detected = True
else:
# vSwitch
if not isinstance(nic.device.backing, vim.vm.device.VirtualEthernetCard.NetworkBackingInfo):
nic.device.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo()
nic_change_detected = True
net_obj = self.cache.get_network(network_name)
if nic.device.backing.network != net_obj:
nic.device.backing.network = net_obj
nic_change_detected = True
if nic.device.backing.deviceName != network_name:
nic.device.backing.deviceName = network_name
nic_change_detected = True
if nic_change_detected:
# Change to fix the issue found while configuring opaque network
# VMs cloned from a template with opaque network will get disconnected
# Replacing deprecated config parameter with relocation Spec
if isinstance(self.cache.get_network(network_name), vim.OpaqueNetwork):
self.relospec.deviceChange.append(nic)
else:
self.configspec.deviceChange.append(nic)
self.change_detected = True
def configure_vapp_properties(self, vm_obj):
if len(self.params['vapp_properties']) == 0:
return
for x in self.params['vapp_properties']:
if not x.get('id'):
self.module.fail_json(msg="id is required to set vApp property")
new_vmconfig_spec = vim.vApp.VmConfigSpec()
if vm_obj:
# VM exists
# This is primarily for vcsim/integration tests; an unset vAppConfig was not seen on real deployments
orig_spec = vm_obj.config.vAppConfig if vm_obj.config.vAppConfig else new_vmconfig_spec
vapp_properties_current = dict((x.id, x) for x in orig_spec.property)
vapp_properties_to_change = dict((x['id'], x) for x in self.params['vapp_properties'])
# each property must have a unique key
# init key counter with max value + 1
all_keys = [x.key for x in orig_spec.property]
new_property_index = max(all_keys) + 1 if all_keys else 0
for property_id, property_spec in vapp_properties_to_change.items():
is_property_changed = False
new_vapp_property_spec = vim.vApp.PropertySpec()
if property_id in vapp_properties_current:
if property_spec.get('operation') == 'remove':
new_vapp_property_spec.operation = 'remove'
new_vapp_property_spec.removeKey = vapp_properties_current[property_id].key
is_property_changed = True
else:
# this is 'edit' branch
new_vapp_property_spec.operation = 'edit'
new_vapp_property_spec.info = vapp_properties_current[property_id]
try:
for property_name, property_value in property_spec.items():
if property_name == 'operation':
# operation is not an info object property
# if set to anything other than 'remove' we don't fail
continue
# Updating attributes only if needed
if getattr(new_vapp_property_spec.info, property_name) != property_value:
setattr(new_vapp_property_spec.info, property_name, property_value)
is_property_changed = True
except Exception as e:
msg = "Failed to set vApp property field='%s' and value='%s'. Error: %s" % (property_name, property_value, to_text(e))
self.module.fail_json(msg=msg)
else:
if property_spec.get('operation') == 'remove':
# attempt to delete non-existent property
continue
# this is add new property branch
new_vapp_property_spec.operation = 'add'
property_info = vim.vApp.PropertyInfo()
property_info.classId = property_spec.get('classId')
property_info.instanceId = property_spec.get('instanceId')
property_info.id = property_spec.get('id')
property_info.category = property_spec.get('category')
property_info.label = property_spec.get('label')
property_info.type = property_spec.get('type', 'string')
property_info.userConfigurable = property_spec.get('userConfigurable', True)
property_info.defaultValue = property_spec.get('defaultValue')
property_info.value = property_spec.get('value', '')
property_info.description = property_spec.get('description')
new_vapp_property_spec.info = property_info
new_vapp_property_spec.info.key = new_property_index
new_property_index += 1
is_property_changed = True
if is_property_changed:
new_vmconfig_spec.property.append(new_vapp_property_spec)
else:
# New VM
all_keys = [x.key for x in new_vmconfig_spec.property]
new_property_index = max(all_keys) + 1 if all_keys else 0
vapp_properties_to_change = dict((x['id'], x) for x in self.params['vapp_properties'])
is_property_changed = False
for property_id, property_spec in vapp_properties_to_change.items():
new_vapp_property_spec = vim.vApp.PropertySpec()
# this is add new property branch
new_vapp_property_spec.operation = 'add'
property_info = vim.vApp.PropertyInfo()
property_info.classId = property_spec.get('classId')
property_info.instanceId = property_spec.get('instanceId')
property_info.id = property_spec.get('id')
property_info.category = property_spec.get('category')
property_info.label = property_spec.get('label')
property_info.type = property_spec.get('type', 'string')
property_info.userConfigurable = property_spec.get('userConfigurable', True)
property_info.defaultValue = property_spec.get('defaultValue')
property_info.value = property_spec.get('value', '')
property_info.description = property_spec.get('description')
new_vapp_property_spec.info = property_info
new_vapp_property_spec.info.key = new_property_index
new_property_index += 1
is_property_changed = True
if is_property_changed:
new_vmconfig_spec.property.append(new_vapp_property_spec)
if new_vmconfig_spec.property:
self.configspec.vAppConfig = new_vmconfig_spec
self.change_detected = True
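Both branches above allocate vApp property keys the same way: every property needs a unique integer key, and new properties start at one past the highest existing key (or 0 when there are none). A tiny stand-alone sketch of that rule (`next_property_key` is a hypothetical helper; the real code reads the keys from the existing `vAppConfig` spec):

```python
def next_property_key(existing_keys):
    """Return the next free vApp property key: max(existing) + 1, or 0."""
    return max(existing_keys) + 1 if existing_keys else 0
```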
def customize_customvalues(self, vm_obj):
if len(self.params['customvalues']) == 0:
return
facts = self.gather_facts(vm_obj)
for kv in self.params['customvalues']:
if 'key' not in kv or 'value' not in kv:
self.module.fail_json(msg="customvalues items require both 'key' and 'value' fields.")
key_id = None
for field in self.content.customFieldsManager.field:
if field.name == kv['key']:
key_id = field.key
break
if not key_id:
self.module.fail_json(msg="Unable to find custom value key %s" % kv['key'])
# If the given key/value differs from the value fetched from facts, update it
if kv['key'] not in facts['customvalues'] or facts['customvalues'][kv['key']] != kv['value']:
self.content.customFieldsManager.SetField(entity=vm_obj, key=key_id, value=kv['value'])
self.change_detected = True
def customize_vm(self, vm_obj):
# User specified customization specification
custom_spec_name = self.params.get('customization_spec')
if custom_spec_name:
cc_mgr = self.content.customizationSpecManager
if cc_mgr.DoesCustomizationSpecExist(name=custom_spec_name):
temp_spec = cc_mgr.GetCustomizationSpec(name=custom_spec_name)
self.customspec = temp_spec.spec
return
else:
self.module.fail_json(msg="Unable to find customization specification"
" '%s' in given configuration." % custom_spec_name)
# Network settings
adaptermaps = []
for network in self.params['networks']:
guest_map = vim.vm.customization.AdapterMapping()
guest_map.adapter = vim.vm.customization.IPSettings()
if 'ip' in network and 'netmask' in network:
guest_map.adapter.ip = vim.vm.customization.FixedIp()
guest_map.adapter.ip.ipAddress = str(network['ip'])
guest_map.adapter.subnetMask = str(network['netmask'])
elif 'type' in network and network['type'] == 'dhcp':
guest_map.adapter.ip = vim.vm.customization.DhcpIpGenerator()
if 'gateway' in network:
guest_map.adapter.gateway = network['gateway']
# On Windows, the DNS domain and DNS servers can be set per network interface
# https://pubs.vmware.com/vi3/sdk/ReferenceGuide/vim.vm.customization.IPSettings.html
if 'domain' in network:
guest_map.adapter.dnsDomain = network['domain']
elif 'domain' in self.params['customization']:
guest_map.adapter.dnsDomain = self.params['customization']['domain']
if 'dns_servers' in network:
guest_map.adapter.dnsServerList = network['dns_servers']
elif 'dns_servers' in self.params['customization']:
guest_map.adapter.dnsServerList = self.params['customization']['dns_servers']
adaptermaps.append(guest_map)
# Global DNS settings
globalip = vim.vm.customization.GlobalIPSettings()
if 'dns_servers' in self.params['customization']:
globalip.dnsServerList = self.params['customization']['dns_servers']
# TODO: Maybe list the different domains from the interfaces here by default ?
if 'dns_suffix' in self.params['customization']:
dns_suffix = self.params['customization']['dns_suffix']
if isinstance(dns_suffix, list):
globalip.dnsSuffixList = " ".join(dns_suffix)
else:
globalip.dnsSuffixList = dns_suffix
elif 'domain' in self.params['customization']:
globalip.dnsSuffixList = self.params['customization']['domain']
if self.params['guest_id']:
guest_id = self.params['guest_id']
else:
guest_id = vm_obj.summary.config.guestId
# For windows guest OS, use SysPrep
# https://pubs.vmware.com/vi3/sdk/ReferenceGuide/vim.vm.customization.Sysprep.html#field_detail
if 'win' in guest_id:
ident = vim.vm.customization.Sysprep()
ident.userData = vim.vm.customization.UserData()
# Setting hostName, orgName and fullName is mandatory, so we set defaults when missing
ident.userData.computerName = vim.vm.customization.FixedName()
# computer name will be truncated to 15 characters if using VM name
default_name = self.params['name'].replace(' ', '')
default_name = ''.join([c for c in default_name if c not in string.punctuation])
ident.userData.computerName.name = str(self.params['customization'].get('hostname', default_name[0:15]))
ident.userData.fullName = str(self.params['customization'].get('fullname', 'Administrator'))
ident.userData.orgName = str(self.params['customization'].get('orgname', 'ACME'))
if 'productid' in self.params['customization']:
ident.userData.productId = str(self.params['customization']['productid'])
ident.guiUnattended = vim.vm.customization.GuiUnattended()
if 'autologon' in self.params['customization']:
ident.guiUnattended.autoLogon = self.params['customization']['autologon']
ident.guiUnattended.autoLogonCount = self.params['customization'].get('autologoncount', 1)
if 'timezone' in self.params['customization']:
# Check if the timezone value is an int before proceeding.
ident.guiUnattended.timeZone = self.device_helper.integer_value(
self.params['customization']['timezone'],
'customization.timezone')
ident.identification = vim.vm.customization.Identification()
if self.params['customization'].get('password', '') != '':
ident.guiUnattended.password = vim.vm.customization.Password()
ident.guiUnattended.password.value = str(self.params['customization']['password'])
ident.guiUnattended.password.plainText = True
if 'joindomain' in self.params['customization']:
if 'domainadmin' not in self.params['customization'] or 'domainadminpassword' not in self.params['customization']:
self.module.fail_json(msg="'domainadmin' and 'domainadminpassword' entries are mandatory in 'customization' section to use "
"joindomain feature")
ident.identification.domainAdmin = str(self.params['customization']['domainadmin'])
ident.identification.joinDomain = str(self.params['customization']['joindomain'])
ident.identification.domainAdminPassword = vim.vm.customization.Password()
ident.identification.domainAdminPassword.value = str(self.params['customization']['domainadminpassword'])
ident.identification.domainAdminPassword.plainText = True
elif 'joinworkgroup' in self.params['customization']:
ident.identification.joinWorkgroup = str(self.params['customization']['joinworkgroup'])
if 'runonce' in self.params['customization']:
ident.guiRunOnce = vim.vm.customization.GuiRunOnce()
ident.guiRunOnce.commandList = self.params['customization']['runonce']
else:
# FIXME: We have no clue whether this non-Windows OS is actually Linux, hence it might fail!
# For Linux guest OS, use LinuxPrep
# https://pubs.vmware.com/vi3/sdk/ReferenceGuide/vim.vm.customization.LinuxPrep.html
ident = vim.vm.customization.LinuxPrep()
# TODO: Maybe add domain from interface if missing ?
if 'domain' in self.params['customization']:
ident.domain = str(self.params['customization']['domain'])
ident.hostName = vim.vm.customization.FixedName()
hostname = str(self.params['customization'].get('hostname', self.params['name'].split('.')[0]))
# Remove all characters except alphanumeric and minus which is allowed by RFC 952
valid_hostname = re.sub(r"[^a-zA-Z0-9\-]", "", hostname)
ident.hostName.name = valid_hostname
# List of supported time zones for different vSphere versions in Linux/Unix systems
# https://kb.vmware.com/s/article/2145518
if 'timezone' in self.params['customization']:
ident.timeZone = str(self.params['customization']['timezone'])
if 'hwclockUTC' in self.params['customization']:
ident.hwClockUTC = self.params['customization']['hwclockUTC']
self.customspec = vim.vm.customization.Specification()
self.customspec.nicSettingMap = adaptermaps
self.customspec.globalIPSettings = globalip
self.customspec.identity = ident
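As a standalone illustration of the RFC 952 hostname sanitization performed above (a sketch; `sanitize_hostname` is a hypothetical helper name, not part of this module):

```python
import re

def sanitize_hostname(hostname):
    # Keep only alphanumerics and the hyphen, as allowed by RFC 952.
    return re.sub(r"[^a-zA-Z0-9\-]", "", hostname)

print(sanitize_hostname("db_server.prod"))  # -> dbserverprod
```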
def get_vm_scsi_controller(self, vm_obj):
# If vm_obj doesn't exist there is no SCSI controller to find
if vm_obj is None:
return None
for device in vm_obj.config.hardware.device:
if self.device_helper.is_scsi_controller(device):
scsi_ctl = vim.vm.device.VirtualDeviceSpec()
scsi_ctl.device = device
return scsi_ctl
return None
def get_configured_disk_size(self, expected_disk_spec):
# what size is it?
if [x for x in expected_disk_spec.keys() if x.startswith('size_') or x == 'size']:
# size, size_tb, size_gb, size_mb, size_kb
if 'size' in expected_disk_spec:
size_regex = re.compile(r'(\d+(?:\.\d+)?)([tgmkTGMK][bB])')
disk_size_m = size_regex.match(expected_disk_spec['size'])
try:
if disk_size_m:
expected = disk_size_m.group(1)
unit = disk_size_m.group(2)
else:
raise ValueError
if re.match(r'\d+\.\d+', expected):
# We found float value in string, let's typecast it
expected = float(expected)
else:
# We found int value in string, let's typecast it
expected = int(expected)
if not expected or not unit:
raise ValueError
except (TypeError, ValueError, NameError):
# Common failure
self.module.fail_json(msg="Failed to parse disk size; please review the value"
" provided using the documentation.")
else:
param = [x for x in expected_disk_spec.keys() if x.startswith('size_')][0]
unit = param.split('_')[-1].lower()
expected = [x[1] for x in expected_disk_spec.items() if x[0].startswith('size_')][0]
expected = int(expected)
disk_units = dict(tb=3, gb=2, mb=1, kb=0)
unit = unit.lower()
if unit in disk_units:
return expected * (1024 ** disk_units[unit])
else:
self.module.fail_json(msg="%s is not a supported unit for disk size."
" Supported units are ['%s']." % (unit,
"', '".join(disk_units.keys())))
# Disk given but no size attribute found, fail
self.module.fail_json(
msg="No size, size_kb, size_mb, size_gb or size_tb attribute found into disk configuration")
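The size parsing above can be sketched in isolation; this is a simplified, standalone version of the same regex and unit table (result in kilobytes, matching the module's 1024-based exponents):

```python
import re

DISK_UNITS = {'tb': 3, 'gb': 2, 'mb': 1, 'kb': 0}

def disk_size_to_kb(size):
    # Accepts strings such as '10gb' or '1.5TB'; returns the size in kilobytes.
    m = re.match(r'(\d+(?:\.\d+)?)([tgmkTGMK][bB])', size)
    if not m:
        raise ValueError("unparseable disk size: %r" % size)
    value = float(m.group(1)) if '.' in m.group(1) else int(m.group(1))
    return value * (1024 ** DISK_UNITS[m.group(2).lower()])

print(disk_size_to_kb('10gb'))  # 10485760
```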
def find_vmdk(self, vmdk_path):
"""
Takes a vsphere datastore path in the format
[datastore_name] path/to/file.vmdk
Returns vsphere file object or raises RuntimeError
"""
datastore_name, vmdk_fullpath, vmdk_filename, vmdk_folder = self.vmdk_disk_path_split(vmdk_path)
datastore = self.cache.find_obj(self.content, [vim.Datastore], datastore_name)
if datastore is None:
self.module.fail_json(msg="Failed to find the datastore %s" % datastore_name)
return self.find_vmdk_file(datastore, vmdk_fullpath, vmdk_filename, vmdk_folder)
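For illustration, a datastore path of the form shown in the docstring can be split like this (a sketch; the module's actual `vmdk_disk_path_split` helper is defined elsewhere and may differ):

```python
import re

def split_vmdk_path(vmdk_path):
    # '[ds1] vms/web/web.vmdk' -> ('ds1', 'vms/web/web.vmdk', 'web.vmdk', 'vms/web')
    m = re.match(r'^\[([^\]]+)\] (.+)$', vmdk_path)
    if m is None:
        raise ValueError("not a datastore path: %r" % vmdk_path)
    datastore, fullpath = m.group(1), m.group(2)
    folder, _, filename = fullpath.rpartition('/')
    return datastore, fullpath, filename, folder

print(split_vmdk_path('[ds1] vms/web/web.vmdk'))
```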
def add_existing_vmdk(self, vm_obj, expected_disk_spec, diskspec, scsi_ctl):
"""
Adds vmdk file described by expected_disk_spec['filename'], retrieves the file
information and adds the correct spec to self.configspec.deviceChange.
"""
filename = expected_disk_spec['filename']
# if this is a new disk, or the disk file names are different
if (vm_obj and diskspec.device.backing.fileName != filename) or vm_obj is None:
vmdk_file = self.find_vmdk(expected_disk_spec['filename'])
diskspec.device.backing.fileName = expected_disk_spec['filename']
diskspec.device.capacityInKB = VmomiSupport.vmodlTypes['long'](vmdk_file.fileSize / 1024)
diskspec.device.key = -1
self.change_detected = True
self.configspec.deviceChange.append(diskspec)
def configure_disks(self, vm_obj):
# Ignore empty disk list; this permits keeping existing disks when deploying a template/cloning a VM
if len(self.params['disk']) == 0:
return
scsi_ctl = self.get_vm_scsi_controller(vm_obj)
# Create scsi controller only if we are deploying a new VM, not a template or reconfiguring
if vm_obj is None or scsi_ctl is None:
scsi_ctl = self.device_helper.create_scsi_controller(self.get_scsi_type())
self.change_detected = True
self.configspec.deviceChange.append(scsi_ctl)
disks = [x for x in vm_obj.config.hardware.device if isinstance(x, vim.vm.device.VirtualDisk)] \
if vm_obj is not None else None
if disks is not None and self.params.get('disk') and len(self.params.get('disk')) < len(disks):
self.module.fail_json(msg="Provided disks configuration has less disks than "
"the target object (%d vs %d)" % (len(self.params.get('disk')), len(disks)))
disk_index = 0
for expected_disk_spec in self.params.get('disk'):
disk_modified = False
# If we are manipulating an existing object which has disks and disk_index is in disks
if vm_obj is not None and disks is not None and disk_index < len(disks):
diskspec = vim.vm.device.VirtualDeviceSpec()
# set the operation to edit so that it knows to keep other settings
diskspec.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
diskspec.device = disks[disk_index]
else:
diskspec = self.device_helper.create_scsi_disk(scsi_ctl, disk_index)
disk_modified = True
# increment index for next disk search
disk_index += 1
# index 7 is reserved for the SCSI controller
if disk_index == 7:
disk_index += 1
if 'disk_mode' in expected_disk_spec:
disk_mode = expected_disk_spec.get('disk_mode', 'persistent').lower()
valid_disk_mode = ['persistent', 'independent_persistent', 'independent_nonpersistent']
if disk_mode not in valid_disk_mode:
self.module.fail_json(msg="disk_mode specified is not valid."
" Should be one of ['%s']" % "', '".join(valid_disk_mode))
if (vm_obj and diskspec.device.backing.diskMode != disk_mode) or (vm_obj is None):
diskspec.device.backing.diskMode = disk_mode
disk_modified = True
else:
diskspec.device.backing.diskMode = "persistent"
# is it thin?
if 'type' in expected_disk_spec:
disk_type = expected_disk_spec.get('type', '').lower()
if disk_type == 'thin':
diskspec.device.backing.thinProvisioned = True
elif disk_type == 'eagerzeroedthick':
diskspec.device.backing.eagerlyScrub = True
if 'filename' in expected_disk_spec and expected_disk_spec['filename'] is not None:
self.add_existing_vmdk(vm_obj, expected_disk_spec, diskspec, scsi_ctl)
continue
elif vm_obj is None or self.params['template']:
# We are creating new VM or from Template
# Only create virtual device if not backed by vmdk in original template
if diskspec.device.backing.fileName == '':
diskspec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
# which datastore?
if expected_disk_spec.get('datastore'):
# TODO: This is already handled by the relocation spec,
# but it needs to eventually be handled for all the
# other disks defined
pass
kb = self.get_configured_disk_size(expected_disk_spec)
# VMware doesn't allow reducing disk sizes
if kb < diskspec.device.capacityInKB:
self.module.fail_json(
msg="Given disk size is smaller than found (%d < %d). Reducing disks is not allowed." %
(kb, diskspec.device.capacityInKB))
if kb != diskspec.device.capacityInKB or disk_modified:
diskspec.device.capacityInKB = kb
self.configspec.deviceChange.append(diskspec)
self.change_detected = True
def select_host(self):
hostsystem = self.cache.get_esx_host(self.params['esxi_hostname'])
if not hostsystem:
self.module.fail_json(msg='Failed to find ESX host "%(esxi_hostname)s"' % self.params)
if hostsystem.runtime.connectionState != 'connected' or hostsystem.runtime.inMaintenanceMode:
self.module.fail_json(msg='ESXi "%(esxi_hostname)s" is in invalid state or in maintenance mode.' % self.params)
return hostsystem
def autoselect_datastore(self):
datastore = None
datastores = self.cache.get_all_objs(self.content, [vim.Datastore])
if datastores is None or len(datastores) == 0:
self.module.fail_json(msg="Unable to find a datastore list when autoselecting")
datastore_freespace = 0
for ds in datastores:
if not self.is_datastore_valid(datastore_obj=ds):
continue
if ds.summary.freeSpace > datastore_freespace:
datastore = ds
datastore_freespace = ds.summary.freeSpace
return datastore
def get_recommended_datastore(self, datastore_cluster_obj=None):
"""
Function to return Storage DRS recommended datastore from datastore cluster
Args:
datastore_cluster_obj: datastore cluster managed object
Returns: Name of recommended datastore from the given datastore cluster
"""
if datastore_cluster_obj is None:
return None
# Check if Datastore Cluster provided by user is SDRS ready
sdrs_status = datastore_cluster_obj.podStorageDrsEntry.storageDrsConfig.podConfig.enabled
if sdrs_status:
# We can get a storage recommendation only if SDRS is enabled on the given datastore cluster
pod_sel_spec = vim.storageDrs.PodSelectionSpec()
pod_sel_spec.storagePod = datastore_cluster_obj
storage_spec = vim.storageDrs.StoragePlacementSpec()
storage_spec.podSelectionSpec = pod_sel_spec
storage_spec.type = 'create'
try:
rec = self.content.storageResourceManager.RecommendDatastores(storageSpec=storage_spec)
rec_action = rec.recommendations[0].action[0]
return rec_action.destination.name
except Exception:
# There is some error so we fall back to general workflow
pass
datastore = None
datastore_freespace = 0
for ds in datastore_cluster_obj.childEntity:
if isinstance(ds, vim.Datastore) and ds.summary.freeSpace > datastore_freespace:
# If datastore field is provided, filter destination datastores
if not self.is_datastore_valid(datastore_obj=ds):
continue
datastore = ds
datastore_freespace = ds.summary.freeSpace
if datastore:
return datastore.name
return None
def select_datastore(self, vm_obj=None):
datastore = None
datastore_name = None
if len(self.params['disk']) != 0:
# TODO: really use the datastore for newly created disks
if 'autoselect_datastore' in self.params['disk'][0] and self.params['disk'][0]['autoselect_datastore']:
datastores = self.cache.get_all_objs(self.content, [vim.Datastore])
datastores = [x for x in datastores if self.cache.get_parent_datacenter(x).name == self.params['datacenter']]
if datastores is None or len(datastores) == 0:
self.module.fail_json(msg="Unable to find a datastore list when autoselecting")
datastore_freespace = 0
for ds in datastores:
if not self.is_datastore_valid(datastore_obj=ds):
continue
if (ds.summary.freeSpace > datastore_freespace) or (ds.summary.freeSpace == datastore_freespace and not datastore):
# If datastore field is provided, filter destination datastores
if 'datastore' in self.params['disk'][0] and \
isinstance(self.params['disk'][0]['datastore'], str) and \
ds.name.find(self.params['disk'][0]['datastore']) < 0:
continue
datastore = ds
datastore_name = datastore.name
datastore_freespace = ds.summary.freeSpace
elif 'datastore' in self.params['disk'][0]:
datastore_name = self.params['disk'][0]['datastore']
# Check if user has provided datastore cluster first
datastore_cluster = self.cache.find_obj(self.content, [vim.StoragePod], datastore_name)
if datastore_cluster:
# If user specified a datastore cluster, get the recommended datastore
datastore_name = self.get_recommended_datastore(datastore_cluster_obj=datastore_cluster)
# Check if get_recommended_datastore or user specified datastore exists or not
datastore = self.cache.find_obj(self.content, [vim.Datastore], datastore_name)
else:
self.module.fail_json(msg="Either datastore or autoselect_datastore should be provided to select datastore")
if not datastore and self.params['template']:
# use the template's existing DS
disks = [x for x in vm_obj.config.hardware.device if isinstance(x, vim.vm.device.VirtualDisk)]
if disks:
datastore = disks[0].backing.datastore
datastore_name = datastore.name
# validation
if datastore:
dc = self.cache.get_parent_datacenter(datastore)
if dc.name != self.params['datacenter']:
datastore = self.autoselect_datastore()
datastore_name = datastore.name
if not datastore:
if len(self.params['disk']) != 0 or self.params['template'] is None:
self.module.fail_json(msg="Unable to find the datastore with given parameters."
" This could mean that %s is a non-existent virtual machine and the module tried to"
" deploy it as a new virtual machine with no disk. Please specify the disks parameter"
" or specify a template to clone from." % self.params['name'])
self.module.fail_json(msg="Failed to find a matching datastore")
return datastore, datastore_name
def obj_has_parent(self, obj, parent):
if obj is None and parent is None:
raise AssertionError()
current_parent = obj
while True:
if current_parent.name == parent.name:
return True
# Check if we have reached till root folder
moid = current_parent._moId
if moid in ['group-d1', 'ha-folder-root']:
return False
current_parent = current_parent.parent
if current_parent is None:
return False
def get_scsi_type(self):
disk_controller_type = "paravirtual"
# set cpu/memory/etc
if 'hardware' in self.params:
if 'scsi' in self.params['hardware']:
if self.params['hardware']['scsi'] in ['buslogic', 'paravirtual', 'lsilogic', 'lsilogicsas']:
disk_controller_type = self.params['hardware']['scsi']
else:
self.module.fail_json(msg="hardware.scsi attribute should be one of 'buslogic', 'paravirtual', 'lsilogic' or 'lsilogicsas'")
return disk_controller_type
def find_folder(self, searchpath):
""" Walk inventory objects one position of the searchpath at a time """
# split the searchpath so we can iterate through it
paths = [x.replace('/', '') for x in searchpath.split('/')]
paths_total = len(paths) - 1
position = 0
# recursive walk while looking for next element in searchpath
root = self.content.rootFolder
while root and position <= paths_total:
change = False
if hasattr(root, 'childEntity'):
for child in root.childEntity:
if child.name == paths[position]:
root = child
position += 1
change = True
break
elif isinstance(root, vim.Datacenter):
if hasattr(root, 'vmFolder'):
if root.vmFolder.name == paths[position]:
root = root.vmFolder
position += 1
change = True
else:
root = None
if not change:
root = None
return root
def get_resource_pool(self, cluster=None, host=None, resource_pool=None):
""" Get a resource pool, filter on cluster, esxi_hostname or resource_pool if given """
cluster_name = cluster or self.params.get('cluster', None)
host_name = host or self.params.get('esxi_hostname', None)
resource_pool_name = resource_pool or self.params.get('resource_pool', None)
# get the datacenter object
datacenter = find_obj(self.content, [vim.Datacenter], self.params['datacenter'])
if not datacenter:
self.module.fail_json(msg='Unable to find datacenter "%s"' % self.params['datacenter'])
# if cluster is given, get the cluster object
if cluster_name:
cluster = find_obj(self.content, [vim.ComputeResource], cluster_name, folder=datacenter)
if not cluster:
self.module.fail_json(msg='Unable to find cluster "%s"' % cluster_name)
# if host is given, get the cluster object using the host
elif host_name:
host = find_obj(self.content, [vim.HostSystem], host_name, folder=datacenter)
if not host:
self.module.fail_json(msg='Unable to find host "%s"' % host_name)
cluster = host.parent
else:
cluster = None
# get resource pools limiting search to cluster or datacenter
resource_pool = find_obj(self.content, [vim.ResourcePool], resource_pool_name, folder=cluster or datacenter)
if not resource_pool:
if resource_pool_name:
self.module.fail_json(msg='Unable to find resource_pool "%s"' % resource_pool_name)
else:
self.module.fail_json(msg='Unable to find resource pool, need esxi_hostname, resource_pool, or cluster')
return resource_pool
def deploy_vm(self):
# https://github.com/vmware/pyvmomi-community-samples/blob/master/samples/clone_vm.py
# https://www.vmware.com/support/developer/vc-sdk/visdk25pubs/ReferenceGuide/vim.vm.CloneSpec.html
# https://www.vmware.com/support/developer/vc-sdk/visdk25pubs/ReferenceGuide/vim.vm.ConfigSpec.html
# https://www.vmware.com/support/developer/vc-sdk/visdk41pubs/ApiReference/vim.vm.RelocateSpec.html
# FIXME:
# - static IPs
self.folder = self.params.get('folder', None)
if self.folder is None:
self.module.fail_json(msg="Folder is required parameter while deploying new virtual machine")
# Prepend / if it was missing from the folder path, also strip trailing slashes
if not self.folder.startswith('/'):
self.folder = '/%(folder)s' % self.params
self.folder = self.folder.rstrip('/')
datacenter = self.cache.find_obj(self.content, [vim.Datacenter], self.params['datacenter'])
if datacenter is None:
self.module.fail_json(msg='No datacenter named %(datacenter)s was found' % self.params)
dcpath = compile_folder_path_for_object(datacenter)
# Nested folder does not have trailing /
if not dcpath.endswith('/'):
dcpath += '/'
# Check for full path first in case it was already supplied
if (self.folder.startswith(dcpath + self.params['datacenter'] + '/vm') or
self.folder.startswith(dcpath + '/' + self.params['datacenter'] + '/vm')):
fullpath = self.folder
elif self.folder.startswith('/vm/') or self.folder == '/vm':
fullpath = "%s%s%s" % (dcpath, self.params['datacenter'], self.folder)
elif self.folder.startswith('/'):
fullpath = "%s%s/vm%s" % (dcpath, self.params['datacenter'], self.folder)
else:
fullpath = "%s%s/vm/%s" % (dcpath, self.params['datacenter'], self.folder)
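The four branches above normalize the user-supplied folder into a full inventory search path; extracted as a standalone sketch (assumes `dcpath` already ends with '/'):

```python
def build_inventory_path(dcpath, datacenter, folder):
    # Full path already supplied
    if folder.startswith(dcpath + datacenter + '/vm') or \
            folder.startswith(dcpath + '/' + datacenter + '/vm'):
        return folder
    # Path rooted at the datacenter's vm folder
    if folder.startswith('/vm/') or folder == '/vm':
        return "%s%s%s" % (dcpath, datacenter, folder)
    # Absolute path below /vm
    if folder.startswith('/'):
        return "%s%s/vm%s" % (dcpath, datacenter, folder)
    # Relative path below /vm
    return "%s%s/vm/%s" % (dcpath, datacenter, folder)

print(build_inventory_path('/', 'DC1', '/prod'))  # /DC1/vm/prod
print(build_inventory_path('/', 'DC1', 'prod'))   # /DC1/vm/prod
```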
f_obj = self.content.searchIndex.FindByInventoryPath(fullpath)
# abort if no strategy was successful
if f_obj is None:
# Add some debugging values in failure.
details = {
'datacenter': datacenter.name,
'datacenter_path': dcpath,
'folder': self.folder,
'full_search_path': fullpath,
}
self.module.fail_json(msg='No folder %s matched in the search path : %s' % (self.folder, fullpath),
details=details)
destfolder = f_obj
if self.params['template']:
vm_obj = self.get_vm_or_template(template_name=self.params['template'])
if vm_obj is None:
self.module.fail_json(msg="Could not find a template named %(template)s" % self.params)
else:
vm_obj = None
# always get a resource_pool
resource_pool = self.get_resource_pool()
# set the destination datastore for VM & disks
if self.params['datastore']:
# Give precedence to datastore value provided by user
# User may want to deploy VM to specific datastore.
datastore_name = self.params['datastore']
# Check if user has provided datastore cluster first
datastore_cluster = self.cache.find_obj(self.content, [vim.StoragePod], datastore_name)
if datastore_cluster:
# If user specified a datastore cluster, get the recommended datastore
datastore_name = self.get_recommended_datastore(datastore_cluster_obj=datastore_cluster)
# Check if get_recommended_datastore or user specified datastore exists or not
datastore = self.cache.find_obj(self.content, [vim.Datastore], datastore_name)
else:
(datastore, datastore_name) = self.select_datastore(vm_obj)
self.configspec = vim.vm.ConfigSpec()
self.configspec.deviceChange = []
# create the relocation spec
self.relospec = vim.vm.RelocateSpec()
self.relospec.deviceChange = []
self.configure_guestid(vm_obj=vm_obj, vm_creation=True)
self.configure_cpu_and_memory(vm_obj=vm_obj, vm_creation=True)
self.configure_hardware_params(vm_obj=vm_obj)
self.configure_resource_alloc_info(vm_obj=vm_obj)
self.configure_vapp_properties(vm_obj=vm_obj)
self.configure_disks(vm_obj=vm_obj)
self.configure_network(vm_obj=vm_obj)
self.configure_cdrom(vm_obj=vm_obj)
# Find if we need network customizations (find keys in the dictionary that require customization)
network_changes = False
for nw in self.params['networks']:
for key in nw:
# We don't need customizations for these keys
if key not in ('device_type', 'mac', 'name', 'vlan', 'type', 'start_connected'):
network_changes = True
break
if len(self.params['customization']) > 0 or network_changes or self.params.get('customization_spec') is not None:
self.customize_vm(vm_obj=vm_obj)
clonespec = None
clone_method = None
try:
if self.params['template']:
# Only select specific host when ESXi hostname is provided
if self.params['esxi_hostname']:
self.relospec.host = self.select_host()
self.relospec.datastore = datastore
# Convert disk present in template if is set
if self.params['convert']:
for device in vm_obj.config.hardware.device:
if isinstance(device, vim.vm.device.VirtualDisk):
disk_locator = vim.vm.RelocateSpec.DiskLocator()
disk_locator.diskBackingInfo = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
if self.params['convert'] in ['thin']:
disk_locator.diskBackingInfo.thinProvisioned = True
if self.params['convert'] in ['eagerzeroedthick']:
disk_locator.diskBackingInfo.eagerlyScrub = True
if self.params['convert'] in ['thick']:
disk_locator.diskBackingInfo.diskMode = "persistent"
disk_locator.diskId = device.key
disk_locator.datastore = datastore
self.relospec.disk.append(disk_locator)
# https://www.vmware.com/support/developer/vc-sdk/visdk41pubs/ApiReference/vim.vm.RelocateSpec.html
# > pool: For a clone operation from a template to a virtual machine, this argument is required.
self.relospec.pool = resource_pool
linked_clone = self.params.get('linked_clone')
snapshot_src = self.params.get('snapshot_src', None)
if linked_clone:
if snapshot_src is not None:
self.relospec.diskMoveType = vim.vm.RelocateSpec.DiskMoveOptions.createNewChildDiskBacking
else:
self.module.fail_json(msg="Parameters 'linked_clone' and 'snapshot_src' are"
" required together for linked clone operation.")
clonespec = vim.vm.CloneSpec(template=self.params['is_template'], location=self.relospec)
if self.customspec:
clonespec.customization = self.customspec
if snapshot_src is not None:
if vm_obj.snapshot is None:
self.module.fail_json(msg="No snapshots present for virtual machine or template [%(template)s]" % self.params)
snapshot = self.get_snapshots_by_name_recursively(snapshots=vm_obj.snapshot.rootSnapshotList,
snapname=snapshot_src)
if len(snapshot) != 1:
self.module.fail_json(msg='virtual machine "%(template)s" does not contain'
' snapshot named "%(snapshot_src)s"' % self.params)
clonespec.snapshot = snapshot[0].snapshot
clonespec.config = self.configspec
clone_method = 'Clone'
try:
task = vm_obj.Clone(folder=destfolder, name=self.params['name'], spec=clonespec)
except vim.fault.NoPermission as e:
self.module.fail_json(msg="Failed to clone virtual machine %s to folder %s "
"due to permission issue: %s" % (self.params['name'],
destfolder,
to_native(e.msg)))
self.change_detected = True
else:
# ConfigSpec requires a name for VM creation
self.configspec.name = self.params['name']
self.configspec.files = vim.vm.FileInfo(logDirectory=None,
snapshotDirectory=None,
suspendDirectory=None,
vmPathName="[" + datastore_name + "]")
clone_method = 'CreateVM_Task'
try:
task = destfolder.CreateVM_Task(config=self.configspec, pool=resource_pool)
except vmodl.fault.InvalidRequest as e:
self.module.fail_json(msg="Failed to create virtual machine due to invalid configuration "
"parameter %s" % to_native(e.msg))
except vim.fault.RestrictedVersion as e:
self.module.fail_json(msg="Failed to create virtual machine due to "
"product versioning restrictions: %s" % to_native(e.msg))
self.change_detected = True
self.wait_for_task(task)
except TypeError as e:
self.module.fail_json(msg="TypeError was returned, please ensure to give correct inputs. %s" % to_text(e))
if task.info.state == 'error':
# https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2021361
# https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2173
# provide these to the user for debugging
clonespec_json = serialize_spec(clonespec)
configspec_json = serialize_spec(self.configspec)
kwargs = {
'changed': self.change_applied,
'failed': True,
'msg': task.info.error.msg,
'clonespec': clonespec_json,
'configspec': configspec_json,
'clone_method': clone_method
}
return kwargs
else:
# set annotation
vm = task.info.result
if self.params['annotation']:
annotation_spec = vim.vm.ConfigSpec()
annotation_spec.annotation = str(self.params['annotation'])
task = vm.ReconfigVM_Task(annotation_spec)
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'annotation'}
if self.params['customvalues']:
self.customize_customvalues(vm_obj=vm)
if self.params['wait_for_ip_address'] or self.params['wait_for_customization'] or self.params['state'] in ['poweredon', 'restarted']:
set_vm_power_state(self.content, vm, 'poweredon', force=False)
if self.params['wait_for_ip_address']:
self.wait_for_vm_ip(vm)
if self.params['wait_for_customization']:
is_customization_ok = self.wait_for_customization(vm)
if not is_customization_ok:
vm_facts = self.gather_facts(vm)
return {'changed': self.change_applied, 'failed': True, 'instance': vm_facts, 'op': 'customization'}
vm_facts = self.gather_facts(vm)
return {'changed': self.change_applied, 'failed': False, 'instance': vm_facts}
def get_snapshots_by_name_recursively(self, snapshots, snapname):
snap_obj = []
for snapshot in snapshots:
if snapshot.name == snapname:
snap_obj.append(snapshot)
else:
snap_obj = snap_obj + self.get_snapshots_by_name_recursively(snapshot.childSnapshotList, snapname)
return snap_obj
def reconfigure_vm(self):
self.configspec = vim.vm.ConfigSpec()
self.configspec.deviceChange = []
# create the relocation spec
self.relospec = vim.vm.RelocateSpec()
self.relospec.deviceChange = []
self.configure_guestid(vm_obj=self.current_vm_obj)
self.configure_cpu_and_memory(vm_obj=self.current_vm_obj)
self.configure_hardware_params(vm_obj=self.current_vm_obj)
self.configure_disks(vm_obj=self.current_vm_obj)
self.configure_network(vm_obj=self.current_vm_obj)
self.configure_cdrom(vm_obj=self.current_vm_obj)
self.customize_customvalues(vm_obj=self.current_vm_obj)
self.configure_resource_alloc_info(vm_obj=self.current_vm_obj)
self.configure_vapp_properties(vm_obj=self.current_vm_obj)
if self.params['annotation'] and self.current_vm_obj.config.annotation != self.params['annotation']:
self.configspec.annotation = str(self.params['annotation'])
self.change_detected = True
if self.params['resource_pool']:
self.relospec.pool = self.get_resource_pool()
if self.relospec.pool != self.current_vm_obj.resourcePool:
task = self.current_vm_obj.RelocateVM_Task(spec=self.relospec)
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'relocate'}
# Only send VMware task if we see a modification
if self.change_detected:
task = None
try:
task = self.current_vm_obj.ReconfigVM_Task(spec=self.configspec)
except vim.fault.RestrictedVersion as e:
self.module.fail_json(msg="Failed to reconfigure virtual machine due to"
" product versioning restrictions: %s" % to_native(e.msg))
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'reconfig'}
# Rename VM
if self.params['uuid'] and self.params['name'] and self.params['name'] != self.current_vm_obj.config.name:
task = self.current_vm_obj.Rename_Task(self.params['name'])
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'rename'}
# Mark VM as Template
if self.params['is_template'] and not self.current_vm_obj.config.template:
try:
self.current_vm_obj.MarkAsTemplate()
self.change_applied = True
except vmodl.fault.NotSupported as e:
self.module.fail_json(msg="Failed to mark virtual machine [%s] "
"as template: %s" % (self.params['name'], e.msg))
# Mark Template as VM
elif not self.params['is_template'] and self.current_vm_obj.config.template:
resource_pool = self.get_resource_pool()
kwargs = dict(pool=resource_pool)
if self.params.get('esxi_hostname', None):
host_system_obj = self.select_host()
kwargs.update(host=host_system_obj)
try:
self.current_vm_obj.MarkAsVirtualMachine(**kwargs)
self.change_applied = True
except vim.fault.InvalidState as invalid_state:
self.module.fail_json(msg="Virtual machine is not marked"
" as template : %s" % to_native(invalid_state.msg))
except vim.fault.InvalidDatastore as invalid_ds:
self.module.fail_json(msg="Converting template to virtual machine"
" operation cannot be performed on the"
" target datastores: %s" % to_native(invalid_ds.msg))
except vim.fault.CannotAccessVmComponent as cannot_access:
self.module.fail_json(msg="Failed to convert template to virtual machine"
" as operation was unable to access virtual machine"
" component: %s" % to_native(cannot_access.msg))
except vmodl.fault.InvalidArgument as invalid_argument:
self.module.fail_json(msg="Failed to convert template to virtual machine"
" due to : %s" % to_native(invalid_argument.msg))
except Exception as generic_exc:
self.module.fail_json(msg="Failed to convert template to virtual machine"
" due to generic error : %s" % to_native(generic_exc))
# Automatically update VMware UUID when converting template to VM.
# This avoids an interactive prompt during VM startup.
uuid_action = [x for x in self.current_vm_obj.config.extraConfig if x.key == "uuid.action"]
if not uuid_action:
uuid_action_opt = vim.option.OptionValue()
uuid_action_opt.key = "uuid.action"
uuid_action_opt.value = "create"
self.configspec.extraConfig.append(uuid_action_opt)
self.change_detected = True
# add customize existing VM after VM re-configure
if 'existing_vm' in self.params['customization'] and self.params['customization']['existing_vm']:
if self.current_vm_obj.config.template:
self.module.fail_json(msg="VM is a template; guest OS customization is not supported.")
if self.current_vm_obj.runtime.powerState != vim.VirtualMachinePowerState.poweredOff:
self.module.fail_json(msg="VM is not in powered off state; cannot do guest OS customization.")
cus_result = self.customize_exist_vm()
if cus_result['failed']:
return cus_result
vm_facts = self.gather_facts(self.current_vm_obj)
return {'changed': self.change_applied, 'failed': False, 'instance': vm_facts}
def customize_exist_vm(self):
task = None
# Find if we need network customizations (find keys in the dictionary that require customization)
network_changes = False
for nw in self.params['networks']:
for key in nw:
# We don't need customizations for these keys
if key not in ('device_type', 'mac', 'name', 'vlan', 'type', 'start_connected'):
network_changes = True
break
if len(self.params['customization']) > 1 or network_changes or self.params.get('customization_spec'):
self.customize_vm(vm_obj=self.current_vm_obj)
try:
task = self.current_vm_obj.CustomizeVM_Task(self.customspec)
except vim.fault.CustomizationFault as e:
self.module.fail_json(msg="Failed to customize virtual machine due to CustomizationFault: %s" % to_native(e.msg))
except vim.fault.RuntimeFault as e:
self.module.fail_json(msg="Failed to customize virtual machine due to RuntimeFault: %s" % to_native(e.msg))
except Exception as e:
self.module.fail_json(msg="Failed to customize virtual machine due to fault: %s" % to_native(e))
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'customize_exist'}
if self.params['wait_for_customization']:
set_vm_power_state(self.content, self.current_vm_obj, 'poweredon', force=False)
is_customization_ok = self.wait_for_customization(self.current_vm_obj)
if not is_customization_ok:
return {'changed': self.change_applied, 'failed': True, 'op': 'wait_for_customize_exist'}
return {'changed': self.change_applied, 'failed': False}
def wait_for_task(self, task, poll_interval=1):
"""
Wait for a VMware task to complete. Terminal states are 'error' and 'success'.
Inputs:
- task: the task to wait for
- poll_interval: polling interval to check the task, in seconds
Modifies:
- self.change_applied
"""
# https://www.vmware.com/support/developer/vc-sdk/visdk25pubs/ReferenceGuide/vim.Task.html
# https://www.vmware.com/support/developer/vc-sdk/visdk25pubs/ReferenceGuide/vim.TaskInfo.html
# https://github.com/virtdevninja/pyvmomi-community-samples/blob/master/samples/tools/tasks.py
while task.info.state not in ['error', 'success']:
time.sleep(poll_interval)
self.change_applied = self.change_applied or task.info.state == 'success'
def wait_for_vm_ip(self, vm, poll=100, sleep=5):
ips = None
facts = {}
thispoll = 0
while not ips and thispoll <= poll:
newvm = self.get_vm()
facts = self.gather_facts(newvm)
if facts['ipv4'] or facts['ipv6']:
ips = True
else:
time.sleep(sleep)
thispoll += 1
return facts
def get_vm_events(self, vm, eventTypeIdList):
byEntity = vim.event.EventFilterSpec.ByEntity(entity=vm, recursion="self")
filterSpec = vim.event.EventFilterSpec(entity=byEntity, eventTypeId=eventTypeIdList)
eventManager = self.content.eventManager
return eventManager.QueryEvent(filterSpec)
def wait_for_customization(self, vm, poll=10000, sleep=10):
thispoll = 0
while thispoll <= poll:
eventStarted = self.get_vm_events(vm, ['CustomizationStartedEvent'])
if len(eventStarted):
thispoll = 0
while thispoll <= poll:
eventsFinishedResult = self.get_vm_events(vm, ['CustomizationSucceeded', 'CustomizationFailed'])
if len(eventsFinishedResult):
if not isinstance(eventsFinishedResult[0], vim.event.CustomizationSucceeded):
self.module.fail_json(msg='Customization failed with error {0}:\n{1}'.format(
eventsFinishedResult[0]._wsdlName, eventsFinishedResult[0].fullFormattedMessage))
return False
break
else:
time.sleep(sleep)
thispoll += 1
return True
else:
time.sleep(sleep)
thispoll += 1
self.module.fail_json(msg='Waiting for customization timed out.')
return False
def main():
argument_spec = vmware_argument_spec()
argument_spec.update(
state=dict(type='str', default='present',
choices=['absent', 'poweredoff', 'poweredon', 'present', 'rebootguest', 'restarted', 'shutdownguest', 'suspended']),
template=dict(type='str', aliases=['template_src']),
is_template=dict(type='bool', default=False),
annotation=dict(type='str', aliases=['notes']),
customvalues=dict(type='list', default=[]),
name=dict(type='str'),
name_match=dict(type='str', choices=['first', 'last'], default='first'),
uuid=dict(type='str'),
use_instance_uuid=dict(type='bool', default=False),
folder=dict(type='str'),
guest_id=dict(type='str'),
disk=dict(type='list', default=[]),
cdrom=dict(type='dict', default={}),
hardware=dict(type='dict', default={}),
force=dict(type='bool', default=False),
datacenter=dict(type='str', default='ha-datacenter'),
esxi_hostname=dict(type='str'),
cluster=dict(type='str'),
wait_for_ip_address=dict(type='bool', default=False),
state_change_timeout=dict(type='int', default=0),
snapshot_src=dict(type='str'),
linked_clone=dict(type='bool', default=False),
networks=dict(type='list', default=[]),
resource_pool=dict(type='str'),
customization=dict(type='dict', default={}, no_log=True),
customization_spec=dict(type='str', default=None),
wait_for_customization=dict(type='bool', default=False),
vapp_properties=dict(type='list', default=[]),
datastore=dict(type='str'),
convert=dict(type='str', choices=['thin', 'thick', 'eagerzeroedthick']),
)
module = AnsibleModule(argument_spec=argument_spec,
supports_check_mode=True,
mutually_exclusive=[
['cluster', 'esxi_hostname'],
],
required_one_of=[
['name', 'uuid'],
],
)
result = {'failed': False, 'changed': False}
pyv = PyVmomiHelper(module)
# Check if the VM exists before continuing
vm = pyv.get_vm()
# VM already exists
if vm:
if module.params['state'] == 'absent':
# destroy it
if module.check_mode:
result.update(
vm_name=vm.name,
changed=True,
current_powerstate=vm.summary.runtime.powerState.lower(),
desired_operation='remove_vm',
)
module.exit_json(**result)
if module.params['force']:
# has to be poweredoff first
set_vm_power_state(pyv.content, vm, 'poweredoff', module.params['force'])
result = pyv.remove_vm(vm)
elif module.params['state'] == 'present':
if module.check_mode:
result.update(
vm_name=vm.name,
changed=True,
desired_operation='reconfigure_vm',
)
module.exit_json(**result)
result = pyv.reconfigure_vm()
elif module.params['state'] in ['poweredon', 'poweredoff', 'restarted', 'suspended', 'shutdownguest', 'rebootguest']:
if module.check_mode:
result.update(
vm_name=vm.name,
changed=True,
current_powerstate=vm.summary.runtime.powerState.lower(),
desired_operation='set_vm_power_state',
)
module.exit_json(**result)
# set powerstate
tmp_result = set_vm_power_state(pyv.content, vm, module.params['state'], module.params['force'], module.params['state_change_timeout'])
if tmp_result['changed']:
result["changed"] = True
if module.params['state'] in ['poweredon', 'restarted', 'rebootguest'] and module.params['wait_for_ip_address']:
wait_result = wait_for_vm_ip(pyv.content, vm)
if not wait_result:
module.fail_json(msg='Waiting for IP address timed out')
tmp_result['instance'] = wait_result
if not tmp_result["failed"]:
result["failed"] = False
result['instance'] = tmp_result['instance']
if tmp_result["failed"]:
result["failed"] = True
result["msg"] = tmp_result["msg"]
else:
# This should not happen
raise AssertionError()
# VM doesn't exist
else:
if module.params['state'] in ['poweredon', 'poweredoff', 'present', 'restarted', 'suspended']:
if module.check_mode:
result.update(
changed=True,
desired_operation='deploy_vm',
)
module.exit_json(**result)
result = pyv.deploy_vm()
if result['failed']:
module.fail_json(msg='Failed to create a virtual machine: %s' % result['msg'])
if result['failed']:
module.fail_json(**result)
else:
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,194 |
[proxmox_kvm] Error in get_vmid when called 'too early' after the creation of a VM
|
##### SUMMARY
For a fraction of a second after the creation of a new VM, the VM name is not yet defined in the corresponding dictionary returned by proxmoxer when querying the list of VMs (proxmox.cluster.resources.get(type='vm')).
Accessing this non-existing key as a dictionary element, instead of using the dictionary's get method, raises an error.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
proxmox_kvm
##### ANSIBLE VERSION
```
ansible 2.8.0
config file = /home/pguermo/Projects/superluminal-pdp/etc/ansible.cfg
configured module search path = ['/home/pguermo/Projects/superluminal-pdp/library']
ansible python module location = /home/pguermo/.virtualenvs/superluminal-pdp/lib/python3.5/site-packages/ansible
executable location = /home/pguermo/.virtualenvs/superluminal-pdp/bin/ansible
python version = 3.5.3 (default, Sep 27 2018, 17:25:39) [GCC 6.3.0 20170516]
```
##### STEPS TO REPRODUCE
Create several VMs on proxmox in a row.
```
proxmox_kvm:
api_host: "{{ pve_host_fqdn }}"
api_user: "{{ pve_user }}"
api_password: "{{ pve_password }}"
validate_certs: "{{ validate_certs }}"
node: "{{ pve_node }}"
name: "{{ ansible_host }}"
pool: "{{ pool_id }}"
vmid: "{{ vmid }}"
description: "{{ description | default(vm.description) }}"
net: "{{ net | default(vm.net) }}"
balloon: "{{ balloon | default(vm_defaults.balloon) }}"
cores: "{{ cores | default(vm_defaults.cores) }}"
ide: "{{ ide | default(vm_defaults.ide) }}"
memory: "{{ memory | default(vm_defaults.memory) }}"
ostype: "{{ ostype | default(vm_defaults.ostype) }}"
scsi: "{{ scsi | default(vm_defaults.scsi) }}"
state: present
delegate_to: localhost
```
Some VMs will not be created.
If you add traces in the function get_vmid(proxmox, name) of ansible/modules/cloud/misc/proxmox_kvm.py, you'll notice that for some new VMs in the list, the dictionary doesn't have a 'name' entry, so `vm['name']` raises a KeyError.
Use `vm.get('name')` instead: it returns None when there is no name entry, so the check `if vm.get('name') == name` can be evaluated safely.
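A minimal sketch of the failure mode and the suggested fix (the resource rows and vmid values below are hypothetical, not taken from a real cluster):

```python
def get_vmid(resources, name):
    # Tolerate rows that do not have a 'name' entry yet:
    # dict.get() returns None instead of raising KeyError.
    return [vm['vmid'] for vm in resources if vm.get('name') == name]

# Hypothetical rows as returned right after a VM creation;
# vmid 101 was just created and has no 'name' key yet.
rows = [{'vmid': 100, 'name': 'web01'}, {'vmid': 101}]

print(get_vmid(rows, 'web01'))  # [100]
```

With direct indexing (`vm['name']`), the incomplete row for vmid 101 would raise a KeyError and abort the lookup for every VM in the list.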
|
https://github.com/ansible/ansible/issues/58194
|
https://github.com/ansible/ansible/pull/58196
|
21863d48f3bb8c8b470fb3a132c9a3a7db470561
|
8923d1353765e85ad4660f8fcc81ee4915a94f91
| 2019-06-21T13:30:25Z |
python
| 2019-08-13T17:22:17Z |
changelogs/fragments/58194-vm-name-item-not-yet-present-at-creation-time.yaml
| |
lib/ansible/modules/cloud/misc/proxmox_kvm.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2016, Abdoul Bah (@helldorado) <bahabdoul at gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: proxmox_kvm
short_description: Management of Qemu(KVM) Virtual Machines in Proxmox VE cluster.
description:
- Allows you to create/delete/stop Qemu(KVM) Virtual Machines in Proxmox VE cluster.
version_added: "2.3"
author: "Abdoul Bah (@helldorado) <bahabdoul at gmail.com>"
options:
acpi:
description:
- Specify if ACPI should be enabled/disabled.
type: bool
default: 'yes'
agent:
description:
- Specify if the QEMU Guest Agent should be enabled/disabled.
type: bool
args:
description:
- Pass arbitrary arguments to kvm.
- This option is for experts only!
default: "-serial unix:/var/run/qemu-server/VMID.serial,server,nowait"
api_host:
description:
- Specify the target host of the Proxmox VE cluster.
required: true
api_user:
description:
- Specify the user to authenticate with.
required: true
api_password:
description:
- Specify the password to authenticate with.
- You can use C(PROXMOX_PASSWORD) environment variable.
autostart:
description:
- Specify if the VM should be automatically restarted after crash (currently ignored in PVE API).
type: bool
default: 'no'
balloon:
description:
- Specify the amount of RAM for the VM in MB.
- Using zero disables the balloon driver.
default: 0
bios:
description:
- Specify the BIOS implementation.
choices: ['seabios', 'ovmf']
boot:
description:
- Specify the boot order -> boot on floppy C(a), hard disk C(c), CD-ROM C(d), or network C(n).
- You can combine to set order.
default: cnd
bootdisk:
description:
- Enable booting from specified disk. C((ide|sata|scsi|virtio)\d+)
clone:
description:
- Name of VM to be cloned. If C(vmid) is set, C(clone) can take an arbitrary value but is required for initiating the clone.
cores:
description:
- Specify number of cores per socket.
default: 1
cpu:
description:
- Specify emulated CPU type.
default: kvm64
cpulimit:
description:
- Specify if CPU usage will be limited. Value 0 indicates no CPU limit.
- If the computer has 2 CPUs, it has a total of '2' CPU time.
cpuunits:
description:
- Specify CPU weight for a VM.
- You can disable fair-scheduler configuration by setting this to 0
default: 1000
delete:
description:
- Specify a list of settings you want to delete.
description:
description:
- Specify the description for the VM. Only used on the configuration web interface.
- This is saved as comment inside the configuration file.
digest:
description:
- Prevent changes if the current configuration file has a different SHA1 digest.
- This can be used to prevent concurrent modifications.
force:
description:
- Allow to force stop VM.
- Can be used only with states C(stopped), C(restarted).
type: bool
format:
description:
- Target drive's backing file's data format.
- Used only with clone
default: qcow2
choices: [ "cloop", "cow", "qcow", "qcow2", "qed", "raw", "vmdk" ]
freeze:
description:
- Specify if PVE should freeze CPU at startup (use 'c' monitor command to start execution).
type: bool
full:
description:
- Create a full copy of all disks. This is always done when you clone a normal VM.
- For VM templates, we try to create a linked clone by default.
- Used only with clone
type: bool
default: 'yes'
hostpci:
description:
- Specify a hash/dictionary of map host pci devices into guest. C(hostpci='{"key":"value", "key":"value"}').
- Keys allowed are - C(hostpci[n]) where 0 ≤ n ≤ N.
- Values allowed are - C("host="HOSTPCIID[;HOSTPCIID2...]",pcie="1|0",rombar="1|0",x-vga="1|0"").
- The C(host) parameter is Host PCI device pass through. HOSTPCIID syntax is C(bus:dev.func) (hexadecimal numbers).
- C(pcie=boolean) I(default=0) Choose the PCI-express bus (needs the q35 machine model).
- C(rombar=boolean) I(default=1) Specify whether or not the device's ROM will be visible in the guest's memory map.
- C(x-vga=boolean) I(default=0) Enable vfio-vga device support.
- /!\ This option allows direct access to host hardware. So it is no longer possible to migrate such machines - use with special care.
hotplug:
description:
- Selectively enable hotplug features.
- This is a comma separated list of hotplug features C('network', 'disk', 'cpu', 'memory' and 'usb').
- Value 0 disables hotplug completely and value 1 is an alias for the default C('network,disk,usb').
hugepages:
description:
- Enable/disable hugepages memory.
choices: ['any', '2', '1024']
ide:
description:
- A hash/dictionary of volume used as IDE hard disk or CD-ROM. C(ide='{"key":"value", "key":"value"}').
- Keys allowed are - C(ide[n]) where 0 ≤ n ≤ 3.
- Values allowed are - C("storage:size,format=value").
- C(storage) is the storage identifier where to create the disk.
- C(size) is the size of the disk in GB.
- C(format) is the drive's backing file's data format. C(qcow2|raw|subvol).
keyboard:
description:
- Sets the keyboard layout for VNC server.
kvm:
description:
- Enable/disable KVM hardware virtualization.
type: bool
default: 'yes'
localtime:
description:
- Sets the real time clock to local time.
- This is enabled by default if ostype indicates a Microsoft OS.
type: bool
lock:
description:
- Lock/unlock the VM.
choices: ['migrate', 'backup', 'snapshot', 'rollback']
machine:
description:
- Specifies the Qemu machine type.
- type => C((pc|pc(-i440fx)?-\d+\.\d+(\.pxe)?|q35|pc-q35-\d+\.\d+(\.pxe)?))
memory:
description:
- Memory size in MB for instance.
default: 512
migrate_downtime:
description:
- Sets maximum tolerated downtime (in seconds) for migrations.
migrate_speed:
description:
- Sets maximum speed (in MB/s) for migrations.
- A value of 0 is no limit.
name:
description:
- Specifies the VM name. Only used on the configuration web interface.
- Required only for C(state=present).
net:
description:
- A hash/dictionary of network interfaces for the VM. C(net='{"key":"value", "key":"value"}').
- Keys allowed are - C(net[n]) where 0 ≤ n ≤ N.
- Values allowed are - C("model="XX:XX:XX:XX:XX:XX",bridge="value",rate="value",tag="value",firewall="1|0",trunks="vlanid"").
- Model is one of C(e1000 e1000-82540em e1000-82544gc e1000-82545em i82551 i82557b i82559er ne2k_isa ne2k_pci pcnet rtl8139 virtio vmxnet3).
- C(XX:XX:XX:XX:XX:XX) should be a unique MAC address. This is automatically generated if not specified.
- The C(bridge) parameter can be used to automatically add the interface to a bridge device. The Proxmox VE standard bridge is called 'vmbr0'.
- Option C(rate) is used to limit traffic bandwidth from and to this interface. It is specified as floating point number, unit is 'Megabytes per second'.
- If you specify no bridge, we create a kvm 'user' (NATed) network device, which provides DHCP and DNS services.
newid:
description:
- VMID for the clone. Used only with clone.
- If newid is not set, the next available VM ID will be fetched from ProxmoxAPI.
node:
description:
- Proxmox VE node, where the new VM will be created.
- Only required for C(state=present).
- For other states, it will be autodiscovered.
numa:
description:
- A hash/dictionaries of NUMA topology. C(numa='{"key":"value", "key":"value"}').
- Keys allowed are - C(numa[n]) where 0 ≤ n ≤ N.
- Values allowed are - C("cpu="<id[-id];...>",hostnodes="<id[-id];...>",memory="number",policy="(bind|interleave|preferred)"").
- C(cpus) CPUs accessing this NUMA node.
- C(hostnodes) Host NUMA nodes to use.
- C(memory) Amount of memory this NUMA node provides.
- C(policy) NUMA allocation policy.
onboot:
description:
- Specifies whether a VM will be started during system bootup.
type: bool
default: 'yes'
ostype:
description:
- Specifies guest operating system. This is used to enable special optimization/features for specific operating systems.
- The l26 is Linux 2.6/3.X Kernel.
choices: ['other', 'wxp', 'w2k', 'w2k3', 'w2k8', 'wvista', 'win7', 'win8', 'l24', 'l26', 'solaris']
default: l26
parallel:
description:
- A hash/dictionary of map host parallel devices. C(parallel='{"key":"value", "key":"value"}').
- Keys allowed are - (parallel[n]) where 0 ≤ n ≤ 2.
- Values allowed are - C("/dev/parport\d+|/dev/usb/lp\d+").
pool:
description:
- Add the new VM to the specified pool.
protection:
description:
- Enable/disable the protection flag of the VM. This will enable/disable the remove VM and remove disk operations.
type: bool
reboot:
description:
- Allow reboot. If set to C(yes), the VM exits on reboot.
type: bool
revert:
description:
- Revert a pending change.
sata:
description:
- A hash/dictionary of volume used as sata hard disk or CD-ROM. C(sata='{"key":"value", "key":"value"}').
- Keys allowed are - C(sata[n]) where 0 ≤ n ≤ 5.
- Values allowed are - C("storage:size,format=value").
- C(storage) is the storage identifier where to create the disk.
- C(size) is the size of the disk in GB.
- C(format) is the drive's backing file's data format. C(qcow2|raw|subvol).
scsi:
description:
- A hash/dictionary of volume used as SCSI hard disk or CD-ROM. C(scsi='{"key":"value", "key":"value"}').
- Keys allowed are - C(scsi[n]) where 0 ≤ n ≤ 13.
- Values allowed are - C("storage:size,format=value").
- C(storage) is the storage identifier where to create the disk.
- C(size) is the size of the disk in GB.
- C(format) is the drive's backing file's data format. C(qcow2|raw|subvol).
scsihw:
description:
- Specifies the SCSI controller model.
choices: ['lsi', 'lsi53c810', 'virtio-scsi-pci', 'virtio-scsi-single', 'megasas', 'pvscsi']
serial:
description:
- A hash/dictionary of serial device to create inside the VM. C('{"key":"value", "key":"value"}').
- Keys allowed are - serial[n](str; required) where 0 ≤ n ≤ 3.
- Values allowed are - C((/dev/.+|socket)).
- /!\ If you pass through a host serial device, it is no longer possible to migrate such machines - use with special care.
shares:
description:
- Sets amount of memory shares for auto-ballooning. (0 - 50000).
- The larger the number is, the more memory this VM gets.
- The number is relative to weights of all other running VMs.
- Using 0 disables auto-ballooning, this means no limit.
skiplock:
description:
- Ignore locks
- Only root is allowed to use this option.
smbios:
description:
- Specifies SMBIOS type 1 fields.
snapname:
description:
- The name of the snapshot. Used only with clone.
sockets:
description:
- Sets the number of CPU sockets. (1 - N).
default: 1
startdate:
description:
- Sets the initial date of the real time clock.
- Valid format for date are C('now') or C('2016-09-25T16:01:21') or C('2016-09-25').
startup:
description:
- Startup and shutdown behavior. C([[order=]\d+] [,up=\d+] [,down=\d+]).
- Order is a non-negative number defining the general startup order.
- Shutdown in done with reverse ordering.
state:
description:
- Indicates desired state of the instance.
- If C(current), the current state of the VM will be fetched. You can access it with C(results.status)
choices: ['present', 'started', 'absent', 'stopped', 'restarted','current']
default: present
storage:
description:
- Target storage for full clone.
tablet:
description:
- Enables/disables the USB tablet device.
type: bool
default: 'no'
target:
description:
- Target node. Only allowed if the original VM is on shared storage.
- Used only with clone
tdf:
description:
- Enables/disables time drift fix.
type: bool
template:
description:
- Enables/disables the template.
type: bool
default: 'no'
timeout:
description:
- Timeout for operations.
default: 30
update:
description:
- If C(yes), the VM will be updated with new values.
- Because of how the API operates and for security reasons, updating the following parameters is disabled
- C(net, virtio, ide, sata, scsi). For example, updating C(net) updates the MAC address and C(virtio) always creates a new disk...
type: bool
default: 'no'
validate_certs:
description:
- If C(no), SSL certificates will not be validated. This should only be used on personally controlled sites using self-signed certificates.
type: bool
default: 'no'
vcpus:
description:
- Sets number of hotplugged vcpus.
vga:
description:
- Select VGA type. If you want to use high resolution modes (>= 1280x1024x16) then you should use option 'std' or 'vmware'.
choices: ['std', 'cirrus', 'vmware', 'qxl', 'serial0', 'serial1', 'serial2', 'serial3', 'qxl2', 'qxl3', 'qxl4']
default: std
virtio:
description:
- A hash/dictionary of volume used as VIRTIO hard disk. C(virtio='{"key":"value", "key":"value"}').
- Keys allowed are - C(virtio[n]) where 0 ≤ n ≤ 15.
- Values allowed are - C("storage:size,format=value").
- C(storage) is the storage identifier where to create the disk.
- C(size) is the size of the disk in GB.
- C(format) is the drive's backing file's data format. C(qcow2|raw|subvol).
vmid:
description:
- Specifies the VM ID. Instead use I(name) parameter.
- If vmid is not set, the next available VM ID will be fetched from ProxmoxAPI.
watchdog:
description:
- Creates a virtual hardware watchdog device.
requirements: [ "proxmoxer", "requests" ]
'''
EXAMPLES = '''
# Create new VM with minimal options
- proxmox_kvm:
api_user : root@pam
api_password: secret
api_host : helldorado
name : spynal
node : sabrewulf
# Create new VM with minimal options and given vmid
- proxmox_kvm:
api_user : root@pam
api_password: secret
api_host : helldorado
name : spynal
node : sabrewulf
vmid : 100
# Create new VM with two network interface options.
- proxmox_kvm:
api_user : root@pam
api_password: secret
api_host : helldorado
name : spynal
node : sabrewulf
net : '{"net0":"virtio,bridge=vmbr1,rate=200", "net1":"e1000,bridge=vmbr2,"}'
# Create new VM with one network interface, three virtio hard disks, 4 cores, and 2 vcpus.
- proxmox_kvm:
api_user : root@pam
api_password: secret
api_host : helldorado
name : spynal
node : sabrewulf
net : '{"net0":"virtio,bridge=vmbr1,rate=200"}'
virtio : '{"virtio0":"VMs_LVM:10", "virtio1":"VMs:2,format=qcow2", "virtio2":"VMs:5,format=raw"}'
cores : 4
vcpus : 2
# Clone VM with only source VM name
- proxmox_kvm:
api_user : root@pam
api_password: secret
api_host : helldorado
clone : spynal # The VM source
name : zavala # The target VM name
node : sabrewulf
storage : VMs
format : qcow2
timeout : 500 # Note: The task can take a while. Adapt
# Clone VM with source vmid and target newid and raw format
- proxmox_kvm:
api_user : root@pam
api_password: secret
api_host : helldorado
clone : arbitrary_name
vmid : 108
newid : 152
name : zavala # The target VM name
node : sabrewulf
storage : LVM_STO
format : raw
timeout : 300 # Note: The task can take a while. Adapt
# Create new VM and lock it for snapshot.
- proxmox_kvm:
api_user : root@pam
api_password: secret
api_host : helldorado
name : spynal
node : sabrewulf
lock : snapshot
# Create new VM and set protection to disable the remove VM and remove disk operations
- proxmox_kvm:
api_user : root@pam
api_password: secret
api_host : helldorado
name : spynal
node : sabrewulf
protection : yes
# Start VM
- proxmox_kvm:
api_user : root@pam
api_password: secret
api_host : helldorado
name : spynal
node : sabrewulf
state : started
# Stop VM
- proxmox_kvm:
api_user : root@pam
api_password: secret
api_host : helldorado
name : spynal
node : sabrewulf
state : stopped
# Stop VM with force
- proxmox_kvm:
api_user : root@pam
api_password: secret
api_host : helldorado
name : spynal
node : sabrewulf
state : stopped
force : yes
# Restart VM
- proxmox_kvm:
api_user : root@pam
api_password: secret
api_host : helldorado
name : spynal
node : sabrewulf
state : restarted
# Remove VM
- proxmox_kvm:
api_user : root@pam
api_password: secret
api_host : helldorado
name : spynal
node : sabrewulf
state : absent
# Get VM current state
- proxmox_kvm:
api_user : root@pam
api_password: secret
api_host : helldorado
name : spynal
node : sabrewulf
state : current
# Update VM configuration
- proxmox_kvm:
api_user : root@pam
api_password: secret
api_host : helldorado
name : spynal
node : sabrewulf
cores : 8
memory : 16384
update : yes
# Delete QEMU parameters
- proxmox_kvm:
api_user : root@pam
api_password: secret
api_host : helldorado
name : spynal
node : sabrewulf
delete : 'args,template,cpulimit'
# Revert a pending change
- proxmox_kvm:
api_user : root@pam
api_password: secret
api_host : helldorado
name : spynal
node : sabrewulf
revert : 'template,cpulimit'
'''
RETURN = '''
devices:
description: The list of devices created or used.
returned: success
type: dict
sample: '
{
"ide0": "VMS_LVM:vm-115-disk-1",
"ide1": "VMs:115/vm-115-disk-3.raw",
"virtio0": "VMS_LVM:vm-115-disk-2",
"virtio1": "VMs:115/vm-115-disk-1.qcow2",
"virtio2": "VMs:115/vm-115-disk-2.raw"
}'
mac:
description: List of mac address created and net[n] attached. Useful when you want to use provision systems like Foreman via PXE.
returned: success
type: dict
sample: '
{
"net0": "3E:6E:97:D2:31:9F",
"net1": "B6:A1:FC:EF:78:A4"
}'
vmid:
description: The VM vmid.
returned: success
type: int
sample: 115
status:
description:
- The current virtual machine status.
- Returned only when C(state=current)
returned: success
type: dict
sample: '{
"changed": false,
"msg": "VM kropta with vmid = 110 is running",
"status": "running"
}'
'''
import os
import re
import time
import traceback
try:
from proxmoxer import ProxmoxAPI
HAS_PROXMOXER = True
except ImportError:
HAS_PROXMOXER = False
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_native
VZ_TYPE = 'qemu'
def get_nextvmid(module, proxmox):
try:
vmid = proxmox.cluster.nextid.get()
return vmid
except Exception as e:
module.fail_json(msg="Unable to get next vmid. Failed with exception: %s" % to_native(e),
exception=traceback.format_exc())
def get_vmid(proxmox, name):
return [vm['vmid'] for vm in proxmox.cluster.resources.get(type='vm') if vm.get('name') == name]
def get_vm(proxmox, vmid):
return [vm for vm in proxmox.cluster.resources.get(type='vm') if vm['vmid'] == int(vmid)]
def node_check(proxmox, node):
return [True for nd in proxmox.nodes.get() if nd['node'] == node]
def get_vminfo(module, proxmox, node, vmid, **kwargs):
global results
results = {}
mac = {}
devices = {}
try:
vm = proxmox.nodes(node).qemu(vmid).config.get()
except Exception as e:
module.fail_json(msg='Getting information for VM with vmid = %s failed with exception: %s' % (vmid, e))
# Sanitize kwargs. Remove not defined args and ensure True and False converted to int.
kwargs = dict((k, v) for k, v in kwargs.items() if v is not None)
# Convert all dict in kwargs to elements. For hostpci[n], ide[n], net[n], numa[n], parallel[n], sata[n], scsi[n], serial[n], virtio[n]
for k in list(kwargs.keys()):
if isinstance(kwargs[k], dict):
kwargs.update(kwargs[k])
del kwargs[k]
# Split information by type
for k, v in kwargs.items():
if re.match(r'net[0-9]', k) is not None:
interface = k
k = vm[k]
k = re.search('=(.*?),', k).group(1)
mac[interface] = k
if (re.match(r'virtio[0-9]', k) is not None or
re.match(r'ide[0-9]', k) is not None or
re.match(r'scsi[0-9]', k) is not None or
re.match(r'sata[0-9]', k) is not None):
device = k
k = vm[k]
k = re.search('(.*?),', k).group(1)
devices[device] = k
results['mac'] = mac
results['devices'] = devices
results['vmid'] = int(vmid)
def settings(module, proxmox, vmid, node, name, timeout, **kwargs):
proxmox_node = proxmox.nodes(node)
# Sanitize kwargs. Remove not defined args and ensure True and False converted to int.
kwargs = dict((k, v) for k, v in kwargs.items() if v is not None)
if getattr(proxmox_node, VZ_TYPE)(vmid).config.set(**kwargs) is None:
return True
else:
return False
def create_vm(module, proxmox, vmid, newid, node, name, memory, cpu, cores, sockets, timeout, update, **kwargs):
# Available only in PVE 4
only_v4 = ['force', 'protection', 'skiplock']
# valid clone parameters
valid_clone_params = ['format', 'full', 'pool', 'snapname', 'storage', 'target']
clone_params = {}
# Default args for vm. Note: -args option is for experts only. It allows you to pass arbitrary arguments to kvm.
vm_args = "-serial unix:/var/run/qemu-server/{0}.serial,server,nowait".format(vmid)
proxmox_node = proxmox.nodes(node)
# Sanitize kwargs. Remove not defined args and ensure True and False converted to int.
kwargs = dict((k, v) for k, v in kwargs.items() if v is not None)
kwargs.update(dict([k, int(v)] for k, v in kwargs.items() if isinstance(v, bool)))
# The features work only on PVE 4
if PVE_MAJOR_VERSION < 4:
for p in only_v4:
if p in kwargs:
del kwargs[p]
# If update, don't update disk (virtio, ide, sata, scsi) and network interface
if update:
if 'virtio' in kwargs:
del kwargs['virtio']
if 'sata' in kwargs:
del kwargs['sata']
if 'scsi' in kwargs:
del kwargs['scsi']
if 'ide' in kwargs:
del kwargs['ide']
if 'net' in kwargs:
del kwargs['net']
# Convert all dict in kwargs to elements. For hostpci[n], ide[n], net[n], numa[n], parallel[n], sata[n], scsi[n], serial[n], virtio[n]
for k in list(kwargs.keys()):
if isinstance(kwargs[k], dict):
kwargs.update(kwargs[k])
del kwargs[k]
# Rename numa_enabled to numa, according to the API documentation
if 'numa_enabled' in kwargs:
kwargs['numa'] = kwargs['numa_enabled']
del kwargs['numa_enabled']
# -args and skiplock require root@pam user
if module.params['api_user'] == "root@pam" and module.params['args'] is None:
if not update:
kwargs['args'] = vm_args
elif module.params['api_user'] == "root@pam" and module.params['args'] is not None:
kwargs['args'] = module.params['args']
elif module.params['api_user'] != "root@pam" and module.params['args'] is not None:
module.fail_json(msg='The args parameter requires the root@pam user.')
if module.params['api_user'] != "root@pam" and module.params['skiplock'] is not None:
module.fail_json(msg='The skiplock parameter requires the root@pam user.')
if update:
# config.set() returns None on success.
return getattr(proxmox_node, VZ_TYPE)(vmid).config.set(name=name, memory=memory, cpu=cpu, cores=cores, sockets=sockets, **kwargs) is None
elif module.params['clone'] is not None:
for param in valid_clone_params:
if module.params[param] is not None:
clone_params[param] = module.params[param]
clone_params.update(dict([k, int(v)] for k, v in clone_params.items() if isinstance(v, bool)))
taskid = proxmox_node.qemu(vmid).clone.post(newid=newid, name=name, **clone_params)
else:
taskid = getattr(proxmox_node, VZ_TYPE).create(vmid=vmid, name=name, memory=memory, cpu=cpu, cores=cores, sockets=sockets, **kwargs)
while timeout:
if (proxmox_node.tasks(taskid).status.get()['status'] == 'stopped' and
proxmox_node.tasks(taskid).status.get()['exitstatus'] == 'OK'):
return True
timeout = timeout - 1
if timeout == 0:
module.fail_json(msg='Reached timeout while waiting for VM creation. Last line in task before timeout: %s' %
proxmox_node.tasks(taskid).log.get()[:1])
time.sleep(1)
return False
def start_vm(module, proxmox, vm, vmid, timeout):
taskid = getattr(proxmox.nodes(vm[0]['node']), VZ_TYPE)(vmid).status.start.post()
while timeout:
if (proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['status'] == 'stopped' and
proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['exitstatus'] == 'OK'):
return True
timeout -= 1
if timeout == 0:
module.fail_json(msg='Reached timeout while waiting for VM to start. Last line in task before timeout: %s'
% proxmox.nodes(vm[0]['node']).tasks(taskid).log.get()[:1])
time.sleep(1)
return False
def stop_vm(module, proxmox, vm, vmid, timeout, force):
if force:
taskid = getattr(proxmox.nodes(vm[0]['node']), VZ_TYPE)(vmid).status.shutdown.post(forceStop=1)
else:
taskid = getattr(proxmox.nodes(vm[0]['node']), VZ_TYPE)(vmid).status.shutdown.post()
while timeout:
if (proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['status'] == 'stopped' and
proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['exitstatus'] == 'OK'):
return True
timeout -= 1
if timeout == 0:
module.fail_json(msg='Reached timeout while waiting for VM to stop. Last line in task before timeout: %s'
% proxmox.nodes(vm[0]['node']).tasks(taskid).log.get()[:1])
time.sleep(1)
return False
def main():
module = AnsibleModule(
argument_spec=dict(
acpi=dict(type='bool', default='yes'),
agent=dict(type='bool'),
args=dict(type='str', default=None),
api_host=dict(required=True),
api_user=dict(required=True),
api_password=dict(no_log=True),
autostart=dict(type='bool', default='no'),
balloon=dict(type='int', default=0),
bios=dict(choices=['seabios', 'ovmf']),
boot=dict(type='str', default='cnd'),
bootdisk=dict(type='str'),
clone=dict(type='str', default=None),
cores=dict(type='int', default=1),
cpu=dict(type='str', default='kvm64'),
cpulimit=dict(type='int'),
cpuunits=dict(type='int', default=1000),
delete=dict(type='str', default=None),
description=dict(type='str'),
digest=dict(type='str'),
force=dict(type='bool', default=None),
format=dict(type='str', default='qcow2', choices=['cloop', 'cow', 'qcow', 'qcow2', 'qed', 'raw', 'vmdk']),
freeze=dict(type='bool'),
full=dict(type='bool', default='yes'),
hostpci=dict(type='dict'),
hotplug=dict(type='str'),
hugepages=dict(choices=['any', '2', '1024']),
ide=dict(type='dict', default=None),
keyboard=dict(type='str'),
kvm=dict(type='bool', default='yes'),
localtime=dict(type='bool'),
lock=dict(choices=['migrate', 'backup', 'snapshot', 'rollback']),
machine=dict(type='str'),
memory=dict(type='int', default=512),
migrate_downtime=dict(type='int'),
migrate_speed=dict(type='int'),
name=dict(type='str'),
net=dict(type='dict'),
newid=dict(type='int', default=None),
node=dict(),
numa=dict(type='dict'),
numa_enabled=dict(type='bool'),
onboot=dict(type='bool', default='yes'),
ostype=dict(default='l26', choices=['other', 'wxp', 'w2k', 'w2k3', 'w2k8', 'wvista', 'win7', 'win8', 'l24', 'l26', 'solaris']),
parallel=dict(type='dict'),
pool=dict(type='str'),
protection=dict(type='bool'),
reboot=dict(type='bool'),
revert=dict(type='str', default=None),
sata=dict(type='dict'),
scsi=dict(type='dict'),
scsihw=dict(choices=['lsi', 'lsi53c810', 'virtio-scsi-pci', 'virtio-scsi-single', 'megasas', 'pvscsi']),
serial=dict(type='dict'),
shares=dict(type='int'),
skiplock=dict(type='bool'),
smbios=dict(type='str'),
snapname=dict(type='str'),
sockets=dict(type='int', default=1),
startdate=dict(type='str'),
startup=dict(),
state=dict(default='present', choices=['present', 'absent', 'stopped', 'started', 'restarted', 'current']),
storage=dict(type='str'),
tablet=dict(type='bool', default='no'),
target=dict(type='str'),
tdf=dict(type='bool'),
template=dict(type='bool', default='no'),
timeout=dict(type='int', default=30),
update=dict(type='bool', default='no'),
validate_certs=dict(type='bool', default='no'),
vcpus=dict(type='int', default=None),
vga=dict(default='std', choices=['std', 'cirrus', 'vmware', 'qxl', 'serial0', 'serial1', 'serial2', 'serial3', 'qxl2', 'qxl3', 'qxl4']),
virtio=dict(type='dict', default=None),
vmid=dict(type='int', default=None),
watchdog=dict(),
),
mutually_exclusive=[('delete', 'revert'), ('delete', 'update'), ('revert', 'update'), ('clone', 'update'), ('clone', 'delete'), ('clone', 'revert')],
required_one_of=[('name', 'vmid',)],
required_if=[('state', 'present', ['node'])]
)
if not HAS_PROXMOXER:
module.fail_json(msg='proxmoxer required for this module')
api_user = module.params['api_user']
api_host = module.params['api_host']
api_password = module.params['api_password']
clone = module.params['clone']
cpu = module.params['cpu']
cores = module.params['cores']
delete = module.params['delete']
memory = module.params['memory']
name = module.params['name']
newid = module.params['newid']
node = module.params['node']
revert = module.params['revert']
sockets = module.params['sockets']
state = module.params['state']
timeout = module.params['timeout']
update = bool(module.params['update'])
vmid = module.params['vmid']
validate_certs = module.params['validate_certs']
# If password not set get it from PROXMOX_PASSWORD env
if not api_password:
try:
api_password = os.environ['PROXMOX_PASSWORD']
except KeyError:
module.fail_json(msg='You should set the api_password parameter or use the PROXMOX_PASSWORD environment variable')
try:
proxmox = ProxmoxAPI(api_host, user=api_user, password=api_password, verify_ssl=validate_certs)
global VZ_TYPE
global PVE_MAJOR_VERSION
PVE_MAJOR_VERSION = 3 if float(proxmox.version.get()['version']) < 4.0 else 4
except Exception as e:
module.fail_json(msg='authorization on proxmox cluster failed with exception: %s' % e)
# If vmid not set get the Next VM id from ProxmoxAPI
# If vm name is set get the VM id from ProxmoxAPI
if not vmid:
if state == 'present' and (not update and not clone) and (not delete and not revert):
try:
vmid = get_nextvmid(module, proxmox)
except Exception as e:
module.fail_json(msg="Can't get the next vmid for VM {0} automatically. Ensure your cluster state is good".format(name))
else:
try:
if not clone:
vmid = get_vmid(proxmox, name)[0]
else:
vmid = get_vmid(proxmox, clone)[0]
except Exception as e:
if not clone:
module.fail_json(msg="VM {0} does not exist in cluster.".format(name))
else:
module.fail_json(msg="VM {0} does not exist in cluster.".format(clone))
if clone is not None:
if get_vmid(proxmox, name):
module.exit_json(changed=False, msg="VM with name <%s> already exists" % name)
if vmid is not None:
vm = get_vm(proxmox, vmid)
if not vm:
module.fail_json(msg='VM with vmid = %s does not exist in cluster' % vmid)
if not newid:
try:
newid = get_nextvmid(module, proxmox)
except Exception as e:
module.fail_json(msg="Can't get the next vmid for VM {0} automatically. Ensure your cluster state is good".format(name))
else:
vm = get_vm(proxmox, newid)
if vm:
module.exit_json(changed=False, msg="vmid %s with VM name %s already exists" % (newid, name))
if delete is not None:
try:
settings(module, proxmox, vmid, node, name, timeout, delete=delete)
module.exit_json(changed=True, msg="Settings have been deleted on VM {0} with vmid {1}".format(name, vmid))
except Exception as e:
module.fail_json(msg='Unable to delete settings on VM {0} with vmid {1}: '.format(name, vmid) + str(e))
elif revert is not None:
try:
settings(module, proxmox, vmid, node, name, timeout, revert=revert)
module.exit_json(changed=True, msg="Settings have been reverted on VM {0} with vmid {1}".format(name, vmid))
except Exception as e:
module.fail_json(msg='Unable to revert settings on VM {0} with vmid {1}: Maybe it is not a pending task. '.format(name, vmid) + str(e))
if state == 'present':
try:
if get_vm(proxmox, vmid) and not (update or clone):
module.exit_json(changed=False, msg="VM with vmid <%s> already exists" % vmid)
elif get_vmid(proxmox, name) and not (update or clone):
module.exit_json(changed=False, msg="VM with name <%s> already exists" % name)
elif node is None or name is None:
module.fail_json(msg='node and name are mandatory for creating/updating a VM')
elif not node_check(proxmox, node):
module.fail_json(msg="node '%s' does not exist in cluster" % node)
create_vm(module, proxmox, vmid, newid, node, name, memory, cpu, cores, sockets, timeout, update,
acpi=module.params['acpi'],
agent=module.params['agent'],
autostart=module.params['autostart'],
balloon=module.params['balloon'],
bios=module.params['bios'],
boot=module.params['boot'],
bootdisk=module.params['bootdisk'],
cpulimit=module.params['cpulimit'],
cpuunits=module.params['cpuunits'],
description=module.params['description'],
digest=module.params['digest'],
force=module.params['force'],
freeze=module.params['freeze'],
hostpci=module.params['hostpci'],
hotplug=module.params['hotplug'],
hugepages=module.params['hugepages'],
ide=module.params['ide'],
keyboard=module.params['keyboard'],
kvm=module.params['kvm'],
localtime=module.params['localtime'],
lock=module.params['lock'],
machine=module.params['machine'],
migrate_downtime=module.params['migrate_downtime'],
migrate_speed=module.params['migrate_speed'],
net=module.params['net'],
numa=module.params['numa'],
numa_enabled=module.params['numa_enabled'],
onboot=module.params['onboot'],
ostype=module.params['ostype'],
parallel=module.params['parallel'],
pool=module.params['pool'],
protection=module.params['protection'],
reboot=module.params['reboot'],
sata=module.params['sata'],
scsi=module.params['scsi'],
scsihw=module.params['scsihw'],
serial=module.params['serial'],
shares=module.params['shares'],
skiplock=module.params['skiplock'],
smbios1=module.params['smbios'],
snapname=module.params['snapname'],
startdate=module.params['startdate'],
startup=module.params['startup'],
tablet=module.params['tablet'],
target=module.params['target'],
tdf=module.params['tdf'],
template=module.params['template'],
vcpus=module.params['vcpus'],
vga=module.params['vga'],
virtio=module.params['virtio'],
watchdog=module.params['watchdog'])
if not clone:
get_vminfo(module, proxmox, node, vmid,
ide=module.params['ide'],
net=module.params['net'],
sata=module.params['sata'],
scsi=module.params['scsi'],
virtio=module.params['virtio'])
if update:
module.exit_json(changed=True, msg="VM %s with vmid %s updated" % (name, vmid))
elif clone is not None:
module.exit_json(changed=True, msg="VM %s with newid %s cloned from vm with vmid %s" % (name, newid, vmid))
else:
module.exit_json(changed=True, msg="VM %s with vmid %s deployed" % (name, vmid), **results)
except Exception as e:
if update:
module.fail_json(msg="Unable to update vm {0} with vmid {1}=".format(name, vmid) + str(e))
elif clone is not None:
module.fail_json(msg="Unable to clone vm {0} from vmid {1}=".format(name, vmid) + str(e))
else:
module.fail_json(msg="creation of %s VM %s with vmid %s failed with exception=%s" % (VZ_TYPE, name, vmid, e))
elif state == 'started':
try:
vm = get_vm(proxmox, vmid)
if not vm:
module.fail_json(msg='VM with vmid <%s> does not exist in cluster' % vmid)
if getattr(proxmox.nodes(vm[0]['node']), VZ_TYPE)(vmid).status.current.get()['status'] == 'running':
module.exit_json(changed=False, msg="VM %s is already running" % vmid)
if start_vm(module, proxmox, vm, vmid, timeout):
module.exit_json(changed=True, msg="VM %s started" % vmid)
except Exception as e:
module.fail_json(msg="starting of VM %s failed with exception: %s" % (vmid, e))
elif state == 'stopped':
try:
vm = get_vm(proxmox, vmid)
if not vm:
module.fail_json(msg='VM with vmid = %s does not exist in cluster' % vmid)
if getattr(proxmox.nodes(vm[0]['node']), VZ_TYPE)(vmid).status.current.get()['status'] == 'stopped':
module.exit_json(changed=False, msg="VM %s is already stopped" % vmid)
if stop_vm(module, proxmox, vm, vmid, timeout, force=module.params['force']):
module.exit_json(changed=True, msg="VM %s is shutting down" % vmid)
except Exception as e:
module.fail_json(msg="stopping of VM %s failed with exception: %s" % (vmid, e))
elif state == 'restarted':
try:
vm = get_vm(proxmox, vmid)
if not vm:
module.fail_json(msg='VM with vmid = %s does not exist in cluster' % vmid)
if getattr(proxmox.nodes(vm[0]['node']), VZ_TYPE)(vmid).status.current.get()['status'] == 'stopped':
module.exit_json(changed=False, msg="VM %s is not running" % vmid)
if stop_vm(module, proxmox, vm, vmid, timeout, force=module.params['force']) and start_vm(module, proxmox, vm, vmid, timeout):
module.exit_json(changed=True, msg="VM %s is restarted" % vmid)
except Exception as e:
module.fail_json(msg="restarting of VM %s failed with exception: %s" % (vmid, e))
elif state == 'absent':
try:
vm = get_vm(proxmox, vmid)
if not vm:
module.exit_json(changed=False, msg="VM %s does not exist" % vmid)
if getattr(proxmox.nodes(vm[0]['node']), VZ_TYPE)(vmid).status.current.get()['status'] == 'running':
module.exit_json(changed=False, msg="VM %s is running. Stop it before deletion." % vmid)
taskid = getattr(proxmox.nodes(vm[0]['node']), VZ_TYPE).delete(vmid)
while timeout:
if (proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['status'] == 'stopped' and
proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['exitstatus'] == 'OK'):
module.exit_json(changed=True, msg="VM %s removed" % vmid)
timeout -= 1
if timeout == 0:
module.fail_json(msg='Reached timeout while waiting for VM removal. Last line in task before timeout: %s'
% proxmox.nodes(vm[0]['node']).tasks(taskid).log.get()[:1])
time.sleep(1)
except Exception as e:
module.fail_json(msg="deletion of VM %s failed with exception: %s" % (vmid, e))
elif state == 'current':
status = {}
try:
vm = get_vm(proxmox, vmid)
if not vm:
module.fail_json(msg='VM with vmid = %s does not exist in cluster' % vmid)
current = getattr(proxmox.nodes(vm[0]['node']), VZ_TYPE)(vmid).status.current.get()['status']
status['status'] = current
if status:
module.exit_json(changed=False, msg="VM %s with vmid = %s is %s" % (name, vmid, current), **status)
except Exception as e:
module.fail_json(msg="Unable to get vm {0} with vmid = {1} status: ".format(name, vmid) + str(e))
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,584 |
In CLI commands, validate collection fully qualified name
|
##### SUMMARY
As a nicety to the user, `ansible-galaxy collection` should check certain properties of the `collection_name` argument, and provide helpful suggestions about expected format.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
lib/ansible/galaxy/collection.py
##### ADDITIONAL INFORMATION
When getting this message, it's not clear to the user what they should do differently:
```console
$ ansible-galaxy collection install chrismeyersfsu_tower_modules -p alan -vvv
ansible-galaxy 2.9.0.dev0
config file = None
configured module search path = ['/Users/alancoding/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/alancoding/Documents/repos/ansible/lib/ansible
executable location = /Users/alancoding/.virtualenvs/ansible3/bin/ansible-galaxy
python version = 3.6.5 (default, Apr 25 2018, 14:23:58) [GCC 4.2.1 Compatible Apple LLVM 9.1.0 (clang-902.0.39.1)]
No config file found; using defaults
Opened /Users/alancoding/.ansible_galaxy
[WARNING]: The specified collections path '/Users/alancoding/Documents/repos/jlaska-ansible-playbooks/alan' is not part of the configured Ansible collections paths
'/Users/alancoding/.ansible/collections:/usr/share/ansible/collections'. The installed collection won't be picked up in an Ansible run.
Found installed collection chrismeyersfsu.tower_modules:0.0.1 at '/Users/alancoding/Documents/repos/jlaska-ansible-playbooks/alan/ansible_collections/chrismeyersfsu/tower_modules'
Processing requirement collection 'chrismeyersfsu_tower_modules'
ERROR! Unexpected Exception, this is probably a bug: not enough values to unpack (expected 2, got 1)
the full traceback was:
Traceback (most recent call last):
File "/Users/alancoding/Documents/repos/ansible/bin/ansible-galaxy", line 111, in <module>
exit_code = cli.run()
File "/Users/alancoding/Documents/repos/ansible/lib/ansible/cli/galaxy.py", line 269, in run
context.CLIARGS['func']()
File "/Users/alancoding/Documents/repos/ansible/lib/ansible/cli/galaxy.py", line 617, in execute_install
no_deps, force, force_deps)
File "/Users/alancoding/Documents/repos/ansible/lib/ansible/galaxy/collection.py", line 450, in install_collections
force, force_deps, no_deps)
File "/Users/alancoding/Documents/repos/ansible/lib/ansible/galaxy/collection.py", line 831, in _build_dependency_map
validate_certs, (force or force_deps))
File "/Users/alancoding/Documents/repos/ansible/lib/ansible/galaxy/collection.py", line 898, in _get_collection_info
parent=parent)
File "/Users/alancoding/Documents/repos/ansible/lib/ansible/galaxy/collection.py", line 296, in from_name
namespace, name = collection.split('.', 1)
ValueError: not enough values to unpack (expected 2, got 1)
```
Instead of `ValueError`, it would be better to give text like
> Improperly formatted collection name argument. Collection name must be of the format "username.collection_name", like "chrismeyersfsu.tower_modules"
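
A minimal sketch of the kind of up-front check this could take. The helper name and regex are illustrative assumptions, not the actual Galaxy implementation — the point is to reject a malformed name with an actionable message before `collection.split('.', 1)` blows up:

```python
import re

# Hypothetical validator: assume namespace and collection name are each
# lowercase words of letters, digits and underscores, joined by one dot.
FQCN_RE = re.compile(r'^[a-z0-9_]+\.[a-z0-9_]+$')

def split_collection_name(collection):
    """Split 'namespace.name' or raise a helpful error instead of ValueError."""
    if not FQCN_RE.match(collection):
        raise ValueError(
            'Invalid collection name "%s". Collection names must be of the '
            'format "namespace.collection_name", e.g. "chrismeyersfsu.tower_modules".'
            % collection
        )
    namespace, name = collection.split('.', 1)
    return namespace, name
```

With this in place, `chrismeyersfsu_tower_modules` would fail immediately with the suggested message rather than an unhandled traceback.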
|
https://github.com/ansible/ansible/issues/59584
|
https://github.com/ansible/ansible/pull/59957
|
1b8aa798df6f6fa96ba5ea2a9dbf01b3f1de555c
|
14a7722e3957214bcbdc587c0d023d22f7ed2e98
| 2019-07-25T12:16:58Z |
python
| 2019-08-13T20:36:29Z |
changelogs/fragments/galaxy-argspec-verbosity.yaml
|