status (stringclasses, 1 value) | repo_name (stringclasses, 31 values) | repo_url (stringclasses, 31 values) | issue_id (int64, 1 to 104k) | title (stringlengths, 4 to 369) | body (stringlengths, 0 to 254k, nullable) | issue_url (stringlengths, 37 to 56) | pull_url (stringlengths, 37 to 54) | before_fix_sha (stringlengths, 40) | after_fix_sha (stringlengths, 40) | report_datetime (timestamp[us, tz=UTC]) | language (stringclasses, 5 values) | commit_datetime (timestamp[us, tz=UTC]) | updated_file (stringlengths, 4 to 188) | file_content (stringlengths, 0 to 5.12M)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
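Each row pairs one GitHub issue with one file updated by the pull request that fixed it, so a multi-file fix appears as several rows sharing an `issue_id`. As a minimal sketch of how rows with this schema might be consumed with the `datasets` library (the path `org/bug-fix-pairs` below is a hypothetical placeholder, not the dataset's real identifier):

```python
from datasets import load_dataset

# Hypothetical Hub path; substitute the dataset's actual identifier.
ds = load_dataset("org/bug-fix-pairs", split="train")

for row in ds.select(range(2)):
    # Each row pairs one issue with one file touched by the fixing PR.
    print(row["repo_name"], row["issue_id"], row["updated_file"])
    print(row["before_fix_sha"], "->", row["after_fix_sha"])
```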
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73657 |
Block with rescue stops next plays and end playbook successfully on some errors
|
##### SUMMARY
Depending on the error that a block with rescue catches, the play execution stops and the playbook ends successfully, but the next plays don't run. I was able to simulate it using `include_tasks`, where the included task uses a module with a wrong name (which can be provided by a user, so not something I have control over, because it might not be me running the play; the user may include and run custom tasks).
This is not a problem if the module with the wrong name is in the main playbook file, because the playbook fails at compile time, but when it is lazily loaded through an `include_tasks`, the task fails and the `rescue` (wrapping the `include_tasks`) catches the error successfully, yet the next plays don't run and the playbook ends successfully.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
block
##### ANSIBLE VERSION
```paste below
ansible 2.10.5
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.9 (default, Oct 8 2020, 12:12:24) [GCC 8.4.0]
```
##### CONFIGURATION
```paste below
```
(`ansible-config dump --only-changed` outputs nothing)
##### OS / ENVIRONMENT
Ubuntu 18.04 (docker container)
##### STEPS TO REPRODUCE
_hosts.yml:_
```ini
[main]
localhost ansible_connection=local
```
_playbook.yml:_
```yaml
- name: Test 01
hosts: main
tasks:
- block:
- include_tasks: "test.yml"
rescue:
- debug:
msg: "rescued"
- debug:
msg: "test 01"
- name: Test 02
hosts: main
tasks:
- debug:
msg: "test 02"
- fail:
msg: "end here"
```
**Works:**
_test.yml (gives an error, but works as expected):_
```yaml
- ansible.builtin.file:
param: "value"
```
_Outputs:_
```text
PLAY [Test 01] ************************************************************************************************************************
TASK [include_tasks] ******************************************************************************************************************
included: /main/dev/repos/cloud/test.yml for localhost
TASK [ansible.builtin.file] ***********************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Unsupported parameters for (ansible.builtin.file) module: param Supported parameters include: _diff_peek, _original_basename, access_time, access_time_format, attributes, follow, force, group, mode, modification_time, modification_time_format, owner, path, recurse, selevel, serole, setype, seuser, src, state, unsafe_writes"}
TASK [debug] **************************************************************************************************************************
ok: [localhost] => {
"msg": "rescued"
}
TASK [debug] **************************************************************************************************************************
ok: [localhost] => {
"msg": "test 01"
}
PLAY [Test 02] ************************************************************************************************************************
TASK [debug] **************************************************************************************************************************
ok: [localhost] => {
"msg": "test 02"
}
TASK [fail] ***************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "end here"}
PLAY RECAP ****************************************************************************************************************************
localhost : ok=4 changed=0 unreachable=0 failed=1 skipped=0 rescued=1 ignored=0
```
**Bug:**
_test.yml (gives an error, stops the next plays, ends the playbook successfully, as explained above):_
```yaml
- wrong.collection.module:
param: "value"
```
_Outputs:_
```text
PLAY [Test 01] ************************************************************************************************************************
TASK [include_tasks] ******************************************************************************************************************
fatal: [localhost]: FAILED! => {"reason": "couldn't resolve module/action 'wrong.collection.module'. This often indicates a misspelling, missing collection, or incorrect module path.\n\nThe error appears to be in '/main/dev/repos/cloud/test.yml': line 1, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- wrong.collection.module:\n ^ here\n"}
TASK [debug] **************************************************************************************************************************
ok: [localhost] => {
"msg": "rescued"
}
TASK [debug] **************************************************************************************************************************
ok: [localhost] => {
"msg": "test 01"
}
PLAY RECAP ****************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
##### EXPECTED RESULTS
The same as in the case that works.
##### ACTUAL RESULTS
The next plays stop, even after the error is rescued, and the playbook ends successfully (if I run `ansible-playbook playbook.yml && echo "success" || echo "error"`, it gives an error in the 1st case (which works), due to the `fail` task at the end, but success in the 2nd, and the 2nd play doesn't run).
|
https://github.com/ansible/ansible/issues/73657
|
https://github.com/ansible/ansible/pull/73722
|
d23226a6f4737fb17b7b1be0d09f39e249832c7f
|
47ee28222794010ec447b7b6d73189eed1b46581
| 2021-02-19T03:01:38Z |
python
| 2021-11-01T15:05:39Z |
lib/ansible/plugins/strategy/__init__.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import cmd
import functools
import os
import pprint
import sys
import threading
import time
from collections import deque
from multiprocessing import Lock
from queue import Empty, Queue
from jinja2.exceptions import UndefinedError
from ansible import constants as C
from ansible import context
from ansible.errors import AnsibleError, AnsibleFileNotFound, AnsibleUndefinedVariable
from ansible.executor import action_write_locks
from ansible.executor.play_iterator import IteratingStates, FailedStates
from ansible.executor.process.worker import WorkerProcess
from ansible.executor.task_result import TaskResult
from ansible.executor.task_queue_manager import CallbackSend
from ansible.module_utils.six import string_types
from ansible.module_utils._text import to_text
from ansible.module_utils.connection import Connection, ConnectionError
from ansible.playbook.conditional import Conditional
from ansible.playbook.handler import Handler
from ansible.playbook.helpers import load_list_of_blocks
from ansible.playbook.included_file import IncludedFile
from ansible.playbook.task_include import TaskInclude
from ansible.plugins import loader as plugin_loader
from ansible.template import Templar
from ansible.utils.display import Display
from ansible.utils.unsafe_proxy import wrap_var
from ansible.utils.vars import combine_vars
from ansible.vars.clean import strip_internal_keys, module_response_deepcopy
display = Display()
__all__ = ['StrategyBase']
# This list can be an exact match, or start of string bound
# does not accept regex
ALWAYS_DELEGATE_FACT_PREFIXES = frozenset((
'discovered_interpreter_',
))
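# e.g. "discovered_interpreter_python" is always stored on the delegate_to host
# rather than the inventory host (see _set_always_delegated_facts below)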
class StrategySentinel:
pass
_sentinel = StrategySentinel()
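# Re-evaluates a task's changed_when/failed_when conditionals against the final
# result so the reported 'changed' and 'failed' flags reflect those expressions.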
def post_process_whens(result, task, templar):
cond = None
if task.changed_when:
cond = Conditional(loader=templar._loader)
cond.when = task.changed_when
result['changed'] = cond.evaluate_conditional(templar, templar.available_variables)
if task.failed_when:
if cond is None:
cond = Conditional(loader=templar._loader)
cond.when = task.failed_when
failed_when_result = cond.evaluate_conditional(templar, templar.available_variables)
result['failed_when_result'] = result['failed'] = failed_when_result
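# Thread target that drains the strategy's final queue: CallbackSend objects are
# forwarded to the TQM's callbacks, TaskResults are normalized and appended to
# the handler or regular results deque, and a StrategySentinel ends the loop.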
def results_thread_main(strategy):
while True:
try:
result = strategy._final_q.get()
if isinstance(result, StrategySentinel):
break
elif isinstance(result, CallbackSend):
for arg in result.args:
if isinstance(arg, TaskResult):
strategy.normalize_task_result(arg)
break
strategy._tqm.send_callback(result.method_name, *result.args, **result.kwargs)
elif isinstance(result, TaskResult):
strategy.normalize_task_result(result)
with strategy._results_lock:
# only handlers have the listen attr, so this must be a handler
# we split up the results into two queues here to make sure
# handler and regular result processing don't cross wires
if 'listen' in result._task_fields:
strategy._handler_results.append(result)
else:
strategy._results.append(result)
else:
display.warning('Received an invalid object (%s) in the result queue: %r' % (type(result), result))
except (IOError, EOFError):
break
        except Empty:
pass
def debug_closure(func):
"""Closure to wrap ``StrategyBase._process_pending_results`` and invoke the task debugger"""
@functools.wraps(func)
def inner(self, iterator, one_pass=False, max_passes=None, do_handlers=False):
status_to_stats_map = (
('is_failed', 'failures'),
('is_unreachable', 'dark'),
('is_changed', 'changed'),
('is_skipped', 'skipped'),
)
# We don't know the host yet, copy the previous states, for lookup after we process new results
prev_host_states = iterator._host_states.copy()
results = func(self, iterator, one_pass=one_pass, max_passes=max_passes, do_handlers=do_handlers)
_processed_results = []
for result in results:
task = result._task
host = result._host
_queued_task_args = self._queued_task_cache.pop((host.name, task._uuid), None)
task_vars = _queued_task_args['task_vars']
play_context = _queued_task_args['play_context']
# Try to grab the previous host state, if it doesn't exist use get_host_state to generate an empty state
try:
prev_host_state = prev_host_states[host.name]
except KeyError:
prev_host_state = iterator.get_host_state(host)
while result.needs_debugger(globally_enabled=self.debugger_active):
next_action = NextAction()
dbg = Debugger(task, host, task_vars, play_context, result, next_action)
dbg.cmdloop()
if next_action.result == NextAction.REDO:
# rollback host state
self._tqm.clear_failed_hosts()
iterator._host_states[host.name] = prev_host_state
for method, what in status_to_stats_map:
if getattr(result, method)():
self._tqm._stats.decrement(what, host.name)
self._tqm._stats.decrement('ok', host.name)
# redo
self._queue_task(host, task, task_vars, play_context)
_processed_results.extend(debug_closure(func)(self, iterator, one_pass))
break
elif next_action.result == NextAction.CONTINUE:
_processed_results.append(result)
break
elif next_action.result == NextAction.EXIT:
# Matches KeyboardInterrupt from bin/ansible
sys.exit(99)
else:
_processed_results.append(result)
return _processed_results
return inner
class StrategyBase:
'''
This is the base class for strategy plugins, which contains some common
code useful to all strategies like running handlers, cleanup actions, etc.
'''
# by default, strategies should support throttling but we allow individual
# strategies to disable this and either forego supporting it or managing
# the throttling internally (as `free` does)
ALLOW_BASE_THROTTLING = True
def __init__(self, tqm):
self._tqm = tqm
self._inventory = tqm.get_inventory()
self._workers = tqm._workers
self._variable_manager = tqm.get_variable_manager()
self._loader = tqm.get_loader()
self._final_q = tqm._final_q
self._step = context.CLIARGS.get('step', False)
self._diff = context.CLIARGS.get('diff', False)
# the task cache is a dictionary of tuples of (host.name, task._uuid)
# used to find the original task object of in-flight tasks and to store
# the task args/vars and play context info used to queue the task.
self._queued_task_cache = {}
# Backwards compat: self._display isn't really needed, just import the global display and use that.
self._display = display
# internal counters
self._pending_results = 0
self._pending_handler_results = 0
self._cur_worker = 0
# this dictionary is used to keep track of hosts that have
# outstanding tasks still in queue
self._blocked_hosts = dict()
# this dictionary is used to keep track of hosts that have
# flushed handlers
self._flushed_hosts = dict()
self._results = deque()
self._handler_results = deque()
self._results_lock = threading.Condition(threading.Lock())
# create the result processing thread for reading results in the background
self._results_thread = threading.Thread(target=results_thread_main, args=(self,))
self._results_thread.daemon = True
self._results_thread.start()
# holds the list of active (persistent) connections to be shutdown at
# play completion
self._active_connections = dict()
# Caches for get_host calls, to avoid calling excessively
# These values should be set at the top of the ``run`` method of each
# strategy plugin. Use ``_set_hosts_cache`` to set these values
self._hosts_cache = []
self._hosts_cache_all = []
self.debugger_active = C.ENABLE_TASK_DEBUGGER
def _set_hosts_cache(self, play, refresh=True):
"""Responsible for setting _hosts_cache and _hosts_cache_all
See comment in ``__init__`` for the purpose of these caches
"""
if not refresh and all((self._hosts_cache, self._hosts_cache_all)):
return
if not play.finalized and Templar(None).is_template(play.hosts):
_pattern = 'all'
else:
_pattern = play.hosts or 'all'
self._hosts_cache_all = [h.name for h in self._inventory.get_hosts(pattern=_pattern, ignore_restrictions=True)]
self._hosts_cache = [h.name for h in self._inventory.get_hosts(play.hosts, order=play.order)]
def cleanup(self):
# close active persistent connections
for sock in self._active_connections.values():
try:
conn = Connection(sock)
conn.reset()
except ConnectionError as e:
# most likely socket is already closed
display.debug("got an error while closing persistent connection: %s" % e)
self._final_q.put(_sentinel)
self._results_thread.join()
def run(self, iterator, play_context, result=0):
# execute one more pass through the iterator without peeking, to
# make sure that all of the hosts are advanced to their final task.
# This should be safe, as everything should be IteratingStates.COMPLETE by
# this point, though the strategy may not advance the hosts itself.
for host in self._hosts_cache:
if host not in self._tqm._unreachable_hosts:
try:
iterator.get_next_task_for_host(self._inventory.hosts[host])
except KeyError:
iterator.get_next_task_for_host(self._inventory.get_host(host))
# save the failed/unreachable hosts, as the run_handlers()
# method will clear that information during its execution
failed_hosts = iterator.get_failed_hosts()
unreachable_hosts = self._tqm._unreachable_hosts.keys()
display.debug("running handlers")
handler_result = self.run_handlers(iterator, play_context)
if isinstance(handler_result, bool) and not handler_result:
result |= self._tqm.RUN_ERROR
elif not handler_result:
result |= handler_result
# now update with the hosts (if any) that failed or were
# unreachable during the handler execution phase
failed_hosts = set(failed_hosts).union(iterator.get_failed_hosts())
unreachable_hosts = set(unreachable_hosts).union(self._tqm._unreachable_hosts.keys())
        # return the appropriate code, depending on the status of the hosts after the run
if not isinstance(result, bool) and result != self._tqm.RUN_OK:
return result
elif len(unreachable_hosts) > 0:
return self._tqm.RUN_UNREACHABLE_HOSTS
elif len(failed_hosts) > 0:
return self._tqm.RUN_FAILED_HOSTS
else:
return self._tqm.RUN_OK
def get_hosts_remaining(self, play):
self._set_hosts_cache(play, refresh=False)
ignore = set(self._tqm._failed_hosts).union(self._tqm._unreachable_hosts)
return [host for host in self._hosts_cache if host not in ignore]
def get_failed_hosts(self, play):
self._set_hosts_cache(play, refresh=False)
return [host for host in self._hosts_cache if host in self._tqm._failed_hosts]
def add_tqm_variables(self, vars, play):
'''
Base class method to add extra variables/information to the list of task
vars sent through the executor engine regarding the task queue manager state.
'''
vars['ansible_current_hosts'] = self.get_hosts_remaining(play)
vars['ansible_failed_hosts'] = self.get_failed_hosts(play)
def _queue_task(self, host, task, task_vars, play_context):
''' handles queueing the task up to be sent to a worker '''
display.debug("entering _queue_task() for %s/%s" % (host.name, task.action))
# Add a write lock for tasks.
# Maybe this should be added somewhere further up the call stack but
# this is the earliest in the code where we have task (1) extracted
# into its own variable and (2) there's only a single code path
# leading to the module being run. This is called by three
# functions: __init__.py::_do_handler_run(), linear.py::run(), and
# free.py::run() so we'd have to add to all three to do it there.
# The next common higher level is __init__.py::run() and that has
# tasks inside of play_iterator so we'd have to extract them to do it
# there.
if task.action not in action_write_locks.action_write_locks:
display.debug('Creating lock for %s' % task.action)
action_write_locks.action_write_locks[task.action] = Lock()
# create a templar and template things we need later for the queuing process
templar = Templar(loader=self._loader, variables=task_vars)
try:
throttle = int(templar.template(task.throttle))
except Exception as e:
raise AnsibleError("Failed to convert the throttle value to an integer.", obj=task._ds, orig_exc=e)
# and then queue the new task
try:
# Determine the "rewind point" of the worker list. This means we start
# iterating over the list of workers until the end of the list is found.
# Normally, that is simply the length of the workers list (as determined
# by the forks or serial setting), however a task/block/play may "throttle"
# that limit down.
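            # For example, with forks=5 and a task-level "throttle: 2", the rewind
            # point drops to 2, so only the first two workers are ever used.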
rewind_point = len(self._workers)
if throttle > 0 and self.ALLOW_BASE_THROTTLING:
if task.run_once:
display.debug("Ignoring 'throttle' as 'run_once' is also set for '%s'" % task.get_name())
else:
if throttle <= rewind_point:
display.debug("task: %s, throttle: %d" % (task.get_name(), throttle))
rewind_point = throttle
queued = False
starting_worker = self._cur_worker
while True:
if self._cur_worker >= rewind_point:
self._cur_worker = 0
worker_prc = self._workers[self._cur_worker]
if worker_prc is None or not worker_prc.is_alive():
self._queued_task_cache[(host.name, task._uuid)] = {
'host': host,
'task': task,
'task_vars': task_vars,
'play_context': play_context
}
worker_prc = WorkerProcess(self._final_q, task_vars, host, task, play_context, self._loader, self._variable_manager, plugin_loader)
self._workers[self._cur_worker] = worker_prc
self._tqm.send_callback('v2_runner_on_start', host, task)
worker_prc.start()
display.debug("worker is %d (out of %d available)" % (self._cur_worker + 1, len(self._workers)))
queued = True
self._cur_worker += 1
if self._cur_worker >= rewind_point:
self._cur_worker = 0
if queued:
break
elif self._cur_worker == starting_worker:
time.sleep(0.0001)
if isinstance(task, Handler):
self._pending_handler_results += 1
else:
self._pending_results += 1
except (EOFError, IOError, AssertionError) as e:
# most likely an abort
display.debug("got an error while queuing: %s" % e)
return
display.debug("exiting _queue_task() for %s/%s" % (host.name, task.action))
def get_task_hosts(self, iterator, task_host, task):
if task.run_once:
host_list = [host for host in self._hosts_cache if host not in self._tqm._unreachable_hosts]
else:
host_list = [task_host.name]
return host_list
def get_delegated_hosts(self, result, task):
host_name = result.get('_ansible_delegated_vars', {}).get('ansible_delegated_host', None)
return [host_name or task.delegate_to]
def _set_always_delegated_facts(self, result, task):
"""Sets host facts for ``delegate_to`` hosts for facts that should
always be delegated
This operation mutates ``result`` to remove the always delegated facts
See ``ALWAYS_DELEGATE_FACT_PREFIXES``
"""
if task.delegate_to is None:
return
facts = result['ansible_facts']
always_keys = set()
_add = always_keys.add
for fact_key in facts:
for always_key in ALWAYS_DELEGATE_FACT_PREFIXES:
if fact_key.startswith(always_key):
_add(fact_key)
if always_keys:
_pop = facts.pop
always_facts = {
'ansible_facts': dict((k, _pop(k)) for k in list(facts) if k in always_keys)
}
host_list = self.get_delegated_hosts(result, task)
_set_host_facts = self._variable_manager.set_host_facts
for target_host in host_list:
_set_host_facts(target_host, always_facts)
def normalize_task_result(self, task_result):
"""Normalize a TaskResult to reference actual Host and Task objects
when only given the ``Host.name``, or the ``Task._uuid``
Only the ``Host.name`` and ``Task._uuid`` are commonly sent back from
the ``TaskExecutor`` or ``WorkerProcess`` due to performance concerns
Mutates the original object
"""
if isinstance(task_result._host, string_types):
# If the value is a string, it is ``Host.name``
task_result._host = self._inventory.get_host(to_text(task_result._host))
if isinstance(task_result._task, string_types):
# If the value is a string, it is ``Task._uuid``
queue_cache_entry = (task_result._host.name, task_result._task)
found_task = self._queued_task_cache.get(queue_cache_entry)['task']
original_task = found_task.copy(exclude_parent=True, exclude_tasks=True)
original_task._parent = found_task._parent
original_task.from_attrs(task_result._task_fields)
task_result._task = original_task
return task_result
@debug_closure
def _process_pending_results(self, iterator, one_pass=False, max_passes=None, do_handlers=False):
'''
Reads results off the final queue and takes appropriate action
based on the result (executing callbacks, updating state, etc.).
'''
ret_results = []
handler_templar = Templar(self._loader)
def search_handler_blocks_by_name(handler_name, handler_blocks):
# iterate in reversed order since last handler loaded with the same name wins
for handler_block in reversed(handler_blocks):
for handler_task in handler_block.block:
if handler_task.name:
try:
if not handler_task.cached_name:
if handler_templar.is_template(handler_task.name):
handler_templar.available_variables = self._variable_manager.get_vars(play=iterator._play,
task=handler_task,
_hosts=self._hosts_cache,
_hosts_all=self._hosts_cache_all)
handler_task.name = handler_templar.template(handler_task.name)
handler_task.cached_name = True
# first we check with the full result of get_name(), which may
# include the role name (if the handler is from a role). If that
# is not found, we resort to the simple name field, which doesn't
# have anything extra added to it.
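                            # e.g. a handler named "restart nginx" in role "web" can be
                            # notified as "restart nginx" or "web : restart nginx"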
candidates = (
handler_task.name,
handler_task.get_name(include_role_fqcn=False),
handler_task.get_name(include_role_fqcn=True),
)
if handler_name in candidates:
return handler_task
except (UndefinedError, AnsibleUndefinedVariable) as e:
# We skip this handler due to the fact that it may be using
# a variable in the name that was conditionally included via
# set_fact or some other method, and we don't want to error
# out unnecessarily
if not handler_task.listen:
display.warning(
"Handler '%s' is unusable because it has no listen topics and "
"the name could not be templated (host-specific variables are "
"not supported in handler names). The error: %s" % (handler_task.name, to_text(e))
)
continue
return None
cur_pass = 0
while True:
try:
self._results_lock.acquire()
if do_handlers:
task_result = self._handler_results.popleft()
else:
task_result = self._results.popleft()
except IndexError:
break
finally:
self._results_lock.release()
original_host = task_result._host
original_task = task_result._task
# all host status messages contain 2 entries: (msg, task_result)
role_ran = False
if task_result.is_failed():
role_ran = True
ignore_errors = original_task.ignore_errors
if not ignore_errors:
display.debug("marking %s as failed" % original_host.name)
if original_task.run_once:
# if we're using run_once, we have to fail every host here
for h in self._inventory.get_hosts(iterator._play.hosts):
if h.name not in self._tqm._unreachable_hosts:
iterator.mark_host_failed(h)
else:
iterator.mark_host_failed(original_host)
# grab the current state and if we're iterating on the rescue portion
# of a block then we save the failed task in a special var for use
# within the rescue/always
state, _ = iterator.get_next_task_for_host(original_host, peek=True)
if iterator.is_failed(original_host) and state and state.run_state == IteratingStates.COMPLETE:
self._tqm._failed_hosts[original_host.name] = True
# Use of get_active_state() here helps detect proper state if, say, we are in a rescue
# block from an included file (include_tasks). In a non-included rescue case, a rescue
# that starts with a new 'block' will have an active state of IteratingStates.TASKS, so we also
# check the current state block tree to see if any blocks are rescuing.
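                    # (is_any_block_rescuing() covers failures surfaced while loading
                    # an included file inside a rescue, the include_tasks case from
                    # issue #73657 / PR #73722)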
if state and (iterator.get_active_state(state).run_state == IteratingStates.RESCUE or
iterator.is_any_block_rescuing(state)):
self._tqm._stats.increment('rescued', original_host.name)
self._variable_manager.set_nonpersistent_facts(
original_host.name,
dict(
ansible_failed_task=wrap_var(original_task.serialize()),
ansible_failed_result=task_result._result,
),
)
else:
self._tqm._stats.increment('failures', original_host.name)
else:
self._tqm._stats.increment('ok', original_host.name)
self._tqm._stats.increment('ignored', original_host.name)
if 'changed' in task_result._result and task_result._result['changed']:
self._tqm._stats.increment('changed', original_host.name)
self._tqm.send_callback('v2_runner_on_failed', task_result, ignore_errors=ignore_errors)
elif task_result.is_unreachable():
ignore_unreachable = original_task.ignore_unreachable
if not ignore_unreachable:
self._tqm._unreachable_hosts[original_host.name] = True
iterator._play._removed_hosts.append(original_host.name)
else:
self._tqm._stats.increment('skipped', original_host.name)
task_result._result['skip_reason'] = 'Host %s is unreachable' % original_host.name
self._tqm._stats.increment('dark', original_host.name)
self._tqm.send_callback('v2_runner_on_unreachable', task_result)
elif task_result.is_skipped():
self._tqm._stats.increment('skipped', original_host.name)
self._tqm.send_callback('v2_runner_on_skipped', task_result)
else:
role_ran = True
if original_task.loop:
# this task had a loop, and has more than one result, so
# loop over all of them instead of a single result
result_items = task_result._result.get('results', [])
else:
result_items = [task_result._result]
for result_item in result_items:
if '_ansible_notify' in result_item:
if task_result.is_changed():
# The shared dictionary for notified handlers is a proxy, which
# does not detect when sub-objects within the proxy are modified.
# So, per the docs, we reassign the list so the proxy picks up and
# notifies all other threads
for handler_name in result_item['_ansible_notify']:
found = False
# Find the handler using the above helper. First we look up the
# dependency chain of the current task (if it's from a role), otherwise
# we just look through the list of handlers in the current play/all
# roles and use the first one that matches the notify name
target_handler = search_handler_blocks_by_name(handler_name, iterator._play.handlers)
if target_handler is not None:
found = True
if target_handler.notify_host(original_host):
self._tqm.send_callback('v2_playbook_on_notify', target_handler, original_host)
for listening_handler_block in iterator._play.handlers:
for listening_handler in listening_handler_block.block:
listeners = getattr(listening_handler, 'listen', []) or []
if not listeners:
continue
listeners = listening_handler.get_validated_value(
'listen', listening_handler._valid_attrs['listen'], listeners, handler_templar
)
if handler_name not in listeners:
continue
else:
found = True
if listening_handler.notify_host(original_host):
self._tqm.send_callback('v2_playbook_on_notify', listening_handler, original_host)
# and if none were found, then we raise an error
if not found:
msg = ("The requested handler '%s' was not found in either the main handlers list nor in the listening "
"handlers list" % handler_name)
if C.ERROR_ON_MISSING_HANDLER:
raise AnsibleError(msg)
else:
display.warning(msg)
if 'add_host' in result_item:
# this task added a new host (add_host module)
new_host_info = result_item.get('add_host', dict())
self._add_host(new_host_info, result_item)
post_process_whens(result_item, original_task, handler_templar)
elif 'add_group' in result_item:
# this task added a new group (group_by module)
self._add_group(original_host, result_item)
post_process_whens(result_item, original_task, handler_templar)
if 'ansible_facts' in result_item and original_task.action not in C._ACTION_DEBUG:
# if delegated fact and we are delegating facts, we need to change target host for them
if original_task.delegate_to is not None and original_task.delegate_facts:
host_list = self.get_delegated_hosts(result_item, original_task)
else:
# Set facts that should always be on the delegated hosts
self._set_always_delegated_facts(result_item, original_task)
host_list = self.get_task_hosts(iterator, original_host, original_task)
if original_task.action in C._ACTION_INCLUDE_VARS:
for (var_name, var_value) in result_item['ansible_facts'].items():
                                # find the host we're actually referring to here, which may
# be a host that is not really in inventory at all
for target_host in host_list:
self._variable_manager.set_host_variable(target_host, var_name, var_value)
else:
cacheable = result_item.pop('_ansible_facts_cacheable', False)
for target_host in host_list:
# so set_fact is a misnomer but 'cacheable = true' was meant to create an 'actual fact'
# to avoid issues with precedence and confusion with set_fact normal operation,
# we set BOTH fact and nonpersistent_facts (aka hostvar)
# when fact is retrieved from cache in subsequent operations it will have the lower precedence,
# but for playbook setting it the 'higher' precedence is kept
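                                # e.g. "set_fact: marker=1" with "cacheable: true" lands in
                                # both the fact cache and the nonpersistent hostvars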
is_set_fact = original_task.action in C._ACTION_SET_FACT
if not is_set_fact or cacheable:
self._variable_manager.set_host_facts(target_host, result_item['ansible_facts'].copy())
if is_set_fact:
self._variable_manager.set_nonpersistent_facts(target_host, result_item['ansible_facts'].copy())
if 'ansible_stats' in result_item and 'data' in result_item['ansible_stats'] and result_item['ansible_stats']['data']:
if 'per_host' not in result_item['ansible_stats'] or result_item['ansible_stats']['per_host']:
host_list = self.get_task_hosts(iterator, original_host, original_task)
else:
host_list = [None]
data = result_item['ansible_stats']['data']
aggregate = 'aggregate' in result_item['ansible_stats'] and result_item['ansible_stats']['aggregate']
for myhost in host_list:
for k in data.keys():
if aggregate:
self._tqm._stats.update_custom_stats(k, data[k], myhost)
else:
self._tqm._stats.set_custom_stats(k, data[k], myhost)
if 'diff' in task_result._result:
if self._diff or getattr(original_task, 'diff', False):
self._tqm.send_callback('v2_on_file_diff', task_result)
if not isinstance(original_task, TaskInclude):
self._tqm._stats.increment('ok', original_host.name)
if 'changed' in task_result._result and task_result._result['changed']:
self._tqm._stats.increment('changed', original_host.name)
# finally, send the ok for this task
self._tqm.send_callback('v2_runner_on_ok', task_result)
# register final results
if original_task.register:
host_list = self.get_task_hosts(iterator, original_host, original_task)
clean_copy = strip_internal_keys(module_response_deepcopy(task_result._result))
if 'invocation' in clean_copy:
del clean_copy['invocation']
for target_host in host_list:
self._variable_manager.set_nonpersistent_facts(target_host, {original_task.register: clean_copy})
if do_handlers:
self._pending_handler_results -= 1
else:
self._pending_results -= 1
if original_host.name in self._blocked_hosts:
del self._blocked_hosts[original_host.name]
# If this is a role task, mark the parent role as being run (if
# the task was ok or failed, but not skipped or unreachable)
if original_task._role is not None and role_ran: # TODO: and original_task.action not in C._ACTION_INCLUDE_ROLE:?
# lookup the role in the ROLE_CACHE to make sure we're dealing
# with the correct object and mark it as executed
for (entry, role_obj) in iterator._play.ROLE_CACHE[original_task._role.get_name()].items():
if role_obj._uuid == original_task._role._uuid:
role_obj._had_task_run[original_host.name] = True
ret_results.append(task_result)
if one_pass or max_passes is not None and (cur_pass + 1) >= max_passes:
break
cur_pass += 1
return ret_results
def _wait_on_handler_results(self, iterator, handler, notified_hosts):
'''
Wait for the handler tasks to complete, using a short sleep
between checks to ensure we don't spin lock
'''
ret_results = []
handler_results = 0
display.debug("waiting for handler results...")
while (self._pending_handler_results > 0 and
handler_results < len(notified_hosts) and
not self._tqm._terminated):
if self._tqm.has_dead_workers():
raise AnsibleError("A worker was found in a dead state")
results = self._process_pending_results(iterator, do_handlers=True)
ret_results.extend(results)
handler_results += len([
r._host for r in results if r._host in notified_hosts and
r.task_name == handler.name])
if self._pending_handler_results > 0:
time.sleep(C.DEFAULT_INTERNAL_POLL_INTERVAL)
display.debug("no more pending handlers, returning what we have")
return ret_results
def _wait_on_pending_results(self, iterator):
'''
Wait for the shared counter to drop to zero, using a short sleep
between checks to ensure we don't spin lock
'''
ret_results = []
display.debug("waiting for pending results...")
while self._pending_results > 0 and not self._tqm._terminated:
if self._tqm.has_dead_workers():
raise AnsibleError("A worker was found in a dead state")
results = self._process_pending_results(iterator)
ret_results.extend(results)
if self._pending_results > 0:
time.sleep(C.DEFAULT_INTERNAL_POLL_INTERVAL)
display.debug("no more pending results, returning what we have")
return ret_results
def _add_host(self, host_info, result_item):
'''
Helper function to add a new host to inventory based on a task result.
'''
changed = False
if host_info:
host_name = host_info.get('host_name')
# Check if host in inventory, add if not
if host_name not in self._inventory.hosts:
self._inventory.add_host(host_name, 'all')
self._hosts_cache_all.append(host_name)
changed = True
new_host = self._inventory.hosts.get(host_name)
# Set/update the vars for this host
new_host_vars = new_host.get_vars()
new_host_combined_vars = combine_vars(new_host_vars, host_info.get('host_vars', dict()))
if new_host_vars != new_host_combined_vars:
new_host.vars = new_host_combined_vars
changed = True
new_groups = host_info.get('groups', [])
for group_name in new_groups:
if group_name not in self._inventory.groups:
group_name = self._inventory.add_group(group_name)
changed = True
new_group = self._inventory.groups[group_name]
if new_group.add_host(self._inventory.hosts[host_name]):
changed = True
# reconcile inventory, ensures inventory rules are followed
if changed:
self._inventory.reconcile_inventory()
result_item['changed'] = changed
def _add_group(self, host, result_item):
'''
Helper function to add a group (if it does not exist), and to assign the
specified host to that group.
'''
changed = False
# the host here is from the executor side, which means it was a
# serialized/cloned copy and we'll need to look up the proper
# host object from the master inventory
real_host = self._inventory.hosts.get(host.name)
if real_host is None:
if host.name == self._inventory.localhost.name:
real_host = self._inventory.localhost
else:
raise AnsibleError('%s cannot be matched in inventory' % host.name)
group_name = result_item.get('add_group')
parent_group_names = result_item.get('parent_groups', [])
if group_name not in self._inventory.groups:
group_name = self._inventory.add_group(group_name)
for name in parent_group_names:
if name not in self._inventory.groups:
# create the new group and add it to inventory
self._inventory.add_group(name)
changed = True
group = self._inventory.groups[group_name]
for parent_group_name in parent_group_names:
parent_group = self._inventory.groups[parent_group_name]
new = parent_group.add_child_group(group)
if new and not changed:
changed = True
if real_host not in group.get_hosts():
changed = group.add_host(real_host)
if group not in real_host.get_groups():
changed = real_host.add_group(group)
if changed:
self._inventory.reconcile_inventory()
result_item['changed'] = changed
def _copy_included_file(self, included_file):
'''
A proven safe and performant way to create a copy of an included file
'''
ti_copy = included_file._task.copy(exclude_parent=True)
ti_copy._parent = included_file._task._parent
temp_vars = ti_copy.vars.copy()
temp_vars.update(included_file._vars)
ti_copy.vars = temp_vars
return ti_copy
def _load_included_file(self, included_file, iterator, is_handler=False):
'''
Loads an included YAML file of tasks, applying the optional set of variables.
'''
display.debug("loading included file: %s" % included_file._filename)
try:
data = self._loader.load_from_file(included_file._filename)
if data is None:
return []
elif not isinstance(data, list):
raise AnsibleError("included task files must contain a list of tasks")
ti_copy = self._copy_included_file(included_file)
block_list = load_list_of_blocks(
data,
play=iterator._play,
parent_block=ti_copy.build_parent_block(),
role=included_file._task._role,
use_handlers=is_handler,
loader=self._loader,
variable_manager=self._variable_manager,
)
# since we skip incrementing the stats when the task result is
# first processed, we do so now for each host in the list
for host in included_file._hosts:
self._tqm._stats.increment('ok', host.name)
except AnsibleError as e:
if isinstance(e, AnsibleFileNotFound):
reason = "Could not find or access '%s' on the Ansible Controller." % to_text(e.file_name)
else:
reason = to_text(e)
# mark all of the hosts including this file as failed, send callbacks,
# and increment the stats for this host
for host in included_file._hosts:
tr = TaskResult(host=host, task=included_file._task, return_data=dict(failed=True, reason=reason))
iterator.mark_host_failed(host)
self._tqm._failed_hosts[host.name] = True
self._tqm._stats.increment('failures', host.name)
self._tqm.send_callback('v2_runner_on_failed', tr)
return []
# finally, send the callback and return the list of blocks loaded
self._tqm.send_callback('v2_playbook_on_include', included_file)
display.debug("done processing included file")
return block_list
def run_handlers(self, iterator, play_context):
'''
Runs handlers on those hosts which have been notified.
'''
result = self._tqm.RUN_OK
for handler_block in iterator._play.handlers:
# FIXME: handlers need to support the rescue/always portions of blocks too,
# but this may take some work in the iterator and gets tricky when
# we consider the ability of meta tasks to flush handlers
for handler in handler_block.block:
if handler.notified_hosts:
result = self._do_handler_run(handler, handler.get_name(), iterator=iterator, play_context=play_context)
if not result:
break
return result
def _do_handler_run(self, handler, handler_name, iterator, play_context, notified_hosts=None):
# FIXME: need to use iterator.get_failed_hosts() instead?
# if not len(self.get_hosts_remaining(iterator._play)):
# self._tqm.send_callback('v2_playbook_on_no_hosts_remaining')
# result = False
# break
if notified_hosts is None:
notified_hosts = handler.notified_hosts[:]
# strategy plugins that filter hosts need access to the iterator to identify failed hosts
failed_hosts = self._filter_notified_failed_hosts(iterator, notified_hosts)
notified_hosts = self._filter_notified_hosts(notified_hosts)
notified_hosts += failed_hosts
if len(notified_hosts) > 0:
self._tqm.send_callback('v2_playbook_on_handler_task_start', handler)
bypass_host_loop = False
try:
action = plugin_loader.action_loader.get(handler.action, class_only=True, collection_list=handler.collections)
if getattr(action, 'BYPASS_HOST_LOOP', False):
bypass_host_loop = True
except KeyError:
# we don't care here, because the action may simply not have a
# corresponding action plugin
pass
host_results = []
for host in notified_hosts:
if not iterator.is_failed(host) or iterator._play.force_handlers:
task_vars = self._variable_manager.get_vars(play=iterator._play, host=host, task=handler,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
self.add_tqm_variables(task_vars, play=iterator._play)
templar = Templar(loader=self._loader, variables=task_vars)
if not handler.cached_name:
handler.name = templar.template(handler.name)
handler.cached_name = True
self._queue_task(host, handler, task_vars, play_context)
if templar.template(handler.run_once) or bypass_host_loop:
break
# collect the results from the handler run
host_results = self._wait_on_handler_results(iterator, handler, notified_hosts)
included_files = IncludedFile.process_include_results(
host_results,
iterator=iterator,
loader=self._loader,
variable_manager=self._variable_manager
)
result = True
if len(included_files) > 0:
for included_file in included_files:
try:
new_blocks = self._load_included_file(included_file, iterator=iterator, is_handler=True)
# for every task in each block brought in by the include, add the list
# of hosts which included the file to the notified_handlers dict
for block in new_blocks:
iterator._play.handlers.append(block)
for task in block.block:
task_name = task.get_name()
display.debug("adding task '%s' included in handler '%s'" % (task_name, handler_name))
task.notified_hosts = included_file._hosts[:]
result = self._do_handler_run(
handler=task,
handler_name=task_name,
iterator=iterator,
play_context=play_context,
notified_hosts=included_file._hosts[:],
)
if not result:
break
except AnsibleError as e:
for host in included_file._hosts:
iterator.mark_host_failed(host)
self._tqm._failed_hosts[host.name] = True
display.warning(to_text(e))
continue
# remove hosts from notification list
handler.notified_hosts = [
h for h in handler.notified_hosts
if h not in notified_hosts]
display.debug("done running handlers, result is: %s" % result)
return result
def _filter_notified_failed_hosts(self, iterator, notified_hosts):
return []
def _filter_notified_hosts(self, notified_hosts):
'''
        Filter notified hosts according to the strategy
'''
# As main strategy is linear, we do not filter hosts
# We return a copy to avoid race conditions
return notified_hosts[:]
def _take_step(self, task, host=None):
ret = False
msg = u'Perform task: %s ' % task
if host:
msg += u'on %s ' % host
msg += u'(N)o/(y)es/(c)ontinue: '
resp = display.prompt(msg)
if resp.lower() in ['y', 'yes']:
display.debug("User ran task")
ret = True
elif resp.lower() in ['c', 'continue']:
display.debug("User ran task and canceled step mode")
self._step = False
ret = True
else:
display.debug("User skipped task")
display.banner(msg)
return ret
def _cond_not_supported_warn(self, task_name):
display.warning("%s task does not support when conditional" % task_name)
def _execute_meta(self, task, play_context, iterator, target_host):
# meta tasks store their args in the _raw_params field of args,
# since they do not use k=v pairs, so get that
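        # e.g. "- meta: flush_handlers" arrives here with
        # task.args == {'_raw_params': 'flush_handlers'}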
meta_action = task.args.get('_raw_params')
def _evaluate_conditional(h):
all_vars = self._variable_manager.get_vars(play=iterator._play, host=h, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
templar = Templar(loader=self._loader, variables=all_vars)
return task.evaluate_conditional(templar, all_vars)
skipped = False
msg = ''
skip_reason = '%s conditional evaluated to False' % meta_action
self._tqm.send_callback('v2_playbook_on_task_start', task, is_conditional=False)
# These don't support "when" conditionals
if meta_action in ('noop', 'flush_handlers', 'refresh_inventory', 'reset_connection') and task.when:
self._cond_not_supported_warn(meta_action)
if meta_action == 'noop':
msg = "noop"
elif meta_action == 'flush_handlers':
self._flushed_hosts[target_host] = True
self.run_handlers(iterator, play_context)
self._flushed_hosts[target_host] = False
msg = "ran handlers"
elif meta_action == 'refresh_inventory':
self._inventory.refresh_inventory()
self._set_hosts_cache(iterator._play)
msg = "inventory successfully refreshed"
elif meta_action == 'clear_facts':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
hostname = host.get_name()
self._variable_manager.clear_facts(hostname)
msg = "facts cleared"
else:
skipped = True
skip_reason += ', not clearing facts and fact cache for %s' % target_host.name
elif meta_action == 'clear_host_errors':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
self._tqm._failed_hosts.pop(host.name, False)
self._tqm._unreachable_hosts.pop(host.name, False)
iterator._host_states[host.name].fail_state = FailedStates.NONE
msg = "cleared host errors"
else:
skipped = True
skip_reason += ', not clearing host error state for %s' % target_host.name
elif meta_action == 'end_batch':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
if host.name not in self._tqm._unreachable_hosts:
iterator._host_states[host.name].run_state = IteratingStates.COMPLETE
msg = "ending batch"
else:
skipped = True
skip_reason += ', continuing current batch'
elif meta_action == 'end_play':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
if host.name not in self._tqm._unreachable_hosts:
iterator._host_states[host.name].run_state = IteratingStates.COMPLETE
# end_play is used in PlaybookExecutor/TQM to indicate that
# the whole play is supposed to be ended as opposed to just a batch
iterator.end_play = True
msg = "ending play"
else:
skipped = True
skip_reason += ', continuing play'
elif meta_action == 'end_host':
if _evaluate_conditional(target_host):
iterator._host_states[target_host.name].run_state = IteratingStates.COMPLETE
iterator._play._removed_hosts.append(target_host.name)
msg = "ending play for %s" % target_host.name
else:
skipped = True
skip_reason += ", continuing execution for %s" % target_host.name
# TODO: Nix msg here? Left for historical reasons, but skip_reason exists now.
msg = "end_host conditional evaluated to false, continuing execution for %s" % target_host.name
elif meta_action == 'role_complete':
# Allow users to use this in a play as reported in https://github.com/ansible/ansible/issues/22286?
# How would this work with allow_duplicates??
if task.implicit:
if target_host.name in task._role._had_task_run:
task._role._completed[target_host.name] = True
msg = 'role_complete for %s' % target_host.name
elif meta_action == 'reset_connection':
all_vars = self._variable_manager.get_vars(play=iterator._play, host=target_host, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
templar = Templar(loader=self._loader, variables=all_vars)
# apply the given task's information to the connection info,
# which may override some fields already set by the play or
# the options specified on the command line
play_context = play_context.set_task_and_variable_override(task=task, variables=all_vars, templar=templar)
# fields set from the play/task may be based on variables, so we have to
# do the same kind of post validation step on it here before we use it.
play_context.post_validate(templar=templar)
# now that the play context is finalized, if the remote_addr is not set
# default to using the host's address field as the remote address
if not play_context.remote_addr:
play_context.remote_addr = target_host.address
# We also add "magic" variables back into the variables dict to make sure
# a certain subset of variables exist.
play_context.update_vars(all_vars)
if target_host in self._active_connections:
connection = Connection(self._active_connections[target_host])
del self._active_connections[target_host]
else:
connection = plugin_loader.connection_loader.get(play_context.connection, play_context, os.devnull)
connection.set_options(task_keys=task.dump_attrs(), var_options=all_vars)
play_context.set_attributes_from_plugin(connection)
if connection:
try:
connection.reset()
msg = 'reset connection'
except ConnectionError as e:
# most likely socket is already closed
display.debug("got an error while closing persistent connection: %s" % e)
else:
msg = 'no connection, nothing to reset'
else:
raise AnsibleError("invalid meta action requested: %s" % meta_action, obj=task._ds)
result = {'msg': msg}
if skipped:
result['skipped'] = True
result['skip_reason'] = skip_reason
else:
result['changed'] = False
display.vv("META: %s" % msg)
res = TaskResult(target_host, task, result)
if skipped:
self._tqm.send_callback('v2_runner_on_skipped', res)
return [res]
def get_hosts_left(self, iterator):
''' returns list of available hosts for this iterator by filtering out unreachables '''
hosts_left = []
for host in self._hosts_cache:
if host not in self._tqm._unreachable_hosts:
try:
hosts_left.append(self._inventory.hosts[host])
except KeyError:
hosts_left.append(self._inventory.get_host(host))
return hosts_left
def update_active_connections(self, results):
''' updates the current active persistent connections '''
for r in results:
if 'args' in r._task_fields:
socket_path = r._task_fields['args'].get('_ansible_socket')
if socket_path:
if r._host not in self._active_connections:
self._active_connections[r._host] = socket_path
class NextAction(object):
""" The next action after an interpreter's exit. """
REDO = 1
CONTINUE = 2
EXIT = 3
def __init__(self, result=EXIT):
self.result = result
class Debugger(cmd.Cmd):
prompt_continuous = '> ' # multiple lines
def __init__(self, task, host, task_vars, play_context, result, next_action):
# cmd.Cmd is old-style class
cmd.Cmd.__init__(self)
self.prompt = '[%s] %s (debug)> ' % (host, task)
self.intro = None
self.scope = {}
self.scope['task'] = task
self.scope['task_vars'] = task_vars
self.scope['host'] = host
self.scope['play_context'] = play_context
self.scope['result'] = result
self.next_action = next_action
def cmdloop(self):
try:
cmd.Cmd.cmdloop(self)
except KeyboardInterrupt:
pass
do_h = cmd.Cmd.do_help
def do_EOF(self, args):
"""Quit"""
return self.do_quit(args)
def do_quit(self, args):
"""Quit"""
display.display('User interrupted execution')
self.next_action.result = NextAction.EXIT
return True
do_q = do_quit
def do_continue(self, args):
"""Continue to next result"""
self.next_action.result = NextAction.CONTINUE
return True
do_c = do_continue
def do_redo(self, args):
"""Schedule task for re-execution. The re-execution may not be the next result"""
self.next_action.result = NextAction.REDO
return True
do_r = do_redo
def do_update_task(self, args):
"""Recreate the task from ``task._ds``, and template with updated ``task_vars``"""
templar = Templar(None, variables=self.scope['task_vars'])
task = self.scope['task']
task = task.load_data(task._ds)
task.post_validate(templar)
self.scope['task'] = task
do_u = do_update_task
def evaluate(self, args):
try:
return eval(args, globals(), self.scope)
except Exception:
t, v = sys.exc_info()[:2]
if isinstance(t, str):
exc_type_name = t
else:
exc_type_name = t.__name__
display.display('***%s:%s' % (exc_type_name, repr(v)))
raise
def do_pprint(self, args):
"""Pretty Print"""
try:
result = self.evaluate(args)
display.display(pprint.pformat(result))
except Exception:
pass
do_p = do_pprint
def execute(self, args):
try:
code = compile(args + '\n', '<stdin>', 'single')
exec(code, globals(), self.scope)
except Exception:
t, v = sys.exc_info()[:2]
if isinstance(t, str):
exc_type_name = t
else:
exc_type_name = t.__name__
display.display('***%s:%s' % (exc_type_name, repr(v)))
raise
def default(self, line):
try:
self.execute(line)
except Exception:
pass
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73657 |
Block with rescue stops next plays and end playbook successfully on some errors
|
##### SUMMARY
Depending on the error that a block with rescue catches, the play execution stops and the playbook ends successfully, but the next plays don't run. I was able to simulate it using `include_tasks`, where the included task uses a module with a wrong name (which can be provided by a user, so not something I have control over, because it might not be me running the play; the user may include and run custom tasks).
This is not a problem if the module with the wrong name is in the main playbook file, because the playbook fails at compile time, but when it is lazily loaded through an `include_tasks`, the task fails and the `rescue` (wrapping the `include_tasks`) catches the error successfully, yet the next plays don't run and the playbook ends successfully.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
block
##### ANSIBLE VERSION
```paste below
ansible 2.10.5
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.9 (default, Oct 8 2020, 12:12:24) [GCC 8.4.0]
```
##### CONFIGURATION
```paste below
```
(`ansible-config dump --only-changed` outputs nothing)
##### OS / ENVIRONMENT
Ubuntu 18.04 (docker container)
##### STEPS TO REPRODUCE
_hosts.yml:_
```ini
[main]
localhost ansible_connection=local
```
_playbook.yml:_
```yaml
- name: Test 01
hosts: main
tasks:
- block:
- include_tasks: "test.yml"
rescue:
- debug:
msg: "rescued"
- debug:
msg: "test 01"
- name: Test 02
hosts: main
tasks:
- debug:
msg: "test 02"
- fail:
msg: "end here"
```
**Works:**
_test.yml (gives an error, but works as expected):_
```yaml
- ansible.builtin.file:
param: "value"
```
_Outputs:_
```text
PLAY [Test 01] ************************************************************************************************************************
TASK [include_tasks] ******************************************************************************************************************
included: /main/dev/repos/cloud/test.yml for localhost
TASK [ansible.builtin.file] ***********************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Unsupported parameters for (ansible.builtin.file) module: param Supported parameters include: _diff_peek, _original_basename, access_time, access_time_format, attributes, follow, force, group, mode, modification_time, modification_time_format, owner, path, recurse, selevel, serole, setype, seuser, src, state, unsafe_writes"}
TASK [debug] **************************************************************************************************************************
ok: [localhost] => {
"msg": "rescued"
}
TASK [debug] **************************************************************************************************************************
ok: [localhost] => {
"msg": "test 01"
}
PLAY [Test 02] ************************************************************************************************************************
TASK [debug] **************************************************************************************************************************
ok: [localhost] => {
"msg": "test 02"
}
TASK [fail] ***************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "end here"}
PLAY RECAP ****************************************************************************************************************************
localhost : ok=4 changed=0 unreachable=0 failed=1 skipped=0 rescued=1 ignored=0
```
**Bug:**
_test.yml (gives an error, stops the next plays, ends the playbook successfully, as explained above):_
```yaml
- wrong.collection.module:
param: "value"
```
_Outputs:_
```ini
PLAY [Test 01] ************************************************************************************************************************
TASK [include_tasks] ******************************************************************************************************************
fatal: [localhost]: FAILED! => {"reason": "couldn't resolve module/action 'wrong.collection.module'. This often indicates a misspelling, missing collection, or incorrect module path.\n\nThe error appears to be in '/main/dev/repos/cloud/test.yml': line 1, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- wrong.collection.module:\n ^ here\n"}
TASK [debug] **************************************************************************************************************************
ok: [localhost] => {
"msg": "rescued"
}
TASK [debug] **************************************************************************************************************************
ok: [localhost] => {
"msg": "test 01"
}
PLAY RECAP ****************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
##### EXPECTED RESULTS
The same as in the case that works.
##### ACTUAL RESULTS
The next plays stop, even after the error is rescued, and the playbook ends successfully (if I run `ansible-playbook playbook.yml && echo "success" || echo "error"` it gives an error (due to the `fail` task at the end) in the 1st case (that works), but success in the 2nd, and the 2nd play doesn't run).
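For reference, a minimal way to observe the exit-status difference (assuming the `hosts.yml` inventory above; the non-zero code in the working case comes from the final `fail` task):
```sh
ansible-playbook -i hosts.yml playbook.yml
echo "exit code: $?"  # non-zero in the working case, 0 in the buggy case
```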
|
https://github.com/ansible/ansible/issues/73657
|
https://github.com/ansible/ansible/pull/73722
|
d23226a6f4737fb17b7b1be0d09f39e249832c7f
|
47ee28222794010ec447b7b6d73189eed1b46581
| 2021-02-19T03:01:38Z |
python
| 2021-11-01T15:05:39Z |
lib/ansible/plugins/strategy/free.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
name: free
short_description: Executes tasks without waiting for all hosts
description:
- Task execution is as fast as possible per batch as defined by C(serial) (default all).
Ansible will not wait for other hosts to finish the current task before queuing more tasks for other hosts.
All hosts are still attempted for the current task, but it prevents blocking new tasks for hosts that have already finished.
- With the free strategy, unlike the default linear strategy, a host that is slow or stuck on a specific task
won't hold up the rest of the hosts and tasks.
version_added: "2.0"
author: Ansible Core Team
'''
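# A minimal usage sketch (illustrative, not part of this module): the strategy
# is selected per play with the `strategy` keyword, for example:
#
#   - hosts: all
#     strategy: free
#     tasks:
#       - ansible.builtin.ping: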
import time
from ansible import constants as C
from ansible.errors import AnsibleError
from ansible.playbook.included_file import IncludedFile
from ansible.plugins.loader import action_loader
from ansible.plugins.strategy import StrategyBase
from ansible.template import Templar
from ansible.module_utils._text import to_text
from ansible.utils.display import Display
display = Display()
class StrategyModule(StrategyBase):
# This strategy manages throttling on its own, so we don't want it done in queue_task
ALLOW_BASE_THROTTLING = False
def _filter_notified_failed_hosts(self, iterator, notified_hosts):
# If --force-handlers is used we may act on hosts that have failed
return [host for host in notified_hosts if iterator.is_failed(host)]
def _filter_notified_hosts(self, notified_hosts):
'''
Filter notified hosts accordingly to strategy
'''
# We act only on hosts that are ready to flush handlers
return [host for host in notified_hosts
if host in self._flushed_hosts and self._flushed_hosts[host]]
def __init__(self, tqm):
super(StrategyModule, self).__init__(tqm)
self._host_pinned = False
def run(self, iterator, play_context):
'''
The "free" strategy is a bit more complex, in that it allows tasks to
be sent to hosts as quickly as they can be processed. This means that
some hosts may finish very quickly if run tasks result in little or no
work being done versus other systems.
The algorithm used here also tries to be more "fair" when iterating
through hosts by remembering the last host in the list to be given a task
and starting the search from there as opposed to the top of the hosts
list again, which would end up favoring hosts near the beginning of the
list.
'''
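# Descriptive note on the scan below: last_host advances by one on each
# iteration, wraps past the end of hosts_left back to index 0, and the
# inner loop breaks once it returns to starting_host, so every host is
# offered work exactly once per pass.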
# the last host to be given a task
last_host = 0
result = self._tqm.RUN_OK
# start with all workers being counted as being free
workers_free = len(self._workers)
self._set_hosts_cache(iterator._play)
if iterator._play.max_fail_percentage is not None:
display.warning("Using max_fail_percentage with the free strategy is not supported, as tasks are executed independently on each host")
work_to_do = True
while work_to_do and not self._tqm._terminated:
hosts_left = self.get_hosts_left(iterator)
if len(hosts_left) == 0:
self._tqm.send_callback('v2_playbook_on_no_hosts_remaining')
result = False
break
work_to_do = False # assume we have no more work to do
starting_host = last_host # save current position so we know when we've looped back around and need to break
# try and find an unblocked host with a task to run
host_results = []
while True:
host = hosts_left[last_host]
display.debug("next free host: %s" % host)
host_name = host.get_name()
# peek at the next task for the host, to see if there's
# anything to do for this host
(state, task) = iterator.get_next_task_for_host(host, peek=True)
display.debug("free host state: %s" % state, host=host_name)
display.debug("free host task: %s" % task, host=host_name)
# check if there is work to do, either there is a task or the host is still blocked which could
# mean that it is processing an include task and after its result is processed there might be
# more tasks to run
if (task or self._blocked_hosts.get(host_name, False)) and not self._tqm._unreachable_hosts.get(host_name, False):
display.debug("this host has work to do", host=host_name)
# set the flag so the outer loop knows we've still found
# some work which needs to be done
work_to_do = True
if not self._tqm._unreachable_hosts.get(host_name, False) and task:
# check to see if this host is blocked (still executing a previous task)
if not self._blocked_hosts.get(host_name, False):
display.debug("getting variables", host=host_name)
task_vars = self._variable_manager.get_vars(play=iterator._play, host=host, task=task,
_hosts=self._hosts_cache,
_hosts_all=self._hosts_cache_all)
self.add_tqm_variables(task_vars, play=iterator._play)
templar = Templar(loader=self._loader, variables=task_vars)
display.debug("done getting variables", host=host_name)
try:
throttle = int(templar.template(task.throttle))
except Exception as e:
raise AnsibleError("Failed to convert the throttle value to an integer.", obj=task._ds, orig_exc=e)
if throttle > 0:
same_tasks = 0
for worker in self._workers:
if worker and worker.is_alive() and worker._task._uuid == task._uuid:
same_tasks += 1
display.debug("task: %s, same_tasks: %d" % (task.get_name(), same_tasks))
if same_tasks >= throttle:
break
# pop the task, mark the host blocked, and queue it
self._blocked_hosts[host_name] = True
(state, task) = iterator.get_next_task_for_host(host)
try:
action = action_loader.get(task.action, class_only=True, collection_list=task.collections)
except KeyError:
# we don't care here, because the action may simply not have a
# corresponding action plugin
action = None
try:
task.name = to_text(templar.template(task.name, fail_on_undefined=False), nonstring='empty')
display.debug("done templating", host=host_name)
except Exception:
# just ignore any errors during task name templating,
# we don't care if it just shows the raw name
display.debug("templating failed for some reason", host=host_name)
run_once = templar.template(task.run_once) or action and getattr(action, 'BYPASS_HOST_LOOP', False)
if run_once:
if action and getattr(action, 'BYPASS_HOST_LOOP', False):
raise AnsibleError("The '%s' module bypasses the host loop, which is currently not supported in the free strategy "
"and would instead execute for every host in the inventory list." % task.action, obj=task._ds)
else:
display.warning("Using run_once with the free strategy is not currently supported. This task will still be "
"executed for every host in the inventory list.")
# check to see if this task should be skipped, due to it being a member of a
# role which has already run (and whether that role allows duplicate execution)
if task._role and task._role.has_run(host):
# If there is no metadata, the default behavior is to not allow duplicates,
# if there is metadata, check to see if the allow_duplicates flag was set to true
if task._role._metadata is None or task._role._metadata and not task._role._metadata.allow_duplicates:
display.debug("'%s' skipped because role has already run" % task, host=host_name)
del self._blocked_hosts[host_name]
continue
if task.action in C._ACTION_META:
self._execute_meta(task, play_context, iterator, target_host=host)
self._blocked_hosts[host_name] = False
else:
# handle step if needed, skip meta actions as they are used internally
if not self._step or self._take_step(task, host_name):
if task.any_errors_fatal:
display.warning("Using any_errors_fatal with the free strategy is not supported, "
"as tasks are executed independently on each host")
self._tqm.send_callback('v2_playbook_on_task_start', task, is_conditional=False)
self._queue_task(host, task, task_vars, play_context)
# each task is counted as a worker being busy
workers_free -= 1
del task_vars
else:
display.debug("%s is blocked, skipping for now" % host_name)
# all workers have tasks to do (and the current host isn't done with the play).
# loop back to starting host and break out
if self._host_pinned and workers_free == 0 and work_to_do:
last_host = starting_host
break
# move on to the next host and make sure we
# haven't gone past the end of our hosts list
last_host += 1
if last_host > len(hosts_left) - 1:
last_host = 0
# if we've looped around back to the start, break out
if last_host == starting_host:
break
results = self._process_pending_results(iterator)
host_results.extend(results)
# each result is counted as a worker being free again
workers_free += len(results)
self.update_active_connections(results)
included_files = IncludedFile.process_include_results(
host_results,
iterator=iterator,
loader=self._loader,
variable_manager=self._variable_manager
)
if len(included_files) > 0:
all_blocks = dict((host, []) for host in hosts_left)
for included_file in included_files:
display.debug("collecting new blocks for %s" % included_file)
try:
if included_file._is_role:
new_ir = self._copy_included_file(included_file)
new_blocks, handler_blocks = new_ir.get_block_list(
play=iterator._play,
variable_manager=self._variable_manager,
loader=self._loader,
)
else:
new_blocks = self._load_included_file(included_file, iterator=iterator)
except AnsibleError as e:
for host in included_file._hosts:
iterator.mark_host_failed(host)
display.warning(to_text(e))
continue
for new_block in new_blocks:
task_vars = self._variable_manager.get_vars(play=iterator._play, task=new_block.get_first_parent_include(),
_hosts=self._hosts_cache,
_hosts_all=self._hosts_cache_all)
final_block = new_block.filter_tagged_tasks(task_vars)
for host in hosts_left:
if host in included_file._hosts:
all_blocks[host].append(final_block)
display.debug("done collecting new blocks for %s" % included_file)
display.debug("adding all collected blocks from %d included file(s) to iterator" % len(included_files))
for host in hosts_left:
iterator.add_tasks(host, all_blocks[host])
display.debug("done adding collected blocks to iterator")
# pause briefly so we don't spin lock
time.sleep(C.DEFAULT_INTERNAL_POLL_INTERVAL)
# collect all the final results
results = self._wait_on_pending_results(iterator)
# run the base class run() method, which executes the cleanup function
# and runs any outstanding handlers which have been triggered
return super(StrategyModule, self).run(iterator, play_context, result)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,657 |
Block with rescue stops next plays and end playbook successfully on some errors
|
##### SUMMARY
Depending on the error that occurs when a block with rescue catches it, the play execution stops and the playbook ends successfully, but the next plays don't run. I was able to simulate it using `include_tasks`, where the task inside uses a module with a wrong name (the tasks can be provided by a user, so this is not something I have control over: it might not be me running the play, and the user may include and run custom tasks).
This is not a problem if the module with the wrong name is in the main playbook file, because the playbook then fails at compile time; but when it is lazily loaded through an `include_tasks`, the task fails and the `rescue` (wrapping the `include_tasks`) catches the error successfully, yet the next plays don't run and the playbook ends successfully.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
block
##### ANSIBLE VERSION
```paste below
ansible 2.10.5
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.9 (default, Oct 8 2020, 12:12:24) [GCC 8.4.0]
```
##### CONFIGURATION
```paste below
```
(`ansible-config dump --only-changed` outputs nothing)
##### OS / ENVIRONMENT
Ubuntu 18.04 (docker container)
##### STEPS TO REPRODUCE
_hosts.yml:_
```yaml
[main]
localhost ansible_connection=local
```
_playbook.yml:_
```yaml
- name: Test 01
hosts: main
tasks:
- block:
- include_tasks: "test.yml"
rescue:
- debug:
msg: "rescued"
- debug:
msg: "test 01"
- name: Test 02
hosts: main
tasks:
- debug:
msg: "test 02"
- fail:
msg: "end here"
```
**Works:**
_test.yml (gives an error, but works as expected):_
```yaml
- ansible.builtin.file:
param: "value"
```
_Outputs:_
```ini
PLAY [Test 01] ************************************************************************************************************************
TASK [include_tasks] ******************************************************************************************************************
included: /main/dev/repos/cloud/test.yml for localhost
TASK [ansible.builtin.file] ***********************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Unsupported parameters for (ansible.builtin.file) module: param Supported parameters include: _diff_peek, _original_basename, access_time, access_time_format, attributes, follow, force, group, mode, modification_time, modification_time_format, owner, path, recurse, selevel, serole, setype, seuser, src, state, unsafe_writes"}
TASK [debug] **************************************************************************************************************************
ok: [localhost] => {
"msg": "rescued"
}
TASK [debug] **************************************************************************************************************************
ok: [localhost] => {
"msg": "test 01"
}
PLAY [Test 02] ************************************************************************************************************************
TASK [debug] **************************************************************************************************************************
ok: [localhost] => {
"msg": "test 02"
}
TASK [fail] ***************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "end here"}
PLAY RECAP ****************************************************************************************************************************
localhost : ok=4 changed=0 unreachable=0 failed=1 skipped=0 rescued=1 ignored=0
```
**Bug:**
_test.yml (gives an error, stops the next plays, ends the playbook successfully, as explained above):_
```yaml
- wrong.collection.module:
param: "value"
```
_Outputs:_
```ini
PLAY [Test 01] ************************************************************************************************************************
TASK [include_tasks] ******************************************************************************************************************
fatal: [localhost]: FAILED! => {"reason": "couldn't resolve module/action 'wrong.collection.module'. This often indicates a misspelling, missing collection, or incorrect module path.\n\nThe error appears to be in '/main/dev/repos/cloud/test.yml': line 1, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- wrong.collection.module:\n ^ here\n"}
TASK [debug] **************************************************************************************************************************
ok: [localhost] => {
"msg": "rescued"
}
TASK [debug] **************************************************************************************************************************
ok: [localhost] => {
"msg": "test 01"
}
PLAY RECAP ****************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
##### EXPECTED RESULTS
The same as in the case that works.
##### ACTUAL RESULTS
The next plays stop, even after the error is rescued, and the playbook ends successfully (if I run `ansible-playbook playbook.yml && echo "success" || echo "error"` it gives an error (due to the `fail` task at the end) in the 1st case (that works), but success in the 2nd, and the 2nd play doesn't run).
|
https://github.com/ansible/ansible/issues/73657
|
https://github.com/ansible/ansible/pull/73722
|
d23226a6f4737fb17b7b1be0d09f39e249832c7f
|
47ee28222794010ec447b7b6d73189eed1b46581
| 2021-02-19T03:01:38Z |
python
| 2021-11-01T15:05:39Z |
lib/ansible/plugins/strategy/linear.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
name: linear
short_description: Executes tasks in a linear fashion
description:
- Task execution is in lockstep per host batch as defined by C(serial) (default all).
Up to the fork limit of hosts will execute each task at the same time and then
the next series of hosts until the batch is done, before going on to the next task.
version_added: "2.0"
notes:
- This was the default Ansible behaviour before 'strategy plugins' were introduced in 2.0.
author: Ansible Core Team
'''
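# A minimal usage sketch (illustrative, not part of this module): linear is
# the default strategy, so it rarely needs to be named, but it can be set
# explicitly alongside C(serial), for example:
#
#   - hosts: all
#     strategy: linear
#     serial: 2
#     tasks:
#       - ansible.builtin.ping: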
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleAssertionError
from ansible.executor.play_iterator import IteratingStates, FailedStates
from ansible.module_utils._text import to_text
from ansible.playbook.block import Block
from ansible.playbook.included_file import IncludedFile
from ansible.playbook.task import Task
from ansible.plugins.loader import action_loader
from ansible.plugins.strategy import StrategyBase
from ansible.template import Templar
from ansible.utils.display import Display
display = Display()
class StrategyModule(StrategyBase):
noop_task = None
def _replace_with_noop(self, target):
if self.noop_task is None:
raise AnsibleAssertionError('strategy.linear.StrategyModule.noop_task is None, need Task()')
result = []
for el in target:
if isinstance(el, Task):
result.append(self.noop_task)
elif isinstance(el, Block):
result.append(self._create_noop_block_from(el, el._parent))
return result
def _create_noop_block_from(self, original_block, parent):
noop_block = Block(parent_block=parent)
noop_block.block = self._replace_with_noop(original_block.block)
noop_block.always = self._replace_with_noop(original_block.always)
noop_block.rescue = self._replace_with_noop(original_block.rescue)
return noop_block
def _prepare_and_create_noop_block_from(self, original_block, parent, iterator):
self.noop_task = Task()
self.noop_task.action = 'meta'
self.noop_task.args['_raw_params'] = 'noop'
self.noop_task.implicit = True
self.noop_task.set_loader(iterator._play._loader)
return self._create_noop_block_from(original_block, parent)
def _get_next_task_lockstep(self, hosts, iterator):
'''
Returns a list of (host, task) tuples, where the task may
be a noop task to keep the iterator in lock step across
all hosts.
'''
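# For example (illustrative): if host2 is still in TASKS while host1 has
# already moved to RESCUE, the TASKS pass wins below, so host2 receives its
# real task and host1 receives the noop meta task, keeping both in lock step.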
noop_task = Task()
noop_task.action = 'meta'
noop_task.args['_raw_params'] = 'noop'
noop_task.implicit = True
noop_task.set_loader(iterator._play._loader)
host_tasks = {}
display.debug("building list of next tasks for hosts")
for host in hosts:
host_tasks[host.name] = iterator.get_next_task_for_host(host, peek=True)
display.debug("done building task lists")
num_setups = 0
num_tasks = 0
num_rescue = 0
num_always = 0
display.debug("counting tasks in each state of execution")
host_tasks_to_run = [(host, state_task)
for host, state_task in host_tasks.items()
if state_task and state_task[1]]
if host_tasks_to_run:
try:
lowest_cur_block = min(
(iterator.get_active_state(s).cur_block for h, (s, t) in host_tasks_to_run
if s.run_state != IteratingStates.COMPLETE))
except ValueError:
lowest_cur_block = None
else:
# empty host_tasks_to_run will just run till the end of the function
# without ever touching lowest_cur_block
lowest_cur_block = None
for (k, v) in host_tasks_to_run:
(s, t) = v
s = iterator.get_active_state(s)
if s.cur_block > lowest_cur_block:
# Not the current block, ignore it
continue
if s.run_state == IteratingStates.SETUP:
num_setups += 1
elif s.run_state == IteratingStates.TASKS:
num_tasks += 1
elif s.run_state == IteratingStates.RESCUE:
num_rescue += 1
elif s.run_state == IteratingStates.ALWAYS:
num_always += 1
display.debug("done counting tasks in each state of execution:\n\tnum_setups: %s\n\tnum_tasks: %s\n\tnum_rescue: %s\n\tnum_always: %s" % (num_setups,
num_tasks,
num_rescue,
num_always))
def _advance_selected_hosts(hosts, cur_block, cur_state):
'''
This helper returns the task for all hosts in the requested
state, otherwise they get a noop dummy task. This also advances
the state of the host, since the given states are determined
while using peek=True.
'''
# we return the values in the order they were originally
# specified in the given hosts array
rvals = []
display.debug("starting to advance hosts")
for host in hosts:
host_state_task = host_tasks.get(host.name)
if host_state_task is None:
continue
(s, t) = host_state_task
s = iterator.get_active_state(s)
if t is None:
continue
if s.run_state == cur_state and s.cur_block == cur_block:
new_t = iterator.get_next_task_for_host(host)
rvals.append((host, t))
else:
rvals.append((host, noop_task))
display.debug("done advancing hosts to next task")
return rvals
# if any hosts are in SETUP, return the setup task
# while all other hosts get a noop
if num_setups:
display.debug("advancing hosts in SETUP")
return _advance_selected_hosts(hosts, lowest_cur_block, IteratingStates.SETUP)
# if any hosts are in TASKS, return the next normal
# task for these hosts, while all other hosts get a noop
if num_tasks:
display.debug("advancing hosts in TASKS")
return _advance_selected_hosts(hosts, lowest_cur_block, IteratingStates.TASKS)
# if any hosts are in RESCUE, return the next rescue
# task for these hosts, while all other hosts get a noop
if num_rescue:
display.debug("advancing hosts in RESCUE")
return _advance_selected_hosts(hosts, lowest_cur_block, IteratingStates.RESCUE)
# if any hosts are in ALWAYS, return the next always
# task for these hosts, while all other hosts get a noop
if num_always:
display.debug("advancing hosts in ALWAYS")
return _advance_selected_hosts(hosts, lowest_cur_block, IteratingStates.ALWAYS)
# at this point, everything must be COMPLETE, so we
# return None for all hosts in the list
display.debug("all hosts are done, so returning None's for all hosts")
return [(host, None) for host in hosts]
def run(self, iterator, play_context):
'''
The linear strategy is simple - get the next task and queue
it for all hosts, then wait for the queue to drain before
moving on to the next task
'''
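# Rough shape of the loop below (descriptive only): pick the lockstep
# (host, task) pairs, queue them, drain the results queue, splice in any
# included files, then apply the any_errors_fatal / max_fail_percentage
# bookkeeping before the next pass.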
# iterate over each task, while there is one left to run
result = self._tqm.RUN_OK
work_to_do = True
self._set_hosts_cache(iterator._play)
while work_to_do and not self._tqm._terminated:
try:
display.debug("getting the remaining hosts for this loop")
hosts_left = self.get_hosts_left(iterator)
display.debug("done getting the remaining hosts for this loop")
# queue up this task for each host in the inventory
callback_sent = False
work_to_do = False
host_results = []
host_tasks = self._get_next_task_lockstep(hosts_left, iterator)
# skip control
skip_rest = False
choose_step = True
# flag set if task is set to any_errors_fatal
any_errors_fatal = False
results = []
for (host, task) in host_tasks:
if not task:
continue
if self._tqm._terminated:
break
run_once = False
work_to_do = True
# check to see if this task should be skipped, due to it being a member of a
# role which has already run (and whether that role allows duplicate execution)
if task._role and task._role.has_run(host):
# If there is no metadata, the default behavior is to not allow duplicates,
# if there is metadata, check to see if the allow_duplicates flag was set to true
if task._role._metadata is None or task._role._metadata and not task._role._metadata.allow_duplicates:
display.debug("'%s' skipped because role has already run" % task)
continue
display.debug("getting variables")
task_vars = self._variable_manager.get_vars(play=iterator._play, host=host, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
self.add_tqm_variables(task_vars, play=iterator._play)
templar = Templar(loader=self._loader, variables=task_vars)
display.debug("done getting variables")
# test to see if the task across all hosts points to an action plugin which
# sets BYPASS_HOST_LOOP to true, or if it has run_once enabled. If so, we
# will only send this task to the first host in the list.
task_action = templar.template(task.action)
try:
action = action_loader.get(task_action, class_only=True, collection_list=task.collections)
except KeyError:
# we don't care here, because the action may simply not have a
# corresponding action plugin
action = None
if task_action in C._ACTION_META:
# for the linear strategy, we run meta tasks just once and for
# all hosts currently being iterated over rather than one host
results.extend(self._execute_meta(task, play_context, iterator, host))
if task.args.get('_raw_params', None) not in ('noop', 'reset_connection', 'end_host', 'role_complete'):
run_once = True
if (task.any_errors_fatal or run_once) and not task.ignore_errors:
any_errors_fatal = True
else:
# handle step if needed, skip meta actions as they are used internally
if self._step and choose_step:
if self._take_step(task):
choose_step = False
else:
skip_rest = True
break
run_once = templar.template(task.run_once) or action and getattr(action, 'BYPASS_HOST_LOOP', False)
if (task.any_errors_fatal or run_once) and not task.ignore_errors:
any_errors_fatal = True
if not callback_sent:
display.debug("sending task start callback, copying the task so we can template it temporarily")
saved_name = task.name
display.debug("done copying, going to template now")
try:
task.name = to_text(templar.template(task.name, fail_on_undefined=False), nonstring='empty')
display.debug("done templating")
except Exception:
# just ignore any errors during task name templating,
# we don't care if it just shows the raw name
display.debug("templating failed for some reason")
display.debug("here goes the callback...")
self._tqm.send_callback('v2_playbook_on_task_start', task, is_conditional=False)
task.name = saved_name
callback_sent = True
display.debug("sending task start callback")
self._blocked_hosts[host.get_name()] = True
self._queue_task(host, task, task_vars, play_context)
del task_vars
# if we're bypassing the host loop, break out now
if run_once:
break
results += self._process_pending_results(iterator, max_passes=max(1, int(len(self._tqm._workers) * 0.1)))
# go to next host/task group
if skip_rest:
continue
display.debug("done queuing things up, now waiting for results queue to drain")
if self._pending_results > 0:
results += self._wait_on_pending_results(iterator)
host_results.extend(results)
self.update_active_connections(results)
included_files = IncludedFile.process_include_results(
host_results,
iterator=iterator,
loader=self._loader,
variable_manager=self._variable_manager
)
include_failure = False
if len(included_files) > 0:
display.debug("we have included files to process")
display.debug("generating all_blocks data")
all_blocks = dict((host, []) for host in hosts_left)
display.debug("done generating all_blocks data")
for included_file in included_files:
display.debug("processing included file: %s" % included_file._filename)
# included hosts get the task list while those excluded get an equal-length
# list of noop tasks, to make sure that they continue running in lock-step
try:
if included_file._is_role:
new_ir = self._copy_included_file(included_file)
new_blocks, handler_blocks = new_ir.get_block_list(
play=iterator._play,
variable_manager=self._variable_manager,
loader=self._loader,
)
else:
new_blocks = self._load_included_file(included_file, iterator=iterator)
display.debug("iterating over new_blocks loaded from include file")
for new_block in new_blocks:
task_vars = self._variable_manager.get_vars(
play=iterator._play,
task=new_block.get_first_parent_include(),
_hosts=self._hosts_cache,
_hosts_all=self._hosts_cache_all,
)
display.debug("filtering new block on tags")
final_block = new_block.filter_tagged_tasks(task_vars)
display.debug("done filtering new block on tags")
noop_block = self._prepare_and_create_noop_block_from(final_block, task._parent, iterator)
for host in hosts_left:
if host in included_file._hosts:
all_blocks[host].append(final_block)
else:
all_blocks[host].append(noop_block)
display.debug("done iterating over new_blocks loaded from include file")
except AnsibleError as e:
for host in included_file._hosts:
self._tqm._failed_hosts[host.name] = True
iterator.mark_host_failed(host)
display.error(to_text(e), wrap_text=False)
include_failure = True
continue
# finally go through all of the hosts and append the
# accumulated blocks to their list of tasks
display.debug("extending task lists for all hosts with included blocks")
for host in hosts_left:
iterator.add_tasks(host, all_blocks[host])
display.debug("done extending task lists")
display.debug("done processing included files")
display.debug("results queue empty")
display.debug("checking for any_errors_fatal")
failed_hosts = []
unreachable_hosts = []
for res in results:
# execute_meta() does not set 'failed' in the TaskResult
# so we skip checking it with the meta tasks and look just at the iterator
if (res.is_failed() or res._task.action in C._ACTION_META) and iterator.is_failed(res._host):
failed_hosts.append(res._host.name)
elif res.is_unreachable():
unreachable_hosts.append(res._host.name)
# if any_errors_fatal and we had an error, mark all hosts as failed
if any_errors_fatal and (len(failed_hosts) > 0 or len(unreachable_hosts) > 0):
dont_fail_states = frozenset([IteratingStates.RESCUE, IteratingStates.ALWAYS])
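# e.g. hosts already handling the failure in RESCUE/ALWAYS are spared,
# unless the rescue section itself failed (fail_state & FailedStates.RESCUE),
# in which case they are marked failed below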
for host in hosts_left:
(s, _) = iterator.get_next_task_for_host(host, peek=True)
# the state may actually be in a child state, use the get_active_state()
# method in the iterator to figure out the true active state
s = iterator.get_active_state(s)
if s.run_state not in dont_fail_states or \
s.run_state == IteratingStates.RESCUE and s.fail_state & FailedStates.RESCUE != 0:
self._tqm._failed_hosts[host.name] = True
result |= self._tqm.RUN_FAILED_BREAK_PLAY
display.debug("done checking for any_errors_fatal")
display.debug("checking for max_fail_percentage")
if iterator._play.max_fail_percentage is not None and len(results) > 0:
percentage = iterator._play.max_fail_percentage / 100.0
if (len(self._tqm._failed_hosts) / iterator.batch_size) > percentage:
for host in hosts_left:
# don't double-mark hosts, or the iterator will potentially
# fail them out of the rescue/always states
if host.name not in failed_hosts:
self._tqm._failed_hosts[host.name] = True
iterator.mark_host_failed(host)
self._tqm.send_callback('v2_playbook_on_no_hosts_remaining')
result |= self._tqm.RUN_FAILED_BREAK_PLAY
display.debug('(%s failed / %s total )> %s max fail' % (len(self._tqm._failed_hosts), iterator.batch_size, percentage))
display.debug("done checking for max_fail_percentage")
display.debug("checking to see if all hosts have failed and the running result is not ok")
if result != self._tqm.RUN_OK and len(self._tqm._failed_hosts) >= len(hosts_left):
display.debug("^ not ok, so returning result now")
self._tqm.send_callback('v2_playbook_on_no_hosts_remaining')
return result
display.debug("done checking to see if all hosts have failed")
except (IOError, EOFError) as e:
display.debug("got IOError/EOFError in task loop: %s" % e)
# most likely an abort, return failed
return self._tqm.RUN_UNKNOWN_ERROR
# run the base class run() method, which executes the cleanup function
# and runs any outstanding handlers which have been triggered
return super(StrategyModule, self).run(iterator, play_context, result)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,657 |
Block with rescue stops next plays and end playbook successfully on some errors
|
##### SUMMARY
Depending on the error that occurs when a block with rescue catches it, the play execution stops and the playbook ends successfully, but the next plays don't run. I was able to simulate it using `include_tasks`, where the task inside uses a module with a wrong name (the tasks can be provided by a user, so this is not something I have control over: it might not be me running the play, and the user may include and run custom tasks).
This is not a problem if the module with the wrong name is in the main playbook file, because the playbook then fails at compile time; but when it is lazily loaded through an `include_tasks`, the task fails and the `rescue` (wrapping the `include_tasks`) catches the error successfully, yet the next plays don't run and the playbook ends successfully.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
block
##### ANSIBLE VERSION
```paste below
ansible 2.10.5
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.9 (default, Oct 8 2020, 12:12:24) [GCC 8.4.0]
```
##### CONFIGURATION
```paste below
```
(`ansible-config dump --only-changed` outputs nothing)
##### OS / ENVIRONMENT
Ubuntu 18.04 (docker container)
##### STEPS TO REPRODUCE
_hosts.yml:_
```yaml
[main]
localhost ansible_connection=local
```
_playbook.yml:_
```yaml
- name: Test 01
hosts: main
tasks:
- block:
- include_tasks: "test.yml"
rescue:
- debug:
msg: "rescued"
- debug:
msg: "test 01"
- name: Test 02
hosts: main
tasks:
- debug:
msg: "test 02"
- fail:
msg: "end here"
```
**Works:**
_test.yml (gives an error, but works as expected):_
```yaml
- ansible.builtin.file:
param: "value"
```
_Outputs:_
```ini
PLAY [Test 01] ************************************************************************************************************************
TASK [include_tasks] ******************************************************************************************************************
included: /main/dev/repos/cloud/test.yml for localhost
TASK [ansible.builtin.file] ***********************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Unsupported parameters for (ansible.builtin.file) module: param Supported parameters include: _diff_peek, _original_basename, access_time, access_time_format, attributes, follow, force, group, mode, modification_time, modification_time_format, owner, path, recurse, selevel, serole, setype, seuser, src, state, unsafe_writes"}
TASK [debug] **************************************************************************************************************************
ok: [localhost] => {
"msg": "rescued"
}
TASK [debug] **************************************************************************************************************************
ok: [localhost] => {
"msg": "test 01"
}
PLAY [Test 02] ************************************************************************************************************************
TASK [debug] **************************************************************************************************************************
ok: [localhost] => {
"msg": "test 02"
}
TASK [fail] ***************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "end here"}
PLAY RECAP ****************************************************************************************************************************
localhost : ok=4 changed=0 unreachable=0 failed=1 skipped=0 rescued=1 ignored=0
```
**Bug:**
_test.yml (gives an error, stops the next plays, ends the playbook successfully, as explained above):_
```yaml
- wrong.collection.module:
param: "value"
```
_Outputs:_
```ini
PLAY [Test 01] ************************************************************************************************************************
TASK [include_tasks] ******************************************************************************************************************
fatal: [localhost]: FAILED! => {"reason": "couldn't resolve module/action 'wrong.collection.module'. This often indicates a misspelling, missing collection, or incorrect module path.\n\nThe error appears to be in '/main/dev/repos/cloud/test.yml': line 1, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- wrong.collection.module:\n ^ here\n"}
TASK [debug] **************************************************************************************************************************
ok: [localhost] => {
"msg": "rescued"
}
TASK [debug] **************************************************************************************************************************
ok: [localhost] => {
"msg": "test 01"
}
PLAY RECAP ****************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
##### EXPECTED RESULTS
The same as in the case that works.
##### ACTUAL RESULTS
The next plays stop, even after the error is rescued, and the playbook ends successfully (if I run `ansible-playbook playbook.yml && echo "success" || echo "error"` it gives an error (due to the `fail` task at the end) in the 1st case (that works), but success in the 2nd, and the 2nd play doesn't run).
|
https://github.com/ansible/ansible/issues/73657
|
https://github.com/ansible/ansible/pull/73722
|
d23226a6f4737fb17b7b1be0d09f39e249832c7f
|
47ee28222794010ec447b7b6d73189eed1b46581
| 2021-02-19T03:01:38Z |
python
| 2021-11-01T15:05:39Z |
test/integration/targets/include_import/issue73657.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,657 |
Block with rescue stops next plays and end playbook successfully on some errors
|
##### SUMMARY
Depending on the error that occurs when a block with rescue catches it, the play execution stops and the playbook ends successfully, but the next plays don't run. I was able to simulate it using `include_tasks`, where the task inside uses a module with a wrong name (the tasks can be provided by a user, so this is not something I have control over: it might not be me running the play, and the user may include and run custom tasks).
This is not a problem if the module with the wrong name is in the main playbook file, because the playbook then fails at compile time; but when it is lazily loaded through an `include_tasks`, the task fails and the `rescue` (wrapping the `include_tasks`) catches the error successfully, yet the next plays don't run and the playbook ends successfully.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
block
##### ANSIBLE VERSION
```paste below
ansible 2.10.5
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.9 (default, Oct 8 2020, 12:12:24) [GCC 8.4.0]
```
##### CONFIGURATION
```paste below
```
(`ansible-config dump --only-changed` outputs nothing)
##### OS / ENVIRONMENT
Ubuntu 18.04 (docker container)
##### STEPS TO REPRODUCE
_hosts.yml:_
```yaml
[main]
localhost ansible_connection=local
```
_playbook.yml:_
```yaml
- name: Test 01
hosts: main
tasks:
- block:
- include_tasks: "test.yml"
rescue:
- debug:
msg: "rescued"
- debug:
msg: "test 01"
- name: Test 02
hosts: main
tasks:
- debug:
msg: "test 02"
- fail:
msg: "end here"
```
**Works:**
_test.yml (gives an error, but works as expected):_
```yaml
- ansible.builtin.file:
param: "value"
```
_Outputs:_
```ini
PLAY [Test 01] ************************************************************************************************************************
TASK [include_tasks] ******************************************************************************************************************
included: /main/dev/repos/cloud/test.yml for localhost
TASK [ansible.builtin.file] ***********************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Unsupported parameters for (ansible.builtin.file) module: param Supported parameters include: _diff_peek, _original_basename, access_time, access_time_format, attributes, follow, force, group, mode, modification_time, modification_time_format, owner, path, recurse, selevel, serole, setype, seuser, src, state, unsafe_writes"}
TASK [debug] **************************************************************************************************************************
ok: [localhost] => {
"msg": "rescued"
}
TASK [debug] **************************************************************************************************************************
ok: [localhost] => {
"msg": "test 01"
}
PLAY [Test 02] ************************************************************************************************************************
TASK [debug] **************************************************************************************************************************
ok: [localhost] => {
"msg": "test 02"
}
TASK [fail] ***************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "end here"}
PLAY RECAP ****************************************************************************************************************************
localhost : ok=4 changed=0 unreachable=0 failed=1 skipped=0 rescued=1 ignored=0
```
**Bug:**
_test.yml (gives an error, stops the next plays, ends the playbook successfully, as explained above):_
```yaml
- wrong.collection.module:
param: "value"
```
_Outputs:_
```ini
PLAY [Test 01] ************************************************************************************************************************
TASK [include_tasks] ******************************************************************************************************************
fatal: [localhost]: FAILED! => {"reason": "couldn't resolve module/action 'wrong.collection.module'. This often indicates a misspelling, missing collection, or incorrect module path.\n\nThe error appears to be in '/main/dev/repos/cloud/test.yml': line 1, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- wrong.collection.module:\n ^ here\n"}
TASK [debug] **************************************************************************************************************************
ok: [localhost] => {
"msg": "rescued"
}
TASK [debug] **************************************************************************************************************************
ok: [localhost] => {
"msg": "test 01"
}
PLAY RECAP ****************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
##### EXPECTED RESULTS
The same as in the case that works.
##### ACTUAL RESULTS
The next plays stop, even after the error is rescued, and the playbook ends successfully (if I run `ansible-playbook playbook.yml && echo "success" || echo "error"` it gives an error (due to the `fail` task at the end) in the 1st case (that works), but success in the 2nd, and the 2nd play doesn't run).
|
https://github.com/ansible/ansible/issues/73657
|
https://github.com/ansible/ansible/pull/73722
|
d23226a6f4737fb17b7b1be0d09f39e249832c7f
|
47ee28222794010ec447b7b6d73189eed1b46581
| 2021-02-19T03:01:38Z |
python
| 2021-11-01T15:05:39Z |
test/integration/targets/include_import/issue73657_tasks.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,657 |
Block with rescue stops next plays and end playbook successfully on some errors
|
##### SUMMARY
Depending on the error that occurs when a block with rescue catches it, the play execution stops and the playbook ends successfully, but the next plays don't run. I was able to simulate it using `include_tasks`, where the task inside uses a module with a wrong name (the tasks can be provided by a user, so this is not something I have control over: it might not be me running the play, and the user may include and run custom tasks).
This is not a problem if the module with the wrong name is in the main playbook file, because the playbook then fails at compile time; but when it is lazily loaded through an `include_tasks`, the task fails and the `rescue` (wrapping the `include_tasks`) catches the error successfully, yet the next plays don't run and the playbook ends successfully.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
block
##### ANSIBLE VERSION
```paste below
ansible 2.10.5
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.9 (default, Oct 8 2020, 12:12:24) [GCC 8.4.0]
```
##### CONFIGURATION
```paste below
```
(`ansible-config dump --only-changed` outputs nothing)
##### OS / ENVIRONMENT
Ubuntu 18.04 (docker container)
##### STEPS TO REPRODUCE
_hosts.yml:_
```yaml
[main]
localhost ansible_connection=local
```
_playbook.yml:_
```yaml
- name: Test 01
hosts: main
tasks:
- block:
- include_tasks: "test.yml"
rescue:
- debug:
msg: "rescued"
- debug:
msg: "test 01"
- name: Test 02
hosts: main
tasks:
- debug:
msg: "test 02"
- fail:
msg: "end here"
```
**Works:**
_test.yml (gives an error, but works as expected):_
```yaml
- ansible.builtin.file:
param: "value"
```
_Outputs:_
```ini
PLAY [Test 01] ************************************************************************************************************************
TASK [include_tasks] ******************************************************************************************************************
included: /main/dev/repos/cloud/test.yml for localhost
TASK [ansible.builtin.file] ***********************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Unsupported parameters for (ansible.builtin.file) module: param Supported parameters include: _diff_peek, _original_basename, access_time, access_time_format, attributes, follow, force, group, mode, modification_time, modification_time_format, owner, path, recurse, selevel, serole, setype, seuser, src, state, unsafe_writes"}
TASK [debug] **************************************************************************************************************************
ok: [localhost] => {
"msg": "rescued"
}
TASK [debug] **************************************************************************************************************************
ok: [localhost] => {
"msg": "test 01"
}
PLAY [Test 02] ************************************************************************************************************************
TASK [debug] **************************************************************************************************************************
ok: [localhost] => {
"msg": "test 02"
}
TASK [fail] ***************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "end here"}
PLAY RECAP ****************************************************************************************************************************
localhost : ok=4 changed=0 unreachable=0 failed=1 skipped=0 rescued=1 ignored=0
```
**Bug:**
_test.yml (gives an error, stops the next plays, ends the playbook successfully, as explained above):_
```yaml
- wrong.collection.module:
param: "value"
```
_Outputs:_
```ini
PLAY [Test 01] ************************************************************************************************************************
TASK [include_tasks] ******************************************************************************************************************
fatal: [localhost]: FAILED! => {"reason": "couldn't resolve module/action 'wrong.collection.module'. This often indicates a misspelling, missing collection, or incorrect module path.\n\nThe error appears to be in '/main/dev/repos/cloud/test.yml': line 1, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- wrong.collection.module:\n ^ here\n"}
TASK [debug] **************************************************************************************************************************
ok: [localhost] => {
"msg": "rescued"
}
TASK [debug] **************************************************************************************************************************
ok: [localhost] => {
"msg": "test 01"
}
PLAY RECAP ****************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
##### EXPECTED RESULTS
The same as in the case that works.
##### ACTUAL RESULTS
The next plays stop, even after the error is rescued, and the playbook ends successfully (if I run `ansible-playbook playbook.yml && echo "success" || echo "error"` it gives an error (due to the `fail` task at the end) in the 1st case (that works), but success in the 2nd, and the 2nd play doesn't run).
|
https://github.com/ansible/ansible/issues/73657
|
https://github.com/ansible/ansible/pull/73722
|
d23226a6f4737fb17b7b1be0d09f39e249832c7f
|
47ee28222794010ec447b7b6d73189eed1b46581
| 2021-02-19T03:01:38Z |
python
| 2021-11-01T15:05:39Z |
test/integration/targets/include_import/runme.sh
|
#!/usr/bin/env bash
set -eux
export ANSIBLE_ROLES_PATH=./roles
function gen_task_files() {
for i in $(printf "%03d " {1..39}); do
echo -e "- name: Hello Message\n debug:\n msg: Task file ${i}" > "tasks/hello/tasks-file-${i}.yml"
done
}
## Adhoc
ansible -m include_role -a name=role1 localhost
## Import (static)
# Playbook
test "$(ANSIBLE_DEPRECATION_WARNINGS=1 ansible-playbook -i ../../inventory playbook/test_import_playbook.yml "$@" 2>&1 | grep -c '\[DEPRECATION WARNING\]: Additional parameters in import_playbook')" = 1
ANSIBLE_STRATEGY='linear' ansible-playbook playbook/test_import_playbook_tags.yml -i inventory "$@" --tags canary1,canary22,validate --skip-tags skipme
# Tasks
ANSIBLE_STRATEGY='linear' ansible-playbook tasks/test_import_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook tasks/test_import_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook tasks/test_import_tasks_tags.yml -i inventory "$@" --tags tasks1,canary1,validate
# Role
ANSIBLE_STRATEGY='linear' ansible-playbook role/test_import_role.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook role/test_import_role.yml -i inventory "$@"
## Include (dynamic)
# Tasks
ANSIBLE_STRATEGY='linear' ansible-playbook tasks/test_include_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook tasks/test_include_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook tasks/test_include_tasks_tags.yml -i inventory "$@" --tags tasks1,canary1,validate
# Role
ANSIBLE_STRATEGY='linear' ansible-playbook role/test_include_role.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook role/test_include_role.yml -i inventory "$@"
# https://github.com/ansible/ansible/issues/68515
ansible-playbook -v role/test_include_role_vars_from.yml 2>&1 | tee test_include_role_vars_from.out
test "$(grep -E -c 'Expected a string for vars_from but got' test_include_role_vars_from.out)" = 1
## Max Recursion Depth
# https://github.com/ansible/ansible/issues/23609
ANSIBLE_STRATEGY='linear' ansible-playbook test_role_recursion.yml -i inventory "$@"
ANSIBLE_STRATEGY='linear' ansible-playbook test_role_recursion_fqcn.yml -i inventory "$@"
## Nested tasks
# https://github.com/ansible/ansible/issues/34782
ANSIBLE_STRATEGY='linear' ansible-playbook test_nested_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='linear' ansible-playbook test_nested_tasks_fqcn.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook test_nested_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook test_nested_tasks_fqcn.yml -i inventory "$@"
## Tons of top level include_tasks
# https://github.com/ansible/ansible/issues/36053
# Fixed by https://github.com/ansible/ansible/pull/36075
gen_task_files
ANSIBLE_STRATEGY='linear' ansible-playbook test_copious_include_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='linear' ansible-playbook test_copious_include_tasks_fqcn.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook test_copious_include_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook test_copious_include_tasks_fqcn.yml -i inventory "$@"
rm -f tasks/hello/*.yml
# Included tasks should inherit attrs from non-dynamic blocks in parent chain
# https://github.com/ansible/ansible/pull/38827
ANSIBLE_STRATEGY='linear' ansible-playbook test_grandparent_inheritance.yml -i inventory "$@"
ANSIBLE_STRATEGY='linear' ansible-playbook test_grandparent_inheritance_fqcn.yml -i inventory "$@"
# undefined_var
ANSIBLE_STRATEGY='linear' ansible-playbook undefined_var/playbook.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook undefined_var/playbook.yml -i inventory "$@"
# include_ + apply (explicit inheritance)
ANSIBLE_STRATEGY='linear' ansible-playbook apply/include_apply.yml -i inventory "$@" --tags foo
set +e
OUT=$(ANSIBLE_STRATEGY='linear' ansible-playbook apply/import_apply.yml -i inventory "$@" --tags foo 2>&1 | grep 'ERROR! Invalid options for import_tasks: apply')
set -e
if [[ -z "$OUT" ]]; then
echo "apply on import_tasks did not cause error"
exit 1
fi
ANSIBLE_STRATEGY='linear' ANSIBLE_PLAYBOOK_VARS_ROOT=all ansible-playbook apply/include_apply_65710.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ANSIBLE_PLAYBOOK_VARS_ROOT=all ansible-playbook apply/include_apply_65710.yml -i inventory "$@"
# Test that duplicate items in loop are not deduped
ANSIBLE_STRATEGY='linear' ansible-playbook tasks/test_include_dupe_loop.yml -i inventory "$@" | tee test_include_dupe_loop.out
test "$(grep -c '"item=foo"' test_include_dupe_loop.out)" = 3
ANSIBLE_STRATEGY='free' ansible-playbook tasks/test_include_dupe_loop.yml -i inventory "$@" | tee test_include_dupe_loop.out
test "$(grep -c '"item=foo"' test_include_dupe_loop.out)" = 3
ansible-playbook public_exposure/playbook.yml -i inventory "$@"
ansible-playbook public_exposure/no_bleeding.yml -i inventory "$@"
ansible-playbook public_exposure/no_overwrite_roles.yml -i inventory "$@"
# https://github.com/ansible/ansible/pull/48068
ANSIBLE_HOST_PATTERN_MISMATCH=warning ansible-playbook run_once/playbook.yml "$@"
# https://github.com/ansible/ansible/issues/48936
ansible-playbook -v handler_addressing/playbook.yml 2>&1 | tee test_handler_addressing.out
test "$(grep -E -c 'include handler task|ERROR! The requested handler '"'"'do_import'"'"' was not found' test_handler_addressing.out)" = 2
# https://github.com/ansible/ansible/issues/49969
ansible-playbook -v parent_templating/playbook.yml 2>&1 | tee test_parent_templating.out
test "$(grep -E -c 'Templating the path of the parent include_tasks failed.' test_parent_templating.out)" = 0
# https://github.com/ansible/ansible/issues/54618
ansible-playbook test_loop_var_bleed.yaml "$@"
# https://github.com/ansible/ansible/issues/56580
ansible-playbook valid_include_keywords/playbook.yml "$@"
# https://github.com/ansible/ansible/issues/64902
ansible-playbook tasks/test_allow_single_role_dup.yml 2>&1 | tee test_allow_single_role_dup.out
test "$(grep -c 'ok=3' test_allow_single_role_dup.out)" = 1
# https://github.com/ansible/ansible/issues/66764
ANSIBLE_HOST_PATTERN_MISMATCH=error ansible-playbook empty_group_warning/playbook.yml
ansible-playbook test_include_loop.yml "$@"
ansible-playbook test_include_loop_fqcn.yml "$@"
ansible-playbook include_role_omit/playbook.yml "$@"
# Test templating import_playbook, import_tasks, and import_role files
ansible-playbook playbook/test_templated_filenames.yml -e "pb=validate_templated_playbook.yml tasks=validate_templated_tasks.yml tasks_from=templated.yml" "$@" | tee out.txt
cat out.txt
test "$(grep out.txt -ce 'In imported playbook')" = 2
test "$(grep out.txt -ce 'In imported tasks')" = 3
test "$(grep out.txt -ce 'In imported role')" = 3
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,657 |
Block with rescue stops next plays and end playbook successfully on some errors
|
##### SUMMARY
Depending on the error that occurs when a block with a rescue catches it, the play execution stops and the playbook ends successfully, but the next plays don't run. I was able to reproduce it using `include_tasks`, where the included task uses a module with a wrong name (which can be provided by a user, so it is not something I control: it might not be me running the play, and the user may include and run custom tasks).
This is not a problem if the module with the wrong name is in the main playbook file, because then the playbook fails at compile time; but when it is lazily loaded through an `include_tasks`, the task fails and the `rescue` (wrapping the `include_tasks`) catches the error successfully, yet the next plays don't run and the playbook ends successfully.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
block
##### ANSIBLE VERSION
```paste below
ansible 2.10.5
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.9 (default, Oct 8 2020, 12:12:24) [GCC 8.4.0]
```
##### CONFIGURATION
```paste below
```
(`ansible-config dump --only-changed` outputs nothing)
##### OS / ENVIRONMENT
Ubuntu 18.04 (docker container)
##### STEPS TO REPRODUCE
_hosts.yml:_
```yaml
[main]
localhost ansible_connection=local
```
_playbook.yml:_
```yaml
- name: Test 01
hosts: main
tasks:
- block:
- include_tasks: "test.yml"
rescue:
- debug:
msg: "rescued"
- debug:
msg: "test 01"
- name: Test 02
hosts: main
tasks:
- debug:
msg: "test 02"
- fail:
msg: "end here"
```
**Works:**
_test.yml (gives an error, but works as expected):_
```yaml
- ansible.builtin.file:
param: "value"
```
_Outputs:_
```ini
PLAY [Test 01] ************************************************************************************************************************
TASK [include_tasks] ******************************************************************************************************************
included: /main/dev/repos/cloud/test.yml for localhost
TASK [ansible.builtin.file] ***********************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Unsupported parameters for (ansible.builtin.file) module: param Supported parameters include: _diff_peek, _original_basename, access_time, access_time_format, attributes, follow, force, group, mode, modification_time, modification_time_format, owner, path, recurse, selevel, serole, setype, seuser, src, state, unsafe_writes"}
TASK [debug] **************************************************************************************************************************
ok: [localhost] => {
"msg": "rescued"
}
TASK [debug] **************************************************************************************************************************
ok: [localhost] => {
"msg": "test 01"
}
PLAY [Test 02] ************************************************************************************************************************
TASK [debug] **************************************************************************************************************************
ok: [localhost] => {
"msg": "test 02"
}
TASK [fail] ***************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "end here"}
PLAY RECAP ****************************************************************************************************************************
localhost : ok=4 changed=0 unreachable=0 failed=1 skipped=0 rescued=1 ignored=0
```
**Bug:**
_test.yml (gives an error, stops the next plays, ends the playbook successfully, as explained above):_
```yaml
- wrong.collection.module:
param: "value"
```
_Outputs:_
```ini
PLAY [Test 01] ************************************************************************************************************************
TASK [include_tasks] ******************************************************************************************************************
fatal: [localhost]: FAILED! => {"reason": "couldn't resolve module/action 'wrong.collection.module'. This often indicates a misspelling, missing collection, or incorrect module path.\n\nThe error appears to be in '/main/dev/repos/cloud/test.yml': line 1, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- wrong.collection.module:\n ^ here\n"}
TASK [debug] **************************************************************************************************************************
ok: [localhost] => {
"msg": "rescued"
}
TASK [debug] **************************************************************************************************************************
ok: [localhost] => {
"msg": "test 01"
}
PLAY RECAP ****************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
##### EXPECTED RESULTS
The same as in the case that works.
##### ACTUAL RESULTS
The next plays stop, even after the error is rescued, and the playbook ends successfully: running `ansible-playbook playbook.yml && echo "success" || echo "error"` prints an error (due to the `fail` task at the end) in the 1st case (which works), but prints success in the 2nd case and never runs the 2nd play.
|
https://github.com/ansible/ansible/issues/73657
|
https://github.com/ansible/ansible/pull/73722
|
d23226a6f4737fb17b7b1be0d09f39e249832c7f
|
47ee28222794010ec447b7b6d73189eed1b46581
| 2021-02-19T03:01:38Z |
python
| 2021-11-01T15:05:39Z |
test/units/plugins/strategy/test_strategy.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from units.mock.loader import DictDataLoader
import uuid
from units.compat import unittest
from units.compat.mock import patch, MagicMock
from ansible.executor.process.worker import WorkerProcess
from ansible.executor.task_queue_manager import TaskQueueManager
from ansible.executor.task_result import TaskResult
from ansible.inventory.host import Host
from ansible.module_utils.six.moves import queue as Queue
from ansible.playbook.handler import Handler
from ansible.plugins.strategy import StrategyBase
import pytest
pytestmark = pytest.mark.skipif(True, reason="Temporarily disabled due to fragile tests that need to be rewritten")
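# These tests stub the TQM's multiprocessing final queue with a plain list plus
# _queue_empty/_queue_get/_queue_put side effects, so StrategyBase can be
# exercised without spawning real worker processes.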
class TestStrategyBase(unittest.TestCase):
def test_strategy_base_init(self):
queue_items = []
def _queue_empty(*args, **kwargs):
return len(queue_items) == 0
def _queue_get(*args, **kwargs):
if len(queue_items) == 0:
raise Queue.Empty
else:
return queue_items.pop()
def _queue_put(item, *args, **kwargs):
queue_items.append(item)
mock_queue = MagicMock()
mock_queue.empty.side_effect = _queue_empty
mock_queue.get.side_effect = _queue_get
mock_queue.put.side_effect = _queue_put
mock_tqm = MagicMock(TaskQueueManager)
mock_tqm._final_q = mock_queue
mock_tqm._workers = []
strategy_base = StrategyBase(tqm=mock_tqm)
strategy_base.cleanup()
def test_strategy_base_run(self):
queue_items = []
def _queue_empty(*args, **kwargs):
return len(queue_items) == 0
def _queue_get(*args, **kwargs):
if len(queue_items) == 0:
raise Queue.Empty
else:
return queue_items.pop()
def _queue_put(item, *args, **kwargs):
queue_items.append(item)
mock_queue = MagicMock()
mock_queue.empty.side_effect = _queue_empty
mock_queue.get.side_effect = _queue_get
mock_queue.put.side_effect = _queue_put
mock_tqm = MagicMock(TaskQueueManager)
mock_tqm._final_q = mock_queue
mock_tqm._stats = MagicMock()
mock_tqm.send_callback.return_value = None
for attr in ('RUN_OK', 'RUN_ERROR', 'RUN_FAILED_HOSTS', 'RUN_UNREACHABLE_HOSTS'):
setattr(mock_tqm, attr, getattr(TaskQueueManager, attr))
mock_iterator = MagicMock()
mock_iterator._play = MagicMock()
mock_iterator._play.handlers = []
mock_play_context = MagicMock()
mock_tqm._failed_hosts = dict()
mock_tqm._unreachable_hosts = dict()
mock_tqm._workers = []
strategy_base = StrategyBase(tqm=mock_tqm)
mock_host = MagicMock()
mock_host.name = 'host1'
self.assertEqual(strategy_base.run(iterator=mock_iterator, play_context=mock_play_context), mock_tqm.RUN_OK)
self.assertEqual(strategy_base.run(iterator=mock_iterator, play_context=mock_play_context, result=TaskQueueManager.RUN_ERROR), mock_tqm.RUN_ERROR)
mock_tqm._failed_hosts = dict(host1=True)
mock_iterator.get_failed_hosts.return_value = [mock_host]
self.assertEqual(strategy_base.run(iterator=mock_iterator, play_context=mock_play_context, result=False), mock_tqm.RUN_FAILED_HOSTS)
mock_tqm._unreachable_hosts = dict(host1=True)
mock_iterator.get_failed_hosts.return_value = []
self.assertEqual(strategy_base.run(iterator=mock_iterator, play_context=mock_play_context, result=False), mock_tqm.RUN_UNREACHABLE_HOSTS)
strategy_base.cleanup()
def test_strategy_base_get_hosts(self):
queue_items = []
def _queue_empty(*args, **kwargs):
return len(queue_items) == 0
def _queue_get(*args, **kwargs):
if len(queue_items) == 0:
raise Queue.Empty
else:
return queue_items.pop()
def _queue_put(item, *args, **kwargs):
queue_items.append(item)
mock_queue = MagicMock()
mock_queue.empty.side_effect = _queue_empty
mock_queue.get.side_effect = _queue_get
mock_queue.put.side_effect = _queue_put
mock_hosts = []
for i in range(0, 5):
mock_host = MagicMock()
mock_host.name = "host%02d" % (i + 1)
mock_host.has_hostkey = True
mock_hosts.append(mock_host)
mock_hosts_names = [h.name for h in mock_hosts]
mock_inventory = MagicMock()
mock_inventory.get_hosts.return_value = mock_hosts
mock_tqm = MagicMock()
mock_tqm._final_q = mock_queue
mock_tqm.get_inventory.return_value = mock_inventory
mock_play = MagicMock()
mock_play.hosts = ["host%02d" % (i + 1) for i in range(0, 5)]
strategy_base = StrategyBase(tqm=mock_tqm)
strategy_base._hosts_cache = strategy_base._hosts_cache_all = mock_hosts_names
mock_tqm._failed_hosts = []
mock_tqm._unreachable_hosts = []
self.assertEqual(strategy_base.get_hosts_remaining(play=mock_play), [h.name for h in mock_hosts])
mock_tqm._failed_hosts = ["host01"]
self.assertEqual(strategy_base.get_hosts_remaining(play=mock_play), [h.name for h in mock_hosts[1:]])
self.assertEqual(strategy_base.get_failed_hosts(play=mock_play), [mock_hosts[0].name])
mock_tqm._unreachable_hosts = ["host02"]
self.assertEqual(strategy_base.get_hosts_remaining(play=mock_play), [h.name for h in mock_hosts[2:]])
strategy_base.cleanup()
@patch.object(WorkerProcess, 'run')
def test_strategy_base_queue_task(self, mock_worker):
def fake_run(self):
return
mock_worker.run.side_effect = fake_run
fake_loader = DictDataLoader()
mock_var_manager = MagicMock()
mock_host = MagicMock()
mock_host.get_vars.return_value = dict()
mock_host.has_hostkey = True
mock_inventory = MagicMock()
mock_inventory.get.return_value = mock_host
tqm = TaskQueueManager(
inventory=mock_inventory,
variable_manager=mock_var_manager,
loader=fake_loader,
passwords=None,
forks=3,
)
tqm._initialize_processes(3)
tqm.hostvars = dict()
mock_task = MagicMock()
mock_task._uuid = 'abcd'
mock_task.throttle = 0
try:
strategy_base = StrategyBase(tqm=tqm)
strategy_base._queue_task(host=mock_host, task=mock_task, task_vars=dict(), play_context=MagicMock())
self.assertEqual(strategy_base._cur_worker, 1)
self.assertEqual(strategy_base._pending_results, 1)
strategy_base._queue_task(host=mock_host, task=mock_task, task_vars=dict(), play_context=MagicMock())
self.assertEqual(strategy_base._cur_worker, 2)
self.assertEqual(strategy_base._pending_results, 2)
strategy_base._queue_task(host=mock_host, task=mock_task, task_vars=dict(), play_context=MagicMock())
self.assertEqual(strategy_base._cur_worker, 0)
self.assertEqual(strategy_base._pending_results, 3)
finally:
tqm.cleanup()
def test_strategy_base_process_pending_results(self):
mock_tqm = MagicMock()
mock_tqm._terminated = False
mock_tqm._failed_hosts = dict()
mock_tqm._unreachable_hosts = dict()
mock_tqm.send_callback.return_value = None
queue_items = []
def _queue_empty(*args, **kwargs):
return len(queue_items) == 0
def _queue_get(*args, **kwargs):
if len(queue_items) == 0:
raise Queue.Empty
else:
return queue_items.pop()
def _queue_put(item, *args, **kwargs):
queue_items.append(item)
mock_queue = MagicMock()
mock_queue.empty.side_effect = _queue_empty
mock_queue.get.side_effect = _queue_get
mock_queue.put.side_effect = _queue_put
mock_tqm._final_q = mock_queue
mock_tqm._stats = MagicMock()
mock_tqm._stats.increment.return_value = None
mock_play = MagicMock()
mock_host = MagicMock()
mock_host.name = 'test01'
mock_host.vars = dict()
mock_host.get_vars.return_value = dict()
mock_host.has_hostkey = True
mock_task = MagicMock()
mock_task._role = None
mock_task._parent = None
mock_task.ignore_errors = False
mock_task.ignore_unreachable = False
mock_task._uuid = str(uuid.uuid4())
mock_task.loop = None
mock_task.copy.return_value = mock_task
mock_handler_task = Handler()
mock_handler_task.name = 'test handler'
mock_handler_task.action = 'foo'
mock_handler_task._parent = None
mock_handler_task._uuid = 'xxxxxxxxxxxxx'
mock_iterator = MagicMock()
mock_iterator._play = mock_play
mock_iterator.mark_host_failed.return_value = None
mock_iterator.get_next_task_for_host.return_value = (None, None)
mock_handler_block = MagicMock()
mock_handler_block.block = [mock_handler_task]
mock_handler_block.rescue = []
mock_handler_block.always = []
mock_play.handlers = [mock_handler_block]
mock_group = MagicMock()
mock_group.add_host.return_value = None
def _get_host(host_name):
if host_name == 'test01':
return mock_host
return None
def _get_group(group_name):
if group_name in ('all', 'foo'):
return mock_group
return None
mock_inventory = MagicMock()
mock_inventory._hosts_cache = dict()
mock_inventory.hosts.return_value = mock_host
mock_inventory.get_host.side_effect = _get_host
mock_inventory.get_group.side_effect = _get_group
mock_inventory.clear_pattern_cache.return_value = None
mock_inventory.get_host_vars.return_value = {}
mock_inventory.hosts.get.return_value = mock_host
mock_var_mgr = MagicMock()
mock_var_mgr.set_host_variable.return_value = None
mock_var_mgr.set_host_facts.return_value = None
mock_var_mgr.get_vars.return_value = dict()
strategy_base = StrategyBase(tqm=mock_tqm)
strategy_base._inventory = mock_inventory
strategy_base._variable_manager = mock_var_mgr
strategy_base._blocked_hosts = dict()
def _has_dead_workers():
return False
strategy_base._tqm.has_dead_workers.side_effect = _has_dead_workers
results = strategy_base._wait_on_pending_results(iterator=mock_iterator)
self.assertEqual(len(results), 0)
task_result = TaskResult(host=mock_host.name, task=mock_task._uuid, return_data=dict(changed=True))
queue_items.append(task_result)
strategy_base._blocked_hosts['test01'] = True
strategy_base._pending_results = 1
def mock_queued_task_cache():
return {
(mock_host.name, mock_task._uuid): {
'task': mock_task,
'host': mock_host,
'task_vars': {},
'play_context': {},
}
}
strategy_base._queued_task_cache = mock_queued_task_cache()
results = strategy_base._wait_on_pending_results(iterator=mock_iterator)
self.assertEqual(len(results), 1)
self.assertEqual(results[0], task_result)
self.assertEqual(strategy_base._pending_results, 0)
self.assertNotIn('test01', strategy_base._blocked_hosts)
task_result = TaskResult(host=mock_host.name, task=mock_task._uuid, return_data='{"failed":true}')
queue_items.append(task_result)
strategy_base._blocked_hosts['test01'] = True
strategy_base._pending_results = 1
mock_iterator.is_failed.return_value = True
strategy_base._queued_task_cache = mock_queued_task_cache()
results = strategy_base._wait_on_pending_results(iterator=mock_iterator)
self.assertEqual(len(results), 1)
self.assertEqual(results[0], task_result)
self.assertEqual(strategy_base._pending_results, 0)
self.assertNotIn('test01', strategy_base._blocked_hosts)
# self.assertIn('test01', mock_tqm._failed_hosts)
# del mock_tqm._failed_hosts['test01']
mock_iterator.is_failed.return_value = False
task_result = TaskResult(host=mock_host.name, task=mock_task._uuid, return_data='{"unreachable": true}')
queue_items.append(task_result)
strategy_base._blocked_hosts['test01'] = True
strategy_base._pending_results = 1
strategy_base._queued_task_cache = mock_queued_task_cache()
results = strategy_base._wait_on_pending_results(iterator=mock_iterator)
self.assertEqual(len(results), 1)
self.assertEqual(results[0], task_result)
self.assertEqual(strategy_base._pending_results, 0)
self.assertNotIn('test01', strategy_base._blocked_hosts)
self.assertIn('test01', mock_tqm._unreachable_hosts)
del mock_tqm._unreachable_hosts['test01']
task_result = TaskResult(host=mock_host.name, task=mock_task._uuid, return_data='{"skipped": true}')
queue_items.append(task_result)
strategy_base._blocked_hosts['test01'] = True
strategy_base._pending_results = 1
strategy_base._queued_task_cache = mock_queued_task_cache()
results = strategy_base._wait_on_pending_results(iterator=mock_iterator)
self.assertEqual(len(results), 1)
self.assertEqual(results[0], task_result)
self.assertEqual(strategy_base._pending_results, 0)
self.assertNotIn('test01', strategy_base._blocked_hosts)
queue_items.append(TaskResult(host=mock_host.name, task=mock_task._uuid, return_data=dict(add_host=dict(host_name='newhost01', new_groups=['foo']))))
strategy_base._blocked_hosts['test01'] = True
strategy_base._pending_results = 1
strategy_base._queued_task_cache = mock_queued_task_cache()
results = strategy_base._wait_on_pending_results(iterator=mock_iterator)
self.assertEqual(len(results), 1)
self.assertEqual(strategy_base._pending_results, 0)
self.assertNotIn('test01', strategy_base._blocked_hosts)
queue_items.append(TaskResult(host=mock_host.name, task=mock_task._uuid, return_data=dict(add_group=dict(group_name='foo'))))
strategy_base._blocked_hosts['test01'] = True
strategy_base._pending_results = 1
strategy_base._queued_task_cache = mock_queued_task_cache()
results = strategy_base._wait_on_pending_results(iterator=mock_iterator)
self.assertEqual(len(results), 1)
self.assertEqual(strategy_base._pending_results, 0)
self.assertNotIn('test01', strategy_base._blocked_hosts)
queue_items.append(TaskResult(host=mock_host.name, task=mock_task._uuid, return_data=dict(changed=True, _ansible_notify=['test handler'])))
strategy_base._blocked_hosts['test01'] = True
strategy_base._pending_results = 1
strategy_base._queued_task_cache = mock_queued_task_cache()
results = strategy_base._wait_on_pending_results(iterator=mock_iterator)
self.assertEqual(len(results), 1)
self.assertEqual(strategy_base._pending_results, 0)
self.assertNotIn('test01', strategy_base._blocked_hosts)
self.assertTrue(mock_handler_task.is_host_notified(mock_host))
# queue_items.append(('set_host_var', mock_host, mock_task, None, 'foo', 'bar'))
# results = strategy_base._process_pending_results(iterator=mock_iterator)
# self.assertEqual(len(results), 0)
# self.assertEqual(strategy_base._pending_results, 1)
# queue_items.append(('set_host_facts', mock_host, mock_task, None, 'foo', dict()))
# results = strategy_base._process_pending_results(iterator=mock_iterator)
# self.assertEqual(len(results), 0)
# self.assertEqual(strategy_base._pending_results, 1)
# queue_items.append(('bad'))
# self.assertRaises(AnsibleError, strategy_base._process_pending_results, iterator=mock_iterator)
strategy_base.cleanup()
def test_strategy_base_load_included_file(self):
fake_loader = DictDataLoader({
"test.yml": """
- debug: msg='foo'
""",
"bad.yml": """
""",
})
queue_items = []
def _queue_empty(*args, **kwargs):
return len(queue_items) == 0
def _queue_get(*args, **kwargs):
if len(queue_items) == 0:
raise Queue.Empty
else:
return queue_items.pop()
def _queue_put(item, *args, **kwargs):
queue_items.append(item)
mock_queue = MagicMock()
mock_queue.empty.side_effect = _queue_empty
mock_queue.get.side_effect = _queue_get
mock_queue.put.side_effect = _queue_put
mock_tqm = MagicMock()
mock_tqm._final_q = mock_queue
strategy_base = StrategyBase(tqm=mock_tqm)
strategy_base._loader = fake_loader
strategy_base.cleanup()
mock_play = MagicMock()
mock_block = MagicMock()
mock_block._play = mock_play
mock_block.vars = dict()
mock_task = MagicMock()
mock_task._block = mock_block
mock_task._role = None
mock_task._parent = None
mock_iterator = MagicMock()
mock_iterator.mark_host_failed.return_value = None
mock_inc_file = MagicMock()
mock_inc_file._task = mock_task
mock_inc_file._filename = "test.yml"
res = strategy_base._load_included_file(included_file=mock_inc_file, iterator=mock_iterator)
mock_inc_file._filename = "bad.yml"
res = strategy_base._load_included_file(included_file=mock_inc_file, iterator=mock_iterator)
self.assertEqual(res, [])
@patch.object(WorkerProcess, 'run')
def test_strategy_base_run_handlers(self, mock_worker):
def fake_run(*args):
return
mock_worker.side_effect = fake_run
mock_play_context = MagicMock()
mock_handler_task = Handler()
mock_handler_task.action = 'foo'
mock_handler_task.cached_name = False
mock_handler_task.name = "test handler"
mock_handler_task.listen = []
mock_handler_task._role = None
mock_handler_task._parent = None
mock_handler_task._uuid = 'xxxxxxxxxxxxxxxx'
mock_handler = MagicMock()
mock_handler.block = [mock_handler_task]
mock_handler.flag_for_host.return_value = False
mock_play = MagicMock()
mock_play.handlers = [mock_handler]
mock_host = MagicMock(Host)
mock_host.name = "test01"
mock_host.has_hostkey = True
mock_inventory = MagicMock()
mock_inventory.get_hosts.return_value = [mock_host]
mock_inventory.get.return_value = mock_host
mock_inventory.get_host.return_value = mock_host
mock_var_mgr = MagicMock()
mock_var_mgr.get_vars.return_value = dict()
mock_iterator = MagicMock()
mock_iterator._play = mock_play
fake_loader = DictDataLoader()
tqm = TaskQueueManager(
inventory=mock_inventory,
variable_manager=mock_var_mgr,
loader=fake_loader,
passwords=None,
forks=5,
)
tqm._initialize_processes(3)
tqm.hostvars = dict()
try:
strategy_base = StrategyBase(tqm=tqm)
strategy_base._inventory = mock_inventory
task_result = TaskResult(mock_host.name, mock_handler_task._uuid, dict(changed=False))
strategy_base._queued_task_cache = dict()
strategy_base._queued_task_cache[(mock_host.name, mock_handler_task._uuid)] = {
'task': mock_handler_task,
'host': mock_host,
'task_vars': {},
'play_context': mock_play_context
}
tqm._final_q.put(task_result)
result = strategy_base.run_handlers(iterator=mock_iterator, play_context=mock_play_context)
finally:
strategy_base.cleanup()
tqm.cleanup()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,938 |
Ignore missing/failed vault ids in vault_identity_list
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Our ansible.cfg lists all well-known keys, but ansible-vault fails to run if any of the keys are missing. We only publish, to our CI and to each user environment, the keys that are appropriate for that stack. This means an operations person cannot run ansible without having access to every known environment's keys. To us, this makes the use of multiple vault ids not all that useful.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-vault
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.7.10
config file = /Users/dkimsey/ansible/ansible.cfg
configured module search path = ['/Users/dkimsey/ansible/library']
ansible python module location = /Users/dkimsey/.virtualenvs/ansible-2.7/lib/python3.7/site-packages/ansible
executable location = /Users/dkimsey/.virtualenvs/ansible-2.7/bin/ansible
python version = 3.7.3 (default, May 1 2019, 10:48:04) [Clang 10.0.0 (clang-1000.11.45.5)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
DEFAULT_VAULT_IDENTITY_LIST(/Users/dkimsey/ansible/ansible.cfg) = ['qa@~/.vault-qa', 'stg@~/.vault-staging', 'prod@~/.vault-production']
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
n/a
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```
$ echo "my-secret-key" > ~/.vault-staging
$ ansible-vault encrypt_string --vault-id=stg --name client
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I'd expect it to skip missing vault keys, and perhaps log in verbose mode that it tried to access a key file and failed.
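A minimal sketch of the desired behavior (a hypothetical helper, not the actual fix; only `get_file_vault_secret` and `display.warning` are real Ansible APIs):
```python
# Hypothetical sketch -- assumes secrets coming from DEFAULT_VAULT_IDENTITY_LIST
# may be skipped with a warning when the password file is missing, instead of
# aborting the whole run.
import os

from ansible.parsing.vault import get_file_vault_secret

def load_configured_vault_secret(filename, vault_id, loader, display):
    if not os.path.exists(os.path.expanduser(filename)):
        display.warning("Skipping vault id '%s': password file %s not found" % (vault_id, filename))
        return None
    secret = get_file_vault_secret(filename=filename, vault_id=vault_id, loader=loader)
    secret.load()  # may still raise for unreadable or invalid files
    return secret
```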
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```
ansible-vault 2.7.10
config file = /Users/dkimsey/ansible/ansible.cfg
configured module search path = ['/Users/dkimsey/ansible/library']
ansible python module location = /Users/dkimsey/.virtualenvs/ansible-2.7/lib/python3.7/site-packages/ansible
executable location = /Users/dkimsey/.virtualenvs/ansible-2.7/bin/ansible-vault
python version = 3.7.3 (default, May 1 2019, 10:48:04) [Clang 10.0.0 (clang-1000.11.45.5)]
Using /Users/dkimsey/ansible/ansible.cfg as config file
ERROR! The vault password file /Users/dkimsey/.vault-qa was not found
```
|
https://github.com/ansible/ansible/issues/58938
|
https://github.com/ansible/ansible/pull/75540
|
47ee28222794010ec447b7b6d73189eed1b46581
|
8bbecc7cac96203fedf99953a06eb32d74c7a4d7
| 2019-07-10T17:25:50Z |
python
| 2021-11-01T15:15:50Z |
changelogs/fragments/75540-warn-invalid-configured-vault-secret-files.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,938 |
Ignore missing/failed vault ids in vault_identity_list
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Our ansible.cfg lists all well-known keys, but ansible-vault fails to run if any of the keys are missing. We only publish, to our CI and to each user environment, the keys that are appropriate for that stack. This means an operations person cannot run ansible without having access to every known environment's keys. To us, this makes the use of multiple vault ids not all that useful.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-vault
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.7.10
config file = /Users/dkimsey/ansible/ansible.cfg
configured module search path = ['/Users/dkimsey/ansible/library']
ansible python module location = /Users/dkimsey/.virtualenvs/ansible-2.7/lib/python3.7/site-packages/ansible
executable location = /Users/dkimsey/.virtualenvs/ansible-2.7/bin/ansible
python version = 3.7.3 (default, May 1 2019, 10:48:04) [Clang 10.0.0 (clang-1000.11.45.5)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
DEFAULT_VAULT_IDENTITY_LIST(/Users/dkimsey/ansible/ansible.cfg) = ['qa@~/.vault-qa', 'stg@~/.vault-staging', 'prod@~/.vault-production']
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
n/a
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```
$ echo "my-secret-key" > ~/.vault-staging
$ ansible-vault encrypt_string --vault-id=stg --name client
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I'd expect it to skip missing vault keys, and perhaps log in verbose mode that it tried to access a key file and failed.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```
ansible-vault 2.7.10
config file = /Users/dkimsey/ansible/ansible.cfg
configured module search path = ['/Users/dkimsey/ansible/library']
ansible python module location = /Users/dkimsey/.virtualenvs/ansible-2.7/lib/python3.7/site-packages/ansible
executable location = /Users/dkimsey/.virtualenvs/ansible-2.7/bin/ansible-vault
python version = 3.7.3 (default, May 1 2019, 10:48:04) [Clang 10.0.0 (clang-1000.11.45.5)]
Using /Users/dkimsey/ansible/ansible.cfg as config file
ERROR! The vault password file /Users/dkimsey/.vault-qa was not found
```
|
https://github.com/ansible/ansible/issues/58938
|
https://github.com/ansible/ansible/pull/75540
|
47ee28222794010ec447b7b6d73189eed1b46581
|
8bbecc7cac96203fedf99953a06eb32d74c7a4d7
| 2019-07-10T17:25:50Z |
python
| 2021-11-01T15:15:50Z |
lib/ansible/cli/__init__.py
|
# Copyright: (c) 2012-2014, Michael DeHaan <[email protected]>
# Copyright: (c) 2016, Toshio Kuratomi <[email protected]>
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import sys
# Used for determining if the system is running a new enough python version
# and should only restrict on our documented minimum versions
if sys.version_info < (3, 8):
raise SystemExit(
'ERROR: Ansible requires Python 3.8 or newer on the controller. '
'Current version: %s' % ''.join(sys.version.splitlines())
)
from importlib.metadata import version
from ansible.module_utils.compat.version import LooseVersion
# Used for determining if the system is running a new enough Jinja2 version
# and should only restrict on our documented minimum versions
jinja2_version = version('jinja2')
if jinja2_version < LooseVersion('3.0'):
raise SystemExit(
'ERROR: Ansible requires Jinja2 3.0 or newer on the controller. '
'Current version: %s' % jinja2_version
)
import errno
import getpass
import os
import subprocess
import traceback
from abc import ABC, abstractmethod
from pathlib import Path
try:
from ansible import constants as C
from ansible.utils.display import Display, initialize_locale
initialize_locale()
display = Display()
except Exception as e:
print('ERROR: %s' % e, file=sys.stderr)
sys.exit(5)
from ansible import context
from ansible.cli.arguments import option_helpers as opt_help
from ansible.errors import AnsibleError, AnsibleOptionsError, AnsibleParserError
from ansible.inventory.manager import InventoryManager
from ansible.module_utils.six import string_types
from ansible.module_utils._text import to_bytes, to_text
from ansible.parsing.dataloader import DataLoader
from ansible.parsing.vault import PromptVaultSecret, get_file_vault_secret
from ansible.plugins.loader import add_all_plugin_dirs
from ansible.release import __version__
from ansible.utils.collection_loader import AnsibleCollectionConfig
from ansible.utils.collection_loader._collection_finder import _get_collection_name_from_path
from ansible.utils.path import unfrackpath
from ansible.utils.unsafe_proxy import to_unsafe_text
from ansible.vars.manager import VariableManager
try:
import argcomplete
HAS_ARGCOMPLETE = True
except ImportError:
HAS_ARGCOMPLETE = False
class CLI(ABC):
''' code behind bin/ansible* programs '''
PAGER = 'less'
# -F (quit-if-one-screen) -R (allow raw ansi control chars)
# -S (chop long lines) -X (disable termcap init and de-init)
LESS_OPTS = 'FRSX'
SKIP_INVENTORY_DEFAULTS = False
def __init__(self, args, callback=None):
"""
Base init method for all command line programs
"""
if not args:
raise ValueError('A non-empty list for args is required')
self.args = args
self.parser = None
self.callback = callback
if C.DEVEL_WARNING and __version__.endswith('dev0'):
display.warning(
'You are running the development version of Ansible. You should only run Ansible from "devel" if '
'you are modifying the Ansible engine, or trying out features under development. This is a rapidly '
'changing source of code and can become unstable at any point.'
)
@abstractmethod
def run(self):
"""Run the ansible command
Subclasses must implement this method. It does the actual work of
running an Ansible command.
"""
self.parse()
display.vv(to_text(opt_help.version(self.parser.prog)))
if C.CONFIG_FILE:
display.v(u"Using %s as config file" % to_text(C.CONFIG_FILE))
else:
display.v(u"No config file found; using defaults")
# warn about deprecated config options
for deprecated in C.config.DEPRECATED:
name = deprecated[0]
why = deprecated[1]['why']
if 'alternatives' in deprecated[1]:
alt = ', use %s instead' % deprecated[1]['alternatives']
else:
alt = ''
ver = deprecated[1].get('version')
date = deprecated[1].get('date')
collection_name = deprecated[1].get('collection_name')
display.deprecated("%s option, %s%s" % (name, why, alt),
version=ver, date=date, collection_name=collection_name)
@staticmethod
def split_vault_id(vault_id):
# return (before_@, after_@)
# if no @, return whole string as after_
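# e.g. (illustrative ids) split_vault_id('dev@~/.vault-dev') -> ('dev', '~/.vault-dev')
#                         split_vault_id('~/.vault-dev')     -> (None, '~/.vault-dev')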
if '@' not in vault_id:
return (None, vault_id)
parts = vault_id.split('@', 1)
ret = tuple(parts)
return ret
@staticmethod
def build_vault_ids(vault_ids, vault_password_files=None,
ask_vault_pass=None, create_new_password=None,
auto_prompt=True):
vault_password_files = vault_password_files or []
vault_ids = vault_ids or []
# convert vault_password_files into vault_ids slugs
for password_file in vault_password_files:
id_slug = u'%s@%s' % (C.DEFAULT_VAULT_IDENTITY, password_file)
# note this makes --vault-id higher precedence than --vault-password-file
# if we want to intertwingle them in order probably need a cli callback to populate vault_ids
# used by --vault-id and --vault-password-file
vault_ids.append(id_slug)
# if an action needs an encrypt password (create_new_password=True) and we don't
# have other secrets set up, then automatically add a password prompt as well.
# prompts can't/shouldn't work without a tty, so don't add prompt secrets
if ask_vault_pass or (not vault_ids and auto_prompt):
id_slug = u'%s@%s' % (C.DEFAULT_VAULT_IDENTITY, u'prompt_ask_vault_pass')
vault_ids.append(id_slug)
return vault_ids
# TODO: remove the now unused args
@staticmethod
def setup_vault_secrets(loader, vault_ids, vault_password_files=None,
ask_vault_pass=None, create_new_password=False,
auto_prompt=True):
# list of tuples
vault_secrets = []
# Depending on the vault_id value (including how --ask-vault-pass / --vault-password-file create a vault_id)
# we need to show different prompts. This is for compat with older Towers that expect a
# certain vault password prompt format, so the 'prompt_ask_vault_pass' vault_id gets the old format.
prompt_formats = {}
# If there are configured default vault identities, they are considered 'first'
# so we prepend them to vault_ids (from cli) here
vault_password_files = vault_password_files or []
if C.DEFAULT_VAULT_PASSWORD_FILE:
vault_password_files.append(C.DEFAULT_VAULT_PASSWORD_FILE)
if create_new_password:
prompt_formats['prompt'] = ['New vault password (%(vault_id)s): ',
'Confirm new vault password (%(vault_id)s): ']
# 2.3 format prompts for --ask-vault-pass
prompt_formats['prompt_ask_vault_pass'] = ['New Vault password: ',
'Confirm New Vault password: ']
else:
prompt_formats['prompt'] = ['Vault password (%(vault_id)s): ']
# The format when we use just --ask-vault-pass needs to match 'Vault password:\s*?$'
prompt_formats['prompt_ask_vault_pass'] = ['Vault password: ']
vault_ids = CLI.build_vault_ids(vault_ids,
vault_password_files,
ask_vault_pass,
create_new_password,
auto_prompt=auto_prompt)
for vault_id_slug in vault_ids:
vault_id_name, vault_id_value = CLI.split_vault_id(vault_id_slug)
if vault_id_value in ['prompt', 'prompt_ask_vault_pass']:
# --vault-id some_name@prompt_ask_vault_pass --vault-id other_name@prompt_ask_vault_pass will be a little
# confusing since it will use the old format without the vault id in the prompt
built_vault_id = vault_id_name or C.DEFAULT_VAULT_IDENTITY
# choose the prompt based on --vault-id=prompt or --ask-vault-pass. --ask-vault-pass
# always gets the old format for Tower compatibility.
# ie, we used --ask-vault-pass, so we need to use the old vault password prompt
# format since Tower needs to match on that format.
prompted_vault_secret = PromptVaultSecret(prompt_formats=prompt_formats[vault_id_value],
vault_id=built_vault_id)
# an empty or invalid password from the prompt triggers a warning before the
# error is re-raised, so the failure is not silent
try:
prompted_vault_secret.load()
except AnsibleError as exc:
display.warning('Error in vault password prompt (%s): %s' % (vault_id_name, exc))
raise
vault_secrets.append((built_vault_id, prompted_vault_secret))
# update loader with new secrets incrementally, so we can load a vault password
# that is encrypted with a vault secret provided earlier
loader.set_vault_secrets(vault_secrets)
continue
# assuming anything else is a password file
display.vvvvv('Reading vault password file: %s' % vault_id_value)
# read vault_pass from a file
file_vault_secret = get_file_vault_secret(filename=vault_id_value,
vault_id=vault_id_name,
loader=loader)
# an invalid password file will error globally
try:
file_vault_secret.load()
except AnsibleError as exc:
display.warning('Error in vault password file loading (%s): %s' % (vault_id_name, to_text(exc)))
raise
if vault_id_name:
vault_secrets.append((vault_id_name, file_vault_secret))
else:
vault_secrets.append((C.DEFAULT_VAULT_IDENTITY, file_vault_secret))
# update loader with as-yet-known vault secrets
loader.set_vault_secrets(vault_secrets)
return vault_secrets
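# Illustrative return value for `--vault-id dev@~/.vault-dev` (assuming the file
# loads): [('dev', <FileVaultSecret>)] -- a list of (vault_id, secret) tuples.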
@staticmethod
def _get_secret(prompt):
secret = getpass.getpass(prompt=prompt)
if secret:
secret = to_unsafe_text(secret)
return secret
@staticmethod
def ask_passwords():
''' prompt for connection and become passwords if needed '''
op = context.CLIARGS
sshpass = None
becomepass = None
become_prompt = ''
become_prompt_method = "BECOME" if C.AGNOSTIC_BECOME_PROMPT else op['become_method'].upper()
try:
become_prompt = "%s password: " % become_prompt_method
if op['ask_pass']:
sshpass = CLI._get_secret("SSH password: ")
become_prompt = "%s password[defaults to SSH password]: " % become_prompt_method
elif op['connection_password_file']:
sshpass = CLI.get_password_from_file(op['connection_password_file'])
if op['become_ask_pass']:
becomepass = CLI._get_secret(become_prompt)
if op['ask_pass'] and becomepass == '':
becomepass = sshpass
elif op['become_password_file']:
becomepass = CLI.get_password_from_file(op['become_password_file'])
except EOFError:
pass
return (sshpass, becomepass)
def validate_conflicts(self, op, runas_opts=False, fork_opts=False):
''' check for conflicting options '''
if fork_opts:
if op.forks < 1:
self.parser.error("The number of processes (--forks) must be >= 1")
return op
@abstractmethod
def init_parser(self, usage="", desc=None, epilog=None):
"""
Create an options parser for most ansible scripts
Subclasses need to implement this method. They will usually call the base class's
init_parser to create a basic version and then add their own options on top of that.
An implementation will look something like this::
def init_parser(self):
super(MyCLI, self).init_parser(usage="My Ansible CLI", inventory_opts=True)
ansible.arguments.option_helpers.add_runas_options(self.parser)
self.parser.add_option('--my-option', dest='my_option', action='store')
"""
self.parser = opt_help.create_base_parser(self.name, usage=usage, desc=desc, epilog=epilog)
@abstractmethod
def post_process_args(self, options):
"""Process the command line args
Subclasses need to implement this method. This method validates and transforms the command
line arguments. It can be used to check whether conflicting values were given, whether filenames
exist, etc.
An implementation will look something like this::
def post_process_args(self, options):
options = super(MyCLI, self).post_process_args(options)
if options.addition and options.subtraction:
raise AnsibleOptionsError('Only one of --addition and --subtraction can be specified')
if isinstance(options.listofhosts, string_types):
options.listofhosts = string_types.split(',')
return options
"""
# process tags
if hasattr(options, 'tags') and not options.tags:
# optparse defaults does not do what's expected
# More specifically, we want `--tags` to be additive. So we cannot
# simply change C.TAGS_RUN's default to ["all"] because then passing
# --tags foo would cause us to have ['all', 'foo']
options.tags = ['all']
if hasattr(options, 'tags') and options.tags:
tags = set()
for tag_set in options.tags:
for tag in tag_set.split(u','):
tags.add(tag.strip())
options.tags = list(tags)
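# e.g. `--tags foo --tags bar,baz` yields options.tags == ['foo', 'bar', 'baz']
# (order not guaranteed, since a set is used to de-duplicate)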
# process skip_tags
if hasattr(options, 'skip_tags') and options.skip_tags:
skip_tags = set()
for tag_set in options.skip_tags:
for tag in tag_set.split(u','):
skip_tags.add(tag.strip())
options.skip_tags = list(skip_tags)
# process inventory options except for CLIs that require their own processing
if hasattr(options, 'inventory') and not self.SKIP_INVENTORY_DEFAULTS:
if options.inventory:
# should always be list
if isinstance(options.inventory, string_types):
options.inventory = [options.inventory]
# Ensure full paths when needed
options.inventory = [unfrackpath(opt, follow=False) if ',' not in opt else opt for opt in options.inventory]
else:
options.inventory = C.DEFAULT_HOST_LIST
return options
def parse(self):
"""Parse the command line args
This method parses the command line arguments. It uses the parser
stored in the self.parser attribute and saves the args and options in
context.CLIARGS.
Subclasses need to implement two helper methods, init_parser() and post_process_args() which
are called from this function before and after parsing the arguments.
"""
self.init_parser()
if HAS_ARGCOMPLETE:
argcomplete.autocomplete(self.parser)
try:
options = self.parser.parse_args(self.args[1:])
except SystemExit as e:
if e.code != 0:
self.parser.exit(status=2, message=" \n%s" % self.parser.format_help())
raise
options = self.post_process_args(options)
context._init_global_context(options)
@staticmethod
def version_info(gitinfo=False):
''' return full ansible version info '''
if gitinfo:
# expensive call, use with care
ansible_version_string = opt_help.version()
else:
ansible_version_string = __version__
ansible_version = ansible_version_string.split()[0]
ansible_versions = ansible_version.split('.')
for counter in range(len(ansible_versions)):
if ansible_versions[counter] == "":
ansible_versions[counter] = 0
try:
ansible_versions[counter] = int(ansible_versions[counter])
except Exception:
pass
if len(ansible_versions) < 3:
for counter in range(len(ansible_versions), 3):
ansible_versions.append(0)
return {'string': ansible_version_string.strip(),
'full': ansible_version,
'major': ansible_versions[0],
'minor': ansible_versions[1],
'revision': ansible_versions[2]}
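# Illustrative result (assuming __version__ == '2.12.0'):
#   {'string': '2.12.0', 'full': '2.12.0', 'major': 2, 'minor': 12, 'revision': 0}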
@staticmethod
def pager(text):
''' find reasonable way to display text '''
# this is a much simpler form of what is in pydoc.py
if not sys.stdout.isatty():
display.display(text, screen_only=True)
elif 'PAGER' in os.environ:
if sys.platform == 'win32':
display.display(text, screen_only=True)
else:
CLI.pager_pipe(text, os.environ['PAGER'])
else:
p = subprocess.Popen('less --version', shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p.communicate()
if p.returncode == 0:
CLI.pager_pipe(text, 'less')
else:
display.display(text, screen_only=True)
@staticmethod
def pager_pipe(text, cmd):
''' pipe text through a pager '''
if 'LESS' not in os.environ:
os.environ['LESS'] = CLI.LESS_OPTS
try:
cmd = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE, stdout=sys.stdout)
cmd.communicate(input=to_bytes(text))
except IOError:
pass
except KeyboardInterrupt:
pass
@staticmethod
def _play_prereqs():
options = context.CLIARGS
# all needs loader
loader = DataLoader()
basedir = options.get('basedir', False)
if basedir:
loader.set_basedir(basedir)
add_all_plugin_dirs(basedir)
AnsibleCollectionConfig.playbook_paths = basedir
default_collection = _get_collection_name_from_path(basedir)
if default_collection:
display.warning(u'running with default collection {0}'.format(default_collection))
AnsibleCollectionConfig.default_collection = default_collection
vault_ids = list(options['vault_ids'])
default_vault_ids = C.DEFAULT_VAULT_IDENTITY_LIST
vault_ids = default_vault_ids + vault_ids
vault_secrets = CLI.setup_vault_secrets(loader,
vault_ids=vault_ids,
vault_password_files=list(options['vault_password_files']),
ask_vault_pass=options['ask_vault_pass'],
auto_prompt=False)
loader.set_vault_secrets(vault_secrets)
# create the inventory, and filter it based on the subset specified (if any)
inventory = InventoryManager(loader=loader, sources=options['inventory'])
# create the variable manager, which will be shared throughout
# the code, ensuring a consistent view of global variables
variable_manager = VariableManager(loader=loader, inventory=inventory, version_info=CLI.version_info(gitinfo=False))
return loader, inventory, variable_manager
@staticmethod
def get_host_list(inventory, subset, pattern='all'):
no_hosts = False
if len(inventory.list_hosts()) == 0:
# Empty inventory
if C.LOCALHOST_WARNING and pattern not in C.LOCALHOST:
display.warning("provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'")
no_hosts = True
inventory.subset(subset)
hosts = inventory.list_hosts(pattern)
if not hosts and no_hosts is False:
raise AnsibleError("Specified hosts and/or --limit does not match any hosts")
return hosts
@staticmethod
def get_password_from_file(pwd_file):
b_pwd_file = to_bytes(pwd_file)
secret = None
if b_pwd_file == b'-':
# ensure its read as bytes
secret = sys.stdin.buffer.read()
elif not os.path.exists(b_pwd_file):
raise AnsibleError("The password file %s was not found" % pwd_file)
elif os.access(b_pwd_file, os.X_OK):  # os.path has no is_executable(); os.access checks the executable bit
display.vvvv(u'The password file %s is a script.' % to_text(pwd_file))
cmd = [b_pwd_file]
try:
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
except OSError as e:
raise AnsibleError("Problem occured when trying to run the password script %s (%s)."
" If this is not a script, remove the executable bit from the file." % (pwd_file, e))
stdout, stderr = p.communicate()
if p.returncode != 0:
raise AnsibleError("The password script %s returned an error (rc=%s): %s" % (pwd_file, p.returncode, stderr))
secret = stdout
else:
try:
f = open(b_pwd_file, "rb")
secret = f.read().strip()
f.close()
except (OSError, IOError) as e:
raise AnsibleError("Could not read password file %s: %s" % (pwd_file, e))
secret = secret.strip(b'\r\n')
if not secret:
raise AnsibleError('Empty password was provided from file (%s)' % pwd_file)
return to_unsafe_text(secret)
@classmethod
def cli_executor(cls, args=None):
if args is None:
args = sys.argv
try:
display.debug("starting run")
ansible_dir = Path("~/.ansible").expanduser()
try:
ansible_dir.mkdir(mode=0o700)
except OSError as exc:
if exc.errno != errno.EEXIST:
display.warning(
"Failed to create the directory '%s': %s" % (ansible_dir, to_text(exc, errors='surrogate_or_replace'))
)
else:
display.debug("Created the '%s' directory" % ansible_dir)
try:
args = [to_text(a, errors='surrogate_or_strict') for a in args]
except UnicodeError:
display.error('Command line args are not in utf-8, unable to continue. Ansible currently only understands utf-8')
display.display(u"The full traceback was:\n\n%s" % to_text(traceback.format_exc()))
exit_code = 6
else:
cli = cls(args)
exit_code = cli.run()
except AnsibleOptionsError as e:
cli.parser.print_help()
display.error(to_text(e), wrap_text=False)
exit_code = 5
except AnsibleParserError as e:
display.error(to_text(e), wrap_text=False)
exit_code = 4
# TQM takes care of these, but leaving comment to reserve the exit codes
# except AnsibleHostUnreachable as e:
# display.error(str(e))
# exit_code = 3
# except AnsibleHostFailed as e:
# display.error(str(e))
# exit_code = 2
except AnsibleError as e:
display.error(to_text(e), wrap_text=False)
exit_code = 1
except KeyboardInterrupt:
display.error("User interrupted execution")
exit_code = 99
except Exception as e:
if C.DEFAULT_DEBUG:
# Show raw stacktraces in debug mode; this also allows pdb to
# enter post-mortem mode.
raise
have_cli_options = bool(context.CLIARGS)
display.error("Unexpected Exception, this is probably a bug: %s" % to_text(e), wrap_text=False)
if not have_cli_options or have_cli_options and context.CLIARGS['verbosity'] > 2:
log_only = False
if hasattr(e, 'orig_exc'):
display.vvv('\nexception type: %s' % to_text(type(e.orig_exc)))
why = to_text(e.orig_exc)
if to_text(e) != why:
display.vvv('\noriginal msg: %s' % why)
else:
display.display("to see the full traceback, use -vvv")
log_only = True
display.display(u"the full traceback was:\n\n%s" % to_text(traceback.format_exc()), log_only=log_only)
exit_code = 250
sys.exit(exit_code)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,938 |
Ignore missing/failed vault ids in vault_identity_list
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Our ansible.cfg lists all well-known keys, but ansible-vault fails to run if any of the keys are missing. We only publish, to our CI and to each user environment, the keys that are appropriate for that stack. This means an operations person cannot run ansible without having access to every known environment's keys. To us, this makes the use of multiple vault ids not all that useful.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-vault
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.7.10
config file = /Users/dkimsey/ansible/ansible.cfg
configured module search path = ['/Users/dkimsey/ansible/library']
ansible python module location = /Users/dkimsey/.virtualenvs/ansible-2.7/lib/python3.7/site-packages/ansible
executable location = /Users/dkimsey/.virtualenvs/ansible-2.7/bin/ansible
python version = 3.7.3 (default, May 1 2019, 10:48:04) [Clang 10.0.0 (clang-1000.11.45.5)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
DEFAULT_VAULT_IDENTITY_LIST(/Users/dkimsey/ansible/ansible.cfg) = ['qa@~/.vault-qa', 'stg@~/.vault-staging', 'prod@~/.vault-production']
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
n/a
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```
$ echo "my-secret-key" > ~/.vault-staging
$ ansible-vault encrypt_string --vault-id=stg --name client
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I'd expect it to skip missing vault keys, and perhaps log in verbose mode that it tried to access a key file and failed.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```
ansible-vault 2.7.10
config file = /Users/dkimsey/ansible/ansible.cfg
configured module search path = ['/Users/dkimsey/ansible/library']
ansible python module location = /Users/dkimsey/.virtualenvs/ansible-2.7/lib/python3.7/site-packages/ansible
executable location = /Users/dkimsey/.virtualenvs/ansible-2.7/bin/ansible-vault
python version = 3.7.3 (default, May 1 2019, 10:48:04) [Clang 10.0.0 (clang-1000.11.45.5)]
Using /Users/dkimsey/ansible/ansible.cfg as config file
ERROR! The vault password file /Users/dkimsey/.vault-qa was not found
```
|
https://github.com/ansible/ansible/issues/58938
|
https://github.com/ansible/ansible/pull/75540
|
47ee28222794010ec447b7b6d73189eed1b46581
|
8bbecc7cac96203fedf99953a06eb32d74c7a4d7
| 2019-07-10T17:25:50Z |
python
| 2021-11-01T15:15:50Z |
test/integration/targets/ansible-vault/runme.sh
|
#!/usr/bin/env bash
set -euvx
source virtualenv.sh
MYTMPDIR=$(mktemp -d 2>/dev/null || mktemp -d -t 'mytmpdir')
trap 'rm -rf "${MYTMPDIR}"' EXIT
# create a test file
TEST_FILE="${MYTMPDIR}/test_file"
echo "This is a test file" > "${TEST_FILE}"
TEST_FILE_1_2="${MYTMPDIR}/test_file_1_2"
echo "This is a test file for format 1.2" > "${TEST_FILE_1_2}"
TEST_FILE_ENC_PASSWORD="${MYTMPDIR}/test_file_enc_password"
echo "This is a test file for encrypted with a vault password that is itself vault encrypted" > "${TEST_FILE_ENC_PASSWORD}"
TEST_FILE_ENC_PASSWORD_DEFAULT="${MYTMPDIR}/test_file_enc_password_default"
echo "This is a test file for encrypted with a vault password that is itself vault encrypted using --encrypted-vault-id default" > "${TEST_FILE_ENC_PASSWORD_DEFAULT}"
TEST_FILE_OUTPUT="${MYTMPDIR}/test_file_output"
TEST_FILE_EDIT="${MYTMPDIR}/test_file_edit"
echo "This is a test file for edit" > "${TEST_FILE_EDIT}"
TEST_FILE_EDIT2="${MYTMPDIR}/test_file_edit2"
echo "This is a test file for edit2" > "${TEST_FILE_EDIT2}"
# test case for https://github.com/ansible/ansible/issues/35834
# (being prompted for new password on vault-edit with no configured passwords)
TEST_FILE_EDIT3="${MYTMPDIR}/test_file_edit3"
echo "This is a test file for edit3" > "${TEST_FILE_EDIT3}"
# ansible-config view
ansible-config view
# ansible-config
ansible-config dump --only-changed
ansible-vault encrypt "$@" --vault-id vault-password "${TEST_FILE_EDIT3}"
# EDITOR=./faux-editor.py ansible-vault edit "$@" "${TEST_FILE_EDIT3}"
EDITOR=./faux-editor.py ansible-vault edit --vault-id vault-password -vvvvv "${TEST_FILE_EDIT3}"
echo $?
# view the vault encrypted password file
ansible-vault view "$@" --vault-id vault-password encrypted-vault-password
# encrypt with a password from a vault encrypted password file and multiple vault-ids
# should fail because we don't know which vault id to use to encrypt with
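# note: the trailing '&& :' (used throughout this script) keeps 'set -e' from
# aborting on an expected failure, so the exit status can still be read from $?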
ansible-vault encrypt "$@" --vault-id vault-password --vault-id encrypted-vault-password "${TEST_FILE_ENC_PASSWORD}" && :
WRONG_RC=$?
echo "rc was $WRONG_RC (5 is expected)"
[ $WRONG_RC -eq 5 ]
# try to view the file encrypted with the vault-password we didn't specify
# to verify we didn't choose the wrong vault-id
ansible-vault view "$@" --vault-id vault-password encrypted-vault-password
FORMAT_1_1_HEADER="\$ANSIBLE_VAULT;1.1;AES256"
FORMAT_1_2_HEADER="\$ANSIBLE_VAULT;1.2;AES256"
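# (a 1.2 header additionally embeds the vault-id label after the cipher name,
# e.g. "$ANSIBLE_VAULT;1.2;AES256;vault_password")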
VAULT_PASSWORD_FILE=vault-password
# new format, view, using password client script
ansible-vault view "$@" --vault-id [email protected] format_1_1_AES256.yml
# view, using password client script, unknown vault/keyname
ansible-vault view "$@" --vault-id [email protected] format_1_1_AES256.yml && :
# Use linux setsid to test without a tty. No setsid if osx/bsd though...
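# (setsid detaches the controlling tty, so --ask-vault-pass cannot prompt
# interactively and must read the password from the piped stdin)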
if [ -x "$(command -v setsid)" ]; then
# tests related to https://github.com/ansible/ansible/issues/30993
CMD='ansible-playbook -i ../../inventory -vvvvv --ask-vault-pass test_vault.yml'
setsid sh -c "echo test-vault-password|${CMD}" < /dev/null > log 2>&1 && :
WRONG_RC=$?
cat log
echo "rc was $WRONG_RC (0 is expected)"
[ $WRONG_RC -eq 0 ]
setsid sh -c 'tty; ansible-vault view --ask-vault-pass -vvvvv test_vault.yml' < /dev/null > log 2>&1 && :
WRONG_RC=$?
echo "rc was $WRONG_RC (1 is expected)"
[ $WRONG_RC -eq 1 ]
cat log
setsid sh -c 'tty; echo passbhkjhword|ansible-playbook -i ../../inventory -vvvvv --ask-vault-pass test_vault.yml' < /dev/null > log 2>&1 && :
WRONG_RC=$?
echo "rc was $WRONG_RC (1 is expected)"
[ $WRONG_RC -eq 1 ]
cat log
setsid sh -c 'tty; echo test-vault-password |ansible-playbook -i ../../inventory -vvvvv --ask-vault-pass test_vault.yml' < /dev/null > log 2>&1
echo $?
cat log
setsid sh -c 'tty; echo test-vault-password|ansible-playbook -i ../../inventory -vvvvv --ask-vault-pass test_vault.yml' < /dev/null > log 2>&1
echo $?
cat log
setsid sh -c 'tty; echo test-vault-password |ansible-playbook -i ../../inventory -vvvvv --ask-vault-pass test_vault.yml' < /dev/null > log 2>&1
echo $?
cat log
setsid sh -c 'tty; echo test-vault-password|ansible-vault view --ask-vault-pass -vvvvv vaulted.inventory' < /dev/null > log 2>&1
echo $?
cat log
# test using --ask-vault-password option
CMD='ansible-playbook -i ../../inventory -vvvvv --ask-vault-password test_vault.yml'
setsid sh -c "echo test-vault-password|${CMD}" < /dev/null > log 2>&1 && :
WRONG_RC=$?
cat log
echo "rc was $WRONG_RC (0 is expected)"
[ $WRONG_RC -eq 0 ]
fi
ansible-vault view "$@" --vault-password-file vault-password-wrong format_1_1_AES256.yml && :
WRONG_RC=$?
echo "rc was $WRONG_RC (1 is expected)"
[ $WRONG_RC -eq 1 ]
set -eux
# new format, view
ansible-vault view "$@" --vault-password-file vault-password format_1_1_AES256.yml
# new format, view with vault-id
ansible-vault view "$@" --vault-id=vault-password format_1_1_AES256.yml
# new format, view, using password script
ansible-vault view "$@" --vault-password-file password-script.py format_1_1_AES256.yml
# new format, view, using password script with vault-id
ansible-vault view "$@" --vault-id password-script.py format_1_1_AES256.yml
# new 1.2 format, view
ansible-vault view "$@" --vault-password-file vault-password format_1_2_AES256.yml
# new 1.2 format, view with vault-id
ansible-vault view "$@" --vault-id=test_vault_id@vault-password format_1_2_AES256.yml
# new 1.2 format, view, using password script
ansible-vault view "$@" --vault-password-file password-script.py format_1_2_AES256.yml
# new 1.2 format, view, using password script with vault-id
ansible-vault view "$@" --vault-id password-script.py format_1_2_AES256.yml
# newish 1.1 format, view, using a vault-id list from config env var
ANSIBLE_VAULT_IDENTITY_LIST='wrong-password@vault-password-wrong,default@vault-password' ansible-vault view "$@" --vault-id password-script.py format_1_1_AES256.yml
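# (each label@source entry in the identity list is tried in turn, so the wrong
# password above is skipped and default@vault-password succeeds)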
# new 1.2 format, view, ENFORCE_IDENTITY_MATCH=true, should fail, no 'test_vault_id' vault_id
ANSIBLE_VAULT_ID_MATCH=1 ansible-vault view "$@" --vault-password-file vault-password format_1_2_AES256.yml && :
WRONG_RC=$?
echo "rc was $WRONG_RC (1 is expected)"
[ $WRONG_RC -eq 1 ]
# new 1.2 format, view with vault-id, ENFORCE_IDENTITY_MATCH=true, should work, 'test_vault_id' is provided
ANSIBLE_VAULT_ID_MATCH=1 ansible-vault view "$@" --vault-id=test_vault_id@vault-password format_1_2_AES256.yml
# new 1.2 format, view, using password script, ENFORCE_IDENTITY_MATCH=true, should fail, no 'test_vault_id'
ANSIBLE_VAULT_ID_MATCH=1 ansible-vault view "$@" --vault-password-file password-script.py format_1_2_AES256.yml && :
WRONG_RC=$?
echo "rc was $WRONG_RC (1 is expected)"
[ $WRONG_RC -eq 1 ]
# new 1.2 format, view, using password script with vault-id, ENFORCE_IDENTITY_MATCH=true, should fail
ANSIBLE_VAULT_ID_MATCH=1 ansible-vault view "$@" --vault-id password-script.py format_1_2_AES256.yml && :
WRONG_RC=$?
echo "rc was $WRONG_RC (1 is expected)"
[ $WRONG_RC -eq 1 ]
# new 1.2 format, view, using password script with vault-id, ENFORCE_IDENTITY_MATCH=true, 'test_vault_id' provided should work
ANSIBLE_VAULT_ID_MATCH=1 ansible-vault view "$@" [email protected] format_1_2_AES256.yml
# test with a default vault password set via config/env, right password
ANSIBLE_VAULT_PASSWORD_FILE=vault-password ansible-vault view "$@" format_1_1_AES256.yml
# test with a default vault password set via config/env, wrong password
ANSIBLE_VAULT_PASSWORD_FILE=vault-password-wrong ansible-vault view "$@" format_1_1_AES.yml && :
WRONG_RC=$?
echo "rc was $WRONG_RC (1 is expected)"
[ $WRONG_RC -eq 1 ]
# test with a default vault-id list set via config/env, right password
ANSIBLE_VAULT_PASSWORD_FILE=wrong@vault-password-wrong,correct@vault-password ansible-vault view "$@" format_1_1_AES.yml && :
# test with a default vault-id list set via config/env, wrong passwords
ANSIBLE_VAULT_PASSWORD_FILE=wrong@vault-password-wrong,alsowrong@vault-password-wrong ansible-vault view "$@" format_1_1_AES.yml && :
WRONG_RC=$?
echo "rc was $WRONG_RC (1 is expected)"
[ $WRONG_RC -eq 1 ]
# try specifying a --encrypt-vault-id that doesn't exist; should exit with an error indicating
# the mismatch between --encrypt-vault-id and the known vault-ids
ansible-vault encrypt "$@" --vault-password-file vault-password --encrypt-vault-id doesnt_exist "${TEST_FILE}" && :
WRONG_RC=$?
echo "rc was $WRONG_RC (1 is expected)"
[ $WRONG_RC -eq 1 ]
# encrypt it
ansible-vault encrypt "$@" --vault-password-file vault-password "${TEST_FILE}"
ansible-vault view "$@" --vault-password-file vault-password "${TEST_FILE}"
# view with multiple vault-password files, including a wrong one
ansible-vault view "$@" --vault-password-file vault-password --vault-password-file vault-password-wrong "${TEST_FILE}"
# view with multiple vault-password files, including a wrong one, using vault-id
ansible-vault view "$@" --vault-id vault-password --vault-id vault-password-wrong "${TEST_FILE}"
# And with the password files specified in a different order
ansible-vault view "$@" --vault-password-file vault-password-wrong --vault-password-file vault-password "${TEST_FILE}"
# And with the password files specified in a different order, using vault-id
ansible-vault view "$@" --vault-id vault-password-wrong --vault-id vault-password "${TEST_FILE}"
# And with the password files specified in a different order, using --vault-id and non default vault_ids
ansible-vault view "$@" --vault-id test_vault_id@vault-password-wrong --vault-id test_vault_id@vault-password "${TEST_FILE}"
ansible-vault decrypt "$@" --vault-password-file vault-password "${TEST_FILE}"
# encrypt it, using a vault_id so we write a 1.2 format file
ansible-vault encrypt "$@" --vault-id test_vault_1_2@vault-password "${TEST_FILE_1_2}"
ansible-vault view "$@" --vault-id vault-password "${TEST_FILE_1_2}"
ansible-vault view "$@" --vault-id test_vault_1_2@vault-password "${TEST_FILE_1_2}"
# view with multiple vault-password files, including a wrong one
ansible-vault view "$@" --vault-id vault-password --vault-id wrong_password@vault-password-wrong "${TEST_FILE_1_2}"
# And with the password files specified in a different order, using vault-id
ansible-vault view "$@" --vault-id vault-password-wrong --vault-id vault-password "${TEST_FILE_1_2}"
# And with the password files specified in a different order, using --vault-id and non default vault_ids
ansible-vault view "$@" --vault-id test_vault_id@vault-password-wrong --vault-id test_vault_id@vault-password "${TEST_FILE_1_2}"
ansible-vault decrypt "$@" --vault-id test_vault_1_2@vault-password "${TEST_FILE_1_2}"
# multiple vault passwords
ansible-vault view "$@" --vault-password-file vault-password --vault-password-file vault-password-wrong format_1_1_AES256.yml
# multiple vault passwords, --vault-id
ansible-vault view "$@" --vault-id test_vault_id@vault-password --vault-id test_vault_id@vault-password-wrong format_1_1_AES256.yml
# encrypt it, with password from password script
ansible-vault encrypt "$@" --vault-password-file password-script.py "${TEST_FILE}"
ansible-vault view "$@" --vault-password-file password-script.py "${TEST_FILE}"
ansible-vault decrypt "$@" --vault-password-file password-script.py "${TEST_FILE}"
# encrypt it, with password from password script
ansible-vault encrypt "$@" --vault-id [email protected] "${TEST_FILE}"
ansible-vault view "$@" --vault-id [email protected] "${TEST_FILE}"
ansible-vault decrypt "$@" --vault-id [email protected] "${TEST_FILE}"
# new password file for rekeyed file
NEW_VAULT_PASSWORD="${MYTMPDIR}/new-vault-password"
echo "newpassword" > "${NEW_VAULT_PASSWORD}"
ansible-vault encrypt "$@" --vault-password-file vault-password "${TEST_FILE}"
ansible-vault rekey "$@" --vault-password-file vault-password --new-vault-password-file "${NEW_VAULT_PASSWORD}" "${TEST_FILE}"
# --new-vault-password-file and --new-vault-id should cause options error
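# (status 2 is assumed here to come from the CLI option parser rejecting the
# conflicting flags as a usage error)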
ansible-vault rekey "$@" --vault-password-file vault-password --new-vault-id=foobar --new-vault-password-file "${NEW_VAULT_PASSWORD}" "${TEST_FILE}" && :
WRONG_RC=$?
echo "rc was $WRONG_RC (2 is expected)"
[ $WRONG_RC -eq 2 ]
ansible-vault view "$@" --vault-password-file "${NEW_VAULT_PASSWORD}" "${TEST_FILE}"
# view file with unicode in filename
ansible-vault view "$@" --vault-password-file vault-password vault-café.yml
# view with old password file and new password file
ansible-vault view "$@" --vault-password-file "${NEW_VAULT_PASSWORD}" --vault-password-file vault-password "${TEST_FILE}"
# view with old password file and new password file, different order
ansible-vault view "$@" --vault-password-file vault-password --vault-password-file "${NEW_VAULT_PASSWORD}" "${TEST_FILE}"
# view with old password file and new password file and another wrong
ansible-vault view "$@" --vault-password-file "${NEW_VAULT_PASSWORD}" --vault-password-file vault-password-wrong --vault-password-file vault-password "${TEST_FILE}"
# view with old password file and new password file and another wrong, using --vault-id
ansible-vault view "$@" --vault-id "tmp_new_password@${NEW_VAULT_PASSWORD}" --vault-id wrong_password@vault-password-wrong --vault-id myorg@vault-password "${TEST_FILE}"
ansible-vault decrypt "$@" --vault-password-file "${NEW_VAULT_PASSWORD}" "${TEST_FILE}"
# reading/writing to/from stdin/stdin (See https://github.com/ansible/ansible/issues/23567)
ansible-vault encrypt "$@" --vault-password-file "${VAULT_PASSWORD_FILE}" --output="${TEST_FILE_OUTPUT}" < "${TEST_FILE}"
OUTPUT=$(ansible-vault decrypt "$@" --vault-password-file "${VAULT_PASSWORD_FILE}" --output=- < "${TEST_FILE_OUTPUT}")
echo "${OUTPUT}" | grep 'This is a test file'
OUTPUT_DASH=$(ansible-vault decrypt "$@" --vault-password-file "${VAULT_PASSWORD_FILE}" --output=- "${TEST_FILE_OUTPUT}")
echo "${OUTPUT_DASH}" | grep 'This is a test file'
OUTPUT_DASH_SPACE=$(ansible-vault decrypt "$@" --vault-password-file "${VAULT_PASSWORD_FILE}" --output - "${TEST_FILE_OUTPUT}")
echo "${OUTPUT_DASH_SPACE}" | grep 'This is a test file'
# test using an empty vault password file
ansible-vault view "$@" --vault-password-file empty-password format_1_1_AES256.yml && :
WRONG_RC=$?
echo "rc was $WRONG_RC (1 is expected)"
[ $WRONG_RC -eq 1 ]
ansible-vault view "$@" --vault-id=empty@empty-password --vault-password-file empty-password format_1_1_AES256.yml && :
WRONG_RC=$?
echo "rc was $WRONG_RC (1 is expected)"
[ $WRONG_RC -eq 1 ]
echo 'foo' > some_file.txt
ansible-vault encrypt "$@" --vault-password-file empty-password some_file.txt && :
WRONG_RC=$?
echo "rc was $WRONG_RC (1 is expected)"
[ $WRONG_RC -eq 1 ]
ansible-vault encrypt_string "$@" --vault-password-file "${NEW_VAULT_PASSWORD}" "a test string"
# Test with multiple vault password files
# https://github.com/ansible/ansible/issues/57172
env ANSIBLE_VAULT_PASSWORD_FILE=vault-password ansible-vault encrypt_string "$@" --vault-password-file "${NEW_VAULT_PASSWORD}" --encrypt-vault-id default "a test string"
ansible-vault encrypt_string "$@" --vault-password-file "${NEW_VAULT_PASSWORD}" --name "blippy" "a test string names blippy"
ansible-vault encrypt_string "$@" --vault-id "${NEW_VAULT_PASSWORD}" "a test string"
ansible-vault encrypt_string "$@" --vault-id "${NEW_VAULT_PASSWORD}" --name "blippy" "a test string names blippy"
# from stdin
ansible-vault encrypt_string "$@" --vault-password-file "${NEW_VAULT_PASSWORD}" < "${TEST_FILE}"
ansible-vault encrypt_string "$@" --vault-password-file "${NEW_VAULT_PASSWORD}" --stdin-name "the_var_from_stdin" < "${TEST_FILE}"
# write to file
ansible-vault encrypt_string "$@" --vault-password-file "${NEW_VAULT_PASSWORD}" --name "blippy" "a test string names blippy" --output "${MYTMPDIR}/enc_string_test_file"
# test ansible-vault edit with a faux editor
ansible-vault encrypt "$@" --vault-password-file vault-password "${TEST_FILE_EDIT}"
# edit a 1.1 format with no vault-id, should stay 1.1
EDITOR=./faux-editor.py ansible-vault edit "$@" --vault-password-file vault-password "${TEST_FILE_EDIT}"
head -1 "${TEST_FILE_EDIT}" | grep "${FORMAT_1_1_HEADER}"
# edit a 1.1 format with vault-id, should stay 1.1
cat "${TEST_FILE_EDIT}"
EDITOR=./faux-editor.py ansible-vault edit "$@" --vault-id vault_password@vault-password "${TEST_FILE_EDIT}"
cat "${TEST_FILE_EDIT}"
head -1 "${TEST_FILE_EDIT}" | grep "${FORMAT_1_1_HEADER}"
ansible-vault encrypt "$@" --vault-id vault_password@vault-password "${TEST_FILE_EDIT2}"
# verify that we aren't prompted for a new vault password on edit if we are running interactively (i.e., with prompts)
# have to use setsid and --ask-vault-pass to force a prompt to simulate.
# See https://github.com/ansible/ansible/issues/35834
setsid sh -c 'tty; echo password |ansible-vault edit --ask-vault-pass vault_test.yml' < /dev/null > log 2>&1 && :
grep 'New Vault password' log && :
WRONG_RC=$?
echo "The stdout log had 'New Vault password' in it and it is not supposed to. rc of grep was $WRONG_RC (1 is expected)"
[ $WRONG_RC -eq 1 ]
# edit a 1.2 format with vault id, should keep vault id and 1.2 format
EDITOR=./faux-editor.py ansible-vault edit "$@" --vault-id vault_password@vault-password "${TEST_FILE_EDIT2}"
head -1 "${TEST_FILE_EDIT2}" | grep "${FORMAT_1_2_HEADER};vault_password"
# edit a 1.2 file with no vault-id, should keep vault id and 1.2 format
EDITOR=./faux-editor.py ansible-vault edit "$@" --vault-password-file vault-password "${TEST_FILE_EDIT2}"
head -1 "${TEST_FILE_EDIT2}" | grep "${FORMAT_1_2_HEADER};vault_password"
# encrypt with a password from a vault encrypted password file and multiple vault-ids
# should fail because we don't know which vault id to use to encrypt with
ansible-vault encrypt "$@" --vault-id vault-password --vault-id encrypted-vault-password "${TEST_FILE_ENC_PASSWORD}" && :
WRONG_RC=$?
echo "rc was $WRONG_RC (5 is expected)"
[ $WRONG_RC -eq 5 ]
# encrypt with a password from a vault encrypted password file and multiple vault-ids
# but this time use --encrypt-vault-id, specifying vault-id names (instead of default)
# ansible-vault encrypt "$@" --vault-id from_vault_password@vault-password --vault-id from_encrypted_vault_password@encrypted-vault-password --encrypt-vault-id from_encrypted_vault_password "${TEST_FILE_ENC_PASSWORD}"
# try to view the file encrypted with the vault-password we didn't specify
# to verify we didn't choose the wrong vault-id
# ansible-vault view "$@" --vault-id vault-password "${TEST_FILE_ENC_PASSWORD}" && :
# WRONG_RC=$?
# echo "rc was $WRONG_RC (1 is expected)"
# [ $WRONG_RC -eq 1 ]
ansible-vault encrypt "$@" --vault-id vault-password "${TEST_FILE_ENC_PASSWORD}"
# view the file encrypted with a password from a vault encrypted password file
ansible-vault view "$@" --vault-id vault-password --vault-id encrypted-vault-password "${TEST_FILE_ENC_PASSWORD}"
# try to view the file encrypted with a password from a vault encrypted password file but without the password to the password file.
# This should fail with an error.
ansible-vault view "$@" --vault-id encrypted-vault-password "${TEST_FILE_ENC_PASSWORD}" && :
WRONG_RC=$?
echo "rc was $WRONG_RC (1 is expected)"
[ $WRONG_RC -eq 1 ]
# test playbooks using vaulted files
ansible-playbook test_vault.yml -i ../../inventory -v "$@" --vault-password-file vault-password --list-tasks
ansible-playbook test_vault.yml -i ../../inventory -v "$@" --vault-password-file vault-password --list-hosts
ansible-playbook test_vault.yml -i ../../inventory -v "$@" --vault-password-file vault-password --syntax-check
ansible-playbook test_vault.yml -i ../../inventory -v "$@" --vault-password-file vault-password
ansible-playbook test_vault_embedded.yml -i ../../inventory -v "$@" --vault-password-file vault-password --syntax-check
ansible-playbook test_vault_embedded.yml -i ../../inventory -v "$@" --vault-password-file vault-password
ansible-playbook test_vaulted_inventory.yml -i vaulted.inventory -v "$@" --vault-password-file vault-password
ansible-playbook test_vaulted_template.yml -i ../../inventory -v "$@" --vault-password-file vault-password
# test using --vault-pass-file option
ansible-playbook test_vault.yml -i ../../inventory -v "$@" --vault-pass-file vault-password
# install TOML to parse the toml inventory
# test playbooks using vaulted files (toml)
pip install toml
ansible-vault encrypt ./inventory.toml -v "$@" --vault-password-file=./vault-password
ansible-playbook test_vaulted_inventory_toml.yml -i ./inventory.toml -v "$@" --vault-password-file vault-password
ansible-vault decrypt ./inventory.toml -v "$@" --vault-password-file=./vault-password
# test a playbook with a host_var whose value is non-ascii utf8 (see https://github.com/ansible/ansible/issues/37258)
ansible-playbook -i ../../inventory -v "$@" --vault-id vault-password test_vaulted_utf8_value.yml
# test with password from password script
ansible-playbook test_vault.yml -i ../../inventory -v "$@" --vault-password-file password-script.py
ansible-playbook test_vault_embedded.yml -i ../../inventory -v "$@" --vault-password-file password-script.py
# with multiple password files
ansible-playbook test_vault.yml -i ../../inventory -v "$@" --vault-password-file vault-password --vault-password-file vault-password-wrong
ansible-playbook test_vault.yml -i ../../inventory -v "$@" --vault-password-file vault-password-wrong --vault-password-file vault-password
ansible-playbook test_vault_embedded.yml -i ../../inventory -v "$@" --vault-password-file vault-password --vault-password-file vault-password-wrong --syntax-check
ansible-playbook test_vault_embedded.yml -i ../../inventory -v "$@" --vault-password-file vault-password-wrong --vault-password-file vault-password
# test with a default vault password file set in config
ANSIBLE_VAULT_PASSWORD_FILE=vault-password ansible-playbook test_vault_embedded.yml -i ../../inventory -v "$@" --vault-password-file vault-password-wrong
# test using vault_identity_list config
ANSIBLE_VAULT_IDENTITY_LIST='wrong-password@vault-password-wrong,default@vault-password' ansible-playbook test_vault.yml -i ../../inventory -v "$@"
# test that we can have a vault encrypted yaml file that includes embedded vault vars
# that were encrypted with a different vault secret
ansible-playbook test_vault_file_encrypted_embedded.yml -i ../../inventory "$@" --vault-id encrypted_file_encrypted_var_password --vault-id vault-password
# with multiple password files, --vault-id, ordering
ansible-playbook test_vault.yml -i ../../inventory -v "$@" --vault-id vault-password --vault-id vault-password-wrong
ansible-playbook test_vault.yml -i ../../inventory -v "$@" --vault-id vault-password-wrong --vault-id vault-password
ansible-playbook test_vault_embedded.yml -i ../../inventory -v "$@" --vault-id vault-password --vault-id vault-password-wrong --syntax-check
ansible-playbook test_vault_embedded.yml -i ../../inventory -v "$@" --vault-id vault-password-wrong --vault-id vault-password
# test with multiple password files, including a script, and a wrong password
ansible-playbook test_vault_embedded.yml -i ../../inventory -v "$@" --vault-password-file vault-password-wrong --vault-password-file password-script.py --vault-password-file vault-password
# test with multiple password files, including a script, and a wrong password, and a mix of --vault-id and --vault-password-file
ansible-playbook test_vault_embedded.yml -i ../../inventory -v "$@" --vault-password-file vault-password-wrong --vault-id password-script.py --vault-id vault-password
# test with multiple password files, including a script, and a wrong password, and a mix of --vault-id and --vault-password-file
ansible-playbook test_vault_embedded_ids.yml -i ../../inventory -v "$@" \
--vault-password-file vault-password-wrong \
--vault-id password-script.py --vault-id example1@example1_password \
--vault-id example2@example2_password --vault-password-file example3_password \
--vault-id vault-password
# with wrong password
ansible-playbook test_vault.yml -i ../../inventory -v "$@" --vault-password-file vault-password-wrong && :
WRONG_RC=$?
echo "rc was $WRONG_RC (1 is expected)"
[ $WRONG_RC -eq 1 ]
# with multiple wrong passwords
ansible-playbook test_vault.yml -i ../../inventory -v "$@" --vault-password-file vault-password-wrong --vault-password-file vault-password-wrong && :
WRONG_RC=$?
echo "rc was $WRONG_RC (1 is expected)"
[ $WRONG_RC -eq 1 ]
# with wrong password, --vault-id
ansible-playbook test_vault.yml -i ../../inventory -v "$@" --vault-id vault-password-wrong && :
WRONG_RC=$?
echo "rc was $WRONG_RC (1 is expected)"
[ $WRONG_RC -eq 1 ]
# with multiple wrong passwords with --vault-id
ansible-playbook test_vault.yml -i ../../inventory -v "$@" --vault-id vault-password-wrong --vault-id vault-password-wrong && :
WRONG_RC=$?
echo "rc was $WRONG_RC (1 is expected)"
[ $WRONG_RC -eq 1 ]
# with multiple wrong passwords with --vault-id
ansible-playbook test_vault.yml -i ../../inventory -v "$@" --vault-id wrong1@vault-password-wrong --vault-id wrong2@vault-password-wrong && :
WRONG_RC=$?
echo "rc was $WRONG_RC (1 is expected)"
[ $WRONG_RC -eq 1 ]
# with empty password file
ansible-playbook test_vault.yml -i ../../inventory -v "$@" --vault-id empty@empty-password && :
WRONG_RC=$?
echo "rc was $WRONG_RC (1 is expected)"
[ $WRONG_RC -eq 1 ]
# test invalid format ala https://github.com/ansible/ansible/issues/28038
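# (the runs below are expected to fail; stderr is merged into stdout so grep
# can assert only on the diagnostic text)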
EXPECTED_ERROR='Vault format unhexlify error: Non-hexadecimal digit found'
ansible-playbook "$@" -i invalid_format/inventory --vault-id invalid_format/vault-secret invalid_format/broken-host-vars-tasks.yml 2>&1 | grep "${EXPECTED_ERROR}"
EXPECTED_ERROR='Vault format unhexlify error: Odd-length string'
ansible-playbook "$@" -i invalid_format/inventory --vault-id invalid_format/vault-secret invalid_format/broken-group-vars-tasks.yml 2>&1 | grep "${EXPECTED_ERROR}"
# Run playbook with vault file with unicode in filename (https://github.com/ansible/ansible/issues/50316)
ansible-playbook -i ../../inventory -v "$@" --vault-password-file vault-password test_utf8_value_in_filename.yml
# Ensure we don't leave unencrypted temp files dangling
ansible-playbook -v "$@" --vault-password-file vault-password test_dangling_temp.yml
ansible-playbook "$@" --vault-password-file vault-password single_vault_as_string.yml
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,765 |
`set_fact` only runs on first host with `delegate_to` and `when`
|
### Summary
When using `set_fact` in combination with `delegate_to`, `delegate_facts` and `when`, it seems to actually run only on the first host:
```yaml
- hosts: localhost
gather_facts: no
connection: local
tasks:
- set_fact:
test: 123
delegate_to: "{{ item }}"
delegate_facts: true
when: test is not defined
loop: "{{ groups['all'] }}"
# note: the following will STILL fail!
#loop: "{{ groups['all'] | difference(['localhost']) }}"
```
The problem only arises when setting the variable that is actually used in the `when` clause. If I had to guess, I assume the `when` is checked in the wrong loop iteration. Setting a fact that is not used in `when` will work fine.
#20508 seems to be related in some regard but was fixed some time ago.
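For contrast, a minimal sketch (with a hypothetical fact name) where the fact being set differs from the variable tested in `when`; here every loop item runs as expected:
```yaml
- hosts: localhost
  gather_facts: no
  connection: local
  tasks:
    - set_fact:
        other_test: 123           # not referenced by the 'when' below
      delegate_to: "{{ item }}"
      delegate_facts: true
      when: test is not defined   # 'test' stays undefined for every item
      loop: "{{ groups['all'] }}"
```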
### Issue Type
Bug Report
### Component Name
set_fact
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.5]
config file = /Users/hojerst/.ansible.cfg
configured module search path = ['/Users/hojerst/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/hojerst/.pyenv/versions/3.9.4/lib/python3.9/site-packages/ansible
ansible collection location = /Users/hojerst/.ansible/collections:/usr/share/ansible/collections
executable location = /Users/hojerst/.pyenv/versions/3.9.4/bin/ansible
python version = 3.9.4 (default, Apr 11 2021, 17:03:42) [Clang 12.0.0 (clang-1200.0.32.29)]
jinja version = 2.11.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
DEFAULT_VAULT_PASSWORD_FILE(/Users/hojerst/.ansible.cfg) = /Users/hojerst/.ansible/vault-pass
```
### OS / Environment
MacOS 12.0 Beta (21A5506j)
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: localhost
gather_facts: no
connection: local
tasks:
- set_fact:
test: 123
delegate_to: "{{ item }}"
delegate_facts: true
when: test is not defined
loop: "{{ groups['all'] | difference(['localhost']) }}"
- debug:
var: test
```
```sh
ansible-playbook playbook.yml -i host1,host2,host3
```
### Expected Results
```console
PLAY [localhost] **************************************************************************************************************
TASK [set_fact] ***************************************************************************************************************
ok: [localhost -> host1] => (item=host1)
ok: [localhost -> host2] => (item=host2)
ok: [localhost -> host3] => (item=host3)
PLAY RECAP ********************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Actual Results
```console
PLAY [localhost] **************************************************************************************************************
TASK [set_fact] ***************************************************************************************************************
ok: [localhost -> host1] => (item=host1)
skipping: [localhost] => (item=host2)
skipping: [localhost] => (item=host3)
TASK [debug] ******************************************************************************************************************
ok: [localhost] => {
"test": "VARIABLE IS NOT DEFINED!"
}
PLAY RECAP ********************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75765
|
https://github.com/ansible/ansible/pull/75768
|
8bbecc7cac96203fedf99953a06eb32d74c7a4d7
|
7bec19606172e67ac4dbe5d10a6853b01b22ca8c
| 2021-09-23T14:57:58Z |
python
| 2021-11-01T15:25:10Z |
changelogs/fragments/set_fact_delegation.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,765 |
`set_fact` only runs on first host with `delegate_to` and `when`
|
### Summary
When using `set_fact` in combination with `delegate_to`, `delegate_facts` and `when`, it seems to actually run only on the first host:
```yaml
- hosts: localhost
gather_facts: no
connection: local
tasks:
- set_fact:
test: 123
delegate_to: "{{ item }}"
delegate_facts: true
when: test is not defined
loop: "{{ groups['all'] }}"
# note: the following will STILL fail!
#loop: "{{ groups['all'] | difference(['localhost']) }}"
```
The problem only arises when setting the variable that is actually used in the `when` clause. If I had to guess, I assume the `when` is checked in the wrong loop iteration. Setting a fact that is not used in `when` will work fine.
#20508 seems to be related in some regard but was fixed some time ago.
### Issue Type
Bug Report
### Component Name
set_fact
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.5]
config file = /Users/hojerst/.ansible.cfg
configured module search path = ['/Users/hojerst/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/hojerst/.pyenv/versions/3.9.4/lib/python3.9/site-packages/ansible
ansible collection location = /Users/hojerst/.ansible/collections:/usr/share/ansible/collections
executable location = /Users/hojerst/.pyenv/versions/3.9.4/bin/ansible
python version = 3.9.4 (default, Apr 11 2021, 17:03:42) [Clang 12.0.0 (clang-1200.0.32.29)]
jinja version = 2.11.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
DEFAULT_VAULT_PASSWORD_FILE(/Users/hojerst/.ansible.cfg) = /Users/hojerst/.ansible/vault-pass
```
### OS / Environment
MacOS 12.0 Beta (21A5506j)
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: localhost
gather_facts: no
connection: local
tasks:
- set_fact:
test: 123
delegate_to: "{{ item }}"
delegate_facts: true
when: test is not defined
loop: "{{ groups['all'] | difference(['localhost']) }}"
- debug:
var: test
```
```sh
ansible-playbook playbook.yml -i host1,host2,host3
```
### Expected Results
```console
PLAY [localhost] **************************************************************************************************************
TASK [set_fact] ***************************************************************************************************************
ok: [localhost -> host1] => (item=host1)
ok: [localhost -> host2] => (item=host2)
ok: [localhost -> host3] => (item=host3)
PLAY RECAP ********************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Actual Results
```console
PLAY [localhost] **************************************************************************************************************
TASK [set_fact] ***************************************************************************************************************
ok: [localhost -> host1] => (item=host1)
skipping: [localhost] => (item=host2)
skipping: [localhost] => (item=host3)
TASK [debug] ******************************************************************************************************************
ok: [localhost] => {
"test": "VARIABLE IS NOT DEFINED!"
}
PLAY RECAP ********************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75765
|
https://github.com/ansible/ansible/pull/75768
|
8bbecc7cac96203fedf99953a06eb32d74c7a4d7
|
7bec19606172e67ac4dbe5d10a6853b01b22ca8c
| 2021-09-23T14:57:58Z |
python
| 2021-11-01T15:25:10Z |
lib/ansible/executor/task_executor.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import pty
import time
import json
import signal
import subprocess
import sys
import termios
import traceback
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleParserError, AnsibleUndefinedVariable, AnsibleConnectionFailure, AnsibleActionFail, AnsibleActionSkip
from ansible.executor.task_result import TaskResult
from ansible.executor.module_common import get_action_args_with_defaults
from ansible.module_utils.parsing.convert_bool import boolean
from ansible.module_utils.six import binary_type
from ansible.module_utils._text import to_text, to_native
from ansible.module_utils.connection import write_to_file_descriptor
from ansible.playbook.conditional import Conditional
from ansible.playbook.task import Task
from ansible.plugins.loader import become_loader, cliconf_loader, connection_loader, httpapi_loader, netconf_loader, terminal_loader
from ansible.template import Templar
from ansible.utils.collection_loader import AnsibleCollectionConfig
from ansible.utils.listify import listify_lookup_plugin_terms
from ansible.utils.unsafe_proxy import to_unsafe_text, wrap_var
from ansible.vars.clean import namespace_facts, clean_facts
from ansible.utils.display import Display
from ansible.utils.vars import combine_vars, isidentifier
display = Display()
RETURN_VARS = [x for x in C.MAGIC_VARIABLE_MAPPING.items() if 'become' not in x and '_pass' not in x]
__all__ = ['TaskExecutor']
class TaskTimeoutError(BaseException):
pass
def task_timeout(signum, frame):
raise TaskTimeoutError
def remove_omit(task_args, omit_token):
'''
Recursively remove args whose value equals the ``omit_token``,
including inside nested dicts and lists, to account for suboptions
in the argument_spec
'''
if not isinstance(task_args, dict):
return task_args
new_args = {}
for i in task_args.items():
if i[1] == omit_token:
continue
elif isinstance(i[1], dict):
new_args[i[0]] = remove_omit(i[1], omit_token)
elif isinstance(i[1], list):
new_args[i[0]] = [remove_omit(v, omit_token) for v in i[1]]
else:
new_args[i[0]] = i[1]
return new_args
class TaskExecutor:
'''
This is the main worker class for the executor pipeline, which
handles loading an action plugin to actually dispatch the task to
a given host. This class roughly corresponds to the old Runner()
class.
'''
def __init__(self, host, task, job_vars, play_context, new_stdin, loader, shared_loader_obj, final_q):
self._host = host
self._task = task
self._job_vars = job_vars
self._play_context = play_context
self._new_stdin = new_stdin
self._loader = loader
self._shared_loader_obj = shared_loader_obj
self._connection = None
self._final_q = final_q
self._loop_eval_error = None
self._task.squash()
def run(self):
'''
The main executor entrypoint, where we determine if the specified
task requires looping and either runs the task with self._run_loop()
or self._execute(). After that, the returned results are parsed and
returned as a dict.
'''
display.debug("in run() - task %s" % self._task._uuid)
try:
try:
items = self._get_loop_items()
except AnsibleUndefinedVariable as e:
# save the error raised here for use later
items = None
self._loop_eval_error = e
if items is not None:
if len(items) > 0:
item_results = self._run_loop(items)
# create the overall result item
res = dict(results=item_results)
# loop through the item results and set the global changed/failed/skipped result flags based on any item.
res['skipped'] = True
for item in item_results:
if 'changed' in item and item['changed'] and not res.get('changed'):
res['changed'] = True
if res['skipped'] and ('skipped' not in item or ('skipped' in item and not item['skipped'])):
res['skipped'] = False
if 'failed' in item and item['failed']:
item_ignore = item.pop('_ansible_ignore_errors')
if not res.get('failed'):
res['failed'] = True
res['msg'] = 'One or more items failed'
self._task.ignore_errors = item_ignore
elif self._task.ignore_errors and not item_ignore:
self._task.ignore_errors = item_ignore
# ensure to accumulate these
for array in ['warnings', 'deprecations']:
if array in item and item[array]:
if array not in res:
res[array] = []
if not isinstance(item[array], list):
item[array] = [item[array]]
res[array] = res[array] + item[array]
del item[array]
if not res.get('failed', False):
res['msg'] = 'All items completed'
if res['skipped']:
res['msg'] = 'All items skipped'
else:
res = dict(changed=False, skipped=True, skipped_reason='No items in the list', results=[])
else:
display.debug("calling self._execute()")
res = self._execute()
display.debug("_execute() done")
# make sure changed is set in the result, if it's not present
if 'changed' not in res:
res['changed'] = False
def _clean_res(res, errors='surrogate_or_strict'):
if isinstance(res, binary_type):
return to_unsafe_text(res, errors=errors)
elif isinstance(res, dict):
for k in res:
try:
res[k] = _clean_res(res[k], errors=errors)
except UnicodeError:
if k == 'diff':
# If this is a diff, substitute a replacement character if the value
# is undecodable as utf8. (Fix #21804)
display.warning("We were unable to decode all characters in the module return data."
" Replaced some in an effort to return as much as possible")
res[k] = _clean_res(res[k], errors='surrogate_then_replace')
else:
raise
elif isinstance(res, list):
for idx, item in enumerate(res):
res[idx] = _clean_res(item, errors=errors)
return res
display.debug("dumping result to json")
res = _clean_res(res)
display.debug("done dumping result, returning")
return res
except AnsibleError as e:
return dict(failed=True, msg=wrap_var(to_text(e, nonstring='simplerepr')), _ansible_no_log=self._play_context.no_log)
except Exception as e:
return dict(failed=True, msg='Unexpected failure during module execution.', exception=to_text(traceback.format_exc()),
stdout='', _ansible_no_log=self._play_context.no_log)
finally:
try:
self._connection.close()
except AttributeError:
pass
except Exception as e:
display.debug(u"error closing connection: %s" % to_text(e))
def _get_loop_items(self):
'''
Loads a lookup plugin to handle the with_* portion of a task (if specified),
and returns the items result.
'''
# get search path for this task to pass to lookup plugins
self._job_vars['ansible_search_path'] = self._task.get_search_path()
# ensure basedir is always in (dwim already searches here but we need to display it)
if self._loader.get_basedir() not in self._job_vars['ansible_search_path']:
self._job_vars['ansible_search_path'].append(self._loader.get_basedir())
templar = Templar(loader=self._loader, variables=self._job_vars)
items = None
loop_cache = self._job_vars.get('_ansible_loop_cache')
if loop_cache is not None:
# _ansible_loop_cache may be set in `get_vars` when calculating `delegate_to`
# to avoid reprocessing the loop
items = loop_cache
elif self._task.loop_with:
if self._task.loop_with in self._shared_loader_obj.lookup_loader:
fail = True
if self._task.loop_with == 'first_found':
# first_found loops are special. If the item is undefined then we want to fall through to the next value rather than failing.
fail = False
loop_terms = listify_lookup_plugin_terms(terms=self._task.loop, templar=templar, loader=self._loader, fail_on_undefined=fail,
convert_bare=False)
if not fail:
loop_terms = [t for t in loop_terms if not templar.is_template(t)]
# get lookup
mylookup = self._shared_loader_obj.lookup_loader.get(self._task.loop_with, loader=self._loader, templar=templar)
# give lookup task 'context' for subdir (mostly needed for first_found)
for subdir in ['template', 'var', 'file']: # TODO: move this to constants?
if subdir in self._task.action:
break
setattr(mylookup, '_subdir', subdir + 's')
# run lookup
items = wrap_var(mylookup.run(terms=loop_terms, variables=self._job_vars, wantlist=True))
else:
raise AnsibleError("Unexpected failure in finding the lookup named '%s' in the available lookup plugins" % self._task.loop_with)
elif self._task.loop is not None:
items = templar.template(self._task.loop)
if not isinstance(items, list):
raise AnsibleError(
"Invalid data passed to 'loop', it requires a list, got this instead: %s."
" Hint: If you passed a list/dict of just one element,"
" try adding wantlist=True to your lookup invocation or use q/query instead of lookup." % items
)
return items
def _run_loop(self, items):
'''
Runs the task with the loop items specified and collates the result
into an array named 'results' which is inserted into the final result
along with the item for which the loop ran.
'''
results = []
# make copies of the job vars and task so we can add the item to
# the variables and re-validate the task with the item variable
# task_vars = self._job_vars.copy()
task_vars = self._job_vars
loop_var = 'item'
index_var = None
label = None
loop_pause = 0
extended = False
templar = Templar(loader=self._loader, variables=self._job_vars)
# FIXME: move this to the object itself to allow post_validate to take care of templating (loop_control.post_validate)
if self._task.loop_control:
loop_var = templar.template(self._task.loop_control.loop_var)
index_var = templar.template(self._task.loop_control.index_var)
loop_pause = templar.template(self._task.loop_control.pause)
extended = templar.template(self._task.loop_control.extended)
# This may be 'None', so it is templated below after we ensure a value and an item is assigned
label = self._task.loop_control.label
# ensure we always have a label
if label is None:
label = '{{' + loop_var + '}}'
if loop_var in task_vars:
display.warning(u"%s: The loop variable '%s' is already in use. "
u"You should set the `loop_var` value in the `loop_control` option for the task"
u" to something else to avoid variable collisions and unexpected behavior." % (self._task, loop_var))
ran_once = False
no_log = False
items_len = len(items)
for item_index, item in enumerate(items):
task_vars['ansible_loop_var'] = loop_var
task_vars[loop_var] = item
if index_var:
task_vars['ansible_index_var'] = index_var
task_vars[index_var] = item_index
if extended:
task_vars['ansible_loop'] = {
'allitems': items,
'index': item_index + 1,
'index0': item_index,
'first': item_index == 0,
'last': item_index + 1 == items_len,
'length': items_len,
'revindex': items_len - item_index,
'revindex0': items_len - item_index - 1,
}
try:
task_vars['ansible_loop']['nextitem'] = items[item_index + 1]
except IndexError:
pass
if item_index - 1 >= 0:
task_vars['ansible_loop']['previtem'] = items[item_index - 1]
# Update template vars to reflect current loop iteration
templar.available_variables = task_vars
# pause between loop iterations
if loop_pause and ran_once:
try:
time.sleep(float(loop_pause))
except ValueError as e:
raise AnsibleError('Invalid pause value: %s, produced error: %s' % (loop_pause, to_native(e)))
else:
ran_once = True
try:
tmp_task = self._task.copy(exclude_parent=True, exclude_tasks=True)
tmp_task._parent = self._task._parent
tmp_play_context = self._play_context.copy()
except AnsibleParserError as e:
results.append(dict(failed=True, msg=to_text(e)))
continue
# now we swap the internal task and play context with their copies,
# execute, and swap them back so we can do the next iteration cleanly
(self._task, tmp_task) = (tmp_task, self._task)
(self._play_context, tmp_play_context) = (tmp_play_context, self._play_context)
res = self._execute(variables=task_vars)
task_fields = self._task.dump_attrs()
(self._task, tmp_task) = (tmp_task, self._task)
(self._play_context, tmp_play_context) = (tmp_play_context, self._play_context)
# update 'general no_log' based on specific no_log
no_log = no_log or tmp_task.no_log
# now update the result with the item info, and append the result
# to the list of results
res[loop_var] = item
res['ansible_loop_var'] = loop_var
if index_var:
res[index_var] = item_index
res['ansible_index_var'] = index_var
if extended:
res['ansible_loop'] = task_vars['ansible_loop']
res['_ansible_item_result'] = True
res['_ansible_ignore_errors'] = task_fields.get('ignore_errors')
# gets templated here unlike rest of loop_control fields, depends on loop_var above
try:
res['_ansible_item_label'] = templar.template(label, cache=False)
except AnsibleUndefinedVariable as e:
res.update({
'failed': True,
'msg': 'Failed to template loop_control.label: %s' % to_text(e)
})
tr = TaskResult(
self._host.name,
self._task._uuid,
res,
task_fields=task_fields,
)
if tr.is_failed() or tr.is_unreachable():
self._final_q.send_callback('v2_runner_item_on_failed', tr)
elif tr.is_skipped():
self._final_q.send_callback('v2_runner_item_on_skipped', tr)
else:
if getattr(self._task, 'diff', False):
self._final_q.send_callback('v2_on_file_diff', tr)
self._final_q.send_callback('v2_runner_item_on_ok', tr)
results.append(res)
del task_vars[loop_var]
# clear 'connection related' plugin variables for next iteration
if self._connection:
clear_plugins = {
'connection': self._connection._load_name,
'shell': self._connection._shell._load_name
}
if self._connection.become:
clear_plugins['become'] = self._connection.become._load_name
for plugin_type, plugin_name in clear_plugins.items():
for var in C.config.get_plugin_vars(plugin_type, plugin_name):
if var in task_vars and var not in self._job_vars:
del task_vars[var]
self._task.no_log = no_log
return results
def _execute(self, variables=None):
'''
The primary workhorse of the executor system, this runs the task
on the specified host (which may be the delegated_to host) and handles
the retry/until and block rescue/always execution
'''
if variables is None:
variables = self._job_vars
templar = Templar(loader=self._loader, variables=variables)
context_validation_error = None
try:
# TODO: remove play_context as this does not take delegation into account, task itself should hold values
# for connection/shell/become/terminal plugin options to finalize.
# Kept for now for backwards compatibility and a few functions that are still exclusive to it.
# apply the given task's information to the connection info,
# which may override some fields already set by the play or
# the options specified on the command line
self._play_context = self._play_context.set_task_and_variable_override(task=self._task, variables=variables, templar=templar)
# fields set from the play/task may be based on variables, so we have to
# do the same kind of post validation step on it here before we use it.
self._play_context.post_validate(templar=templar)
# now that the play context is finalized, if the remote_addr is not set
# default to using the host's address field as the remote address
if not self._play_context.remote_addr:
self._play_context.remote_addr = self._host.address
# We also add "magic" variables back into the variables dict to make sure
# a certain subset of variables exist.
self._play_context.update_vars(variables)
except AnsibleError as e:
# save the error, which we'll raise later if we don't end up
# skipping this task during the conditional evaluation step
context_validation_error = e
# Evaluate the conditional (if any) for this task, which we do before running
# the final task post-validation. We do this before the post validation due to
# the fact that the conditional may specify that the task be skipped due to a
# variable not being present which would otherwise cause validation to fail
try:
if not self._task.evaluate_conditional(templar, variables):
display.debug("when evaluation is False, skipping this task")
return dict(changed=False, skipped=True, skip_reason='Conditional result was False', _ansible_no_log=self._play_context.no_log)
except AnsibleError as e:
# loop error takes precedence
if self._loop_eval_error is not None:
# Display the error from the conditional as well to prevent
# losing information useful for debugging.
display.v(to_text(e))
raise self._loop_eval_error # pylint: disable=raising-bad-type
raise
# Not skipping, if we had loop error raised earlier we need to raise it now to halt the execution of this task
if self._loop_eval_error is not None:
raise self._loop_eval_error # pylint: disable=raising-bad-type
# if we ran into an error while setting up the PlayContext, raise it now, unless is known issue with delegation
if context_validation_error is not None and not (self._task.delegate_to and isinstance(context_validation_error, AnsibleUndefinedVariable)):
raise context_validation_error # pylint: disable=raising-bad-type
# if this task is a TaskInclude, we just return now with a success code so the
# main thread can expand the task list for the given host
if self._task.action in C._ACTION_ALL_INCLUDE_TASKS:
include_args = self._task.args.copy()
include_file = include_args.pop('_raw_params', None)
if not include_file:
return dict(failed=True, msg="No include file was specified to the include")
include_file = templar.template(include_file)
return dict(include=include_file, include_args=include_args)
# if this task is a IncludeRole, we just return now with a success code so the main thread can expand the task list for the given host
elif self._task.action in C._ACTION_INCLUDE_ROLE:
include_args = self._task.args.copy()
return dict(include_args=include_args)
# Now we do final validation on the task, which sets all fields to their final values.
try:
self._task.post_validate(templar=templar)
except AnsibleError:
raise
except Exception:
return dict(changed=False, failed=True, _ansible_no_log=self._play_context.no_log, exception=to_text(traceback.format_exc()))
if '_variable_params' in self._task.args:
variable_params = self._task.args.pop('_variable_params')
if isinstance(variable_params, dict):
if C.INJECT_FACTS_AS_VARS:
display.warning("Using a variable for a task's 'args' is unsafe in some situations "
"(see https://docs.ansible.com/ansible/devel/reference_appendices/faq.html#argsplat-unsafe)")
variable_params.update(self._task.args)
self._task.args = variable_params
if self._task.delegate_to:
# use vars from delegated host (which already include task vars) instead of original host
cvars = variables.get('ansible_delegated_vars', {}).get(self._task.delegate_to, {})
orig_vars = templar.available_variables
else:
# just use normal host vars
cvars = orig_vars = variables
templar.available_variables = cvars
# get the connection and the handler for this execution
if (not self._connection or
not getattr(self._connection, 'connected', False) or
self._play_context.remote_addr != self._connection._play_context.remote_addr):
self._connection = self._get_connection(cvars, templar)
else:
# if connection is reused, its _play_context is no longer valid and needs
# to be replaced with the one templated above, in case other data changed
self._connection._play_context = self._play_context
plugin_vars = self._set_connection_options(cvars, templar)
templar.available_variables = orig_vars
# TODO: eventually remove this block as this should be a 'consequence' of 'forced_local' modules
# special handling for python interpreter for network_os, default to ansible python unless overridden
if 'ansible_network_os' in cvars and 'ansible_python_interpreter' not in cvars:
# this also avoids 'python discovery'
cvars['ansible_python_interpreter'] = sys.executable
# get handler
self._handler = self._get_action_handler(connection=self._connection, templar=templar)
# Apply default params for action/module, if present
self._task.args = get_action_args_with_defaults(
self._task.resolved_action, self._task.args, self._task.module_defaults, templar,
action_groups=self._task._parent._play._action_groups
)
# And filter out any fields which were set to default(omit), and got the omit token value
omit_token = variables.get('omit')
if omit_token is not None:
self._task.args = remove_omit(self._task.args, omit_token)
# Read some values from the task, so that we can modify them if need be
if self._task.until:
retries = self._task.retries
if retries is None:
retries = 3
elif retries <= 0:
retries = 1
else:
retries += 1
else:
retries = 1
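# note: with 'until' set, retries=None defaults to 3 total attempts, while a
# user-supplied value N yields N+1 attempts (the initial run plus N retries)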
delay = self._task.delay
if delay < 0:
delay = 1
# make a copy of the job vars here, in case we need to update them
# with the registered variable value later on when testing conditions
vars_copy = variables.copy()
display.debug("starting attempt loop")
result = None
for attempt in range(1, retries + 1):
display.debug("running the handler")
try:
if self._task.timeout:
old_sig = signal.signal(signal.SIGALRM, task_timeout)
signal.alarm(self._task.timeout)
result = self._handler.run(task_vars=variables)
except (AnsibleActionFail, AnsibleActionSkip) as e:
return e.result
except AnsibleConnectionFailure as e:
return dict(unreachable=True, msg=to_text(e))
except TaskTimeoutError as e:
msg = 'The %s action failed to execute in the expected time frame (%d) and was terminated' % (self._task.action, self._task.timeout)
return dict(failed=True, msg=msg)
finally:
if self._task.timeout:
signal.alarm(0)
old_sig = signal.signal(signal.SIGALRM, old_sig)
self._handler.cleanup()
display.debug("handler run complete")
# preserve no log
result["_ansible_no_log"] = self._play_context.no_log
if self._task.action not in C._ACTION_WITH_CLEAN_FACTS:
result = wrap_var(result)
# update the local copy of vars with the registered value, if specified,
# or any facts which may have been generated by the module execution
if self._task.register:
if not isidentifier(self._task.register):
raise AnsibleError("Invalid variable name in 'register' specified: '%s'" % self._task.register)
vars_copy[self._task.register] = result
if self._task.async_val > 0:
if self._task.poll > 0 and not result.get('skipped') and not result.get('failed'):
result = self._poll_async_result(result=result, templar=templar, task_vars=vars_copy)
if result.get('failed'):
self._final_q.send_callback(
'v2_runner_on_async_failed',
TaskResult(self._host.name,
self._task, # We send the full task here, because the controller knows nothing about it, the TE created it
result,
task_fields=self._task.dump_attrs()))
else:
self._final_q.send_callback(
'v2_runner_on_async_ok',
TaskResult(self._host.name,
self._task, # We send the full task here, because the controller knows nothing about it, the TE created it
result,
task_fields=self._task.dump_attrs()))
# ensure no log is preserved
result["_ansible_no_log"] = self._play_context.no_log
# helper methods for use below in evaluating changed/failed_when
def _evaluate_changed_when_result(result):
if self._task.changed_when is not None and self._task.changed_when:
cond = Conditional(loader=self._loader)
cond.when = self._task.changed_when
result['changed'] = cond.evaluate_conditional(templar, vars_copy)
def _evaluate_failed_when_result(result):
if self._task.failed_when:
cond = Conditional(loader=self._loader)
cond.when = self._task.failed_when
failed_when_result = cond.evaluate_conditional(templar, vars_copy)
result['failed_when_result'] = result['failed'] = failed_when_result
else:
failed_when_result = False
return failed_when_result
if 'ansible_facts' in result and self._task.action not in C._ACTION_DEBUG:
if self._task.action in C._ACTION_WITH_CLEAN_FACTS:
vars_copy.update(result['ansible_facts'])
else:
# TODO: cleaning of facts should eventually become part of taskresults instead of vars
af = wrap_var(result['ansible_facts'])
vars_copy['ansible_facts'] = combine_vars(vars_copy.get('ansible_facts', {}), namespace_facts(af))
if C.INJECT_FACTS_AS_VARS:
vars_copy.update(clean_facts(af))
# set the failed property if it was missing.
if 'failed' not in result:
# rc is here for backwards compatibility and modules that use it instead of 'failed'
if 'rc' in result and result['rc'] not in [0, "0"]:
result['failed'] = True
else:
result['failed'] = False
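            # e.g. a raw result of {'rc': 2, 'stdout': '...'} is marked
            # failed=True here even though the module never set 'failed'.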
# Make attempts and retries available early to allow their use in changed/failed_when
if self._task.until:
result['attempts'] = attempt
# set the changed property if it was missing.
if 'changed' not in result:
result['changed'] = False
if self._task.action not in C._ACTION_WITH_CLEAN_FACTS:
result = wrap_var(result)
# re-update the local copy of vars with the registered value, if specified,
# or any facts which may have been generated by the module execution
# This gives changed/failed_when access to additional recently modified
# attributes of result
if self._task.register:
vars_copy[self._task.register] = result
# if we didn't skip this task, use the helpers to evaluate the changed/
# failed_when properties
if 'skipped' not in result:
try:
condname = 'changed'
_evaluate_changed_when_result(result)
condname = 'failed'
_evaluate_failed_when_result(result)
except AnsibleError as e:
result['failed'] = True
result['%s_when_result' % condname] = to_text(e)
if retries > 1:
cond = Conditional(loader=self._loader)
cond.when = self._task.until
if cond.evaluate_conditional(templar, vars_copy):
break
else:
# no conditional check, or it failed, so sleep for the specified time
if attempt < retries:
result['_ansible_retry'] = True
result['retries'] = retries
display.debug('Retrying task, attempt %d of %d' % (attempt, retries))
self._final_q.send_callback(
'v2_runner_retry',
TaskResult(
self._host.name,
self._task._uuid,
result,
task_fields=self._task.dump_attrs()
)
)
time.sleep(delay)
self._handler = self._get_action_handler(connection=self._connection, templar=templar)
else:
if retries > 1:
# we ran out of attempts, so mark the result as failed
result['attempts'] = retries - 1
result['failed'] = True
if self._task.action not in C._ACTION_WITH_CLEAN_FACTS:
result = wrap_var(result)
# do the final update of the local variables here, for both registered
# values and any facts which may have been created
if self._task.register:
variables[self._task.register] = result
if 'ansible_facts' in result and self._task.action not in C._ACTION_DEBUG:
if self._task.action in C._ACTION_WITH_CLEAN_FACTS:
variables.update(result['ansible_facts'])
else:
# TODO: cleaning of facts should eventually become part of taskresults instead of vars
af = wrap_var(result['ansible_facts'])
variables['ansible_facts'] = combine_vars(variables.get('ansible_facts', {}), namespace_facts(af))
if C.INJECT_FACTS_AS_VARS:
variables.update(clean_facts(af))
# save the notification target in the result, if it was specified, as
# this task may be running in a loop in which case the notification
# may be item-specific, ie. "notify: service {{item}}"
if self._task.notify is not None:
result['_ansible_notify'] = self._task.notify
# add the delegated vars to the result, so we can reference them
# on the results side without having to do any further templating
        # also now add connection vars results when delegating
if self._task.delegate_to:
result["_ansible_delegated_vars"] = {'ansible_delegated_host': self._task.delegate_to}
for k in plugin_vars:
result["_ansible_delegated_vars"][k] = cvars.get(k)
# note: here for callbacks that rely on this info to display delegation
for requireshed in ('ansible_host', 'ansible_port', 'ansible_user', 'ansible_connection'):
if requireshed not in result["_ansible_delegated_vars"] and requireshed in cvars:
result["_ansible_delegated_vars"][requireshed] = cvars.get(requireshed)
# and return
display.debug("attempt loop complete, returning result")
return result
def _poll_async_result(self, result, templar, task_vars=None):
'''
Polls for the specified JID to be complete
'''
if task_vars is None:
task_vars = self._job_vars
async_jid = result.get('ansible_job_id')
if async_jid is None:
return dict(failed=True, msg="No job id was returned by the async task")
# Create a new pseudo-task to run the async_status module, and run
# that (with a sleep for "poll" seconds between each retry) until the
# async time limit is exceeded.
async_task = Task().load(dict(action='async_status jid=%s' % async_jid, environment=self._task.environment))
# FIXME: this is no longer the case, normal takes care of all, see if this can just be generalized
# Because this is an async task, the action handler is async. However,
# we need the 'normal' action handler for the status check, so get it
# now via the action_loader
async_handler = self._shared_loader_obj.action_loader.get(
'ansible.legacy.async_status',
task=async_task,
connection=self._connection,
play_context=self._play_context,
loader=self._loader,
templar=templar,
shared_loader_obj=self._shared_loader_obj,
)
time_left = self._task.async_val
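        # e.g. 'async: 60' with 'poll: 5' gives this loop up to 12 iterations,
        # each sleeping 'poll' seconds before checking the job status.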
while time_left > 0:
time.sleep(self._task.poll)
try:
async_result = async_handler.run(task_vars=task_vars)
# We do not bail out of the loop in cases where the failure
# is associated with a parsing error. The async_runner can
# have issues which result in a half-written/unparseable result
# file on disk, which manifests to the user as a timeout happening
# before it's time to timeout.
if (int(async_result.get('finished', 0)) == 1 or
('failed' in async_result and async_result.get('_ansible_parsed', False)) or
'skipped' in async_result):
break
except Exception as e:
# Connections can raise exceptions during polling (eg, network bounce, reboot); these should be non-fatal.
# On an exception, call the connection's reset method if it has one
# (eg, drop/recreate WinRM connection; some reused connections are in a broken state)
display.vvvv("Exception during async poll, retrying... (%s)" % to_text(e))
display.debug("Async poll exception was:\n%s" % to_text(traceback.format_exc()))
try:
async_handler._connection.reset()
except AttributeError:
pass
# Little hack to raise the exception if we've exhausted the timeout period
time_left -= self._task.poll
if time_left <= 0:
raise
else:
time_left -= self._task.poll
self._final_q.send_callback(
'v2_runner_on_async_poll',
TaskResult(
self._host.name,
async_task, # We send the full task here, because the controller knows nothing about it, the TE created it
async_result,
task_fields=self._task.dump_attrs(),
),
)
if int(async_result.get('finished', 0)) != 1:
if async_result.get('_ansible_parsed'):
return dict(failed=True, msg="async task did not complete within the requested time - %ss" % self._task.async_val, async_result=async_result)
else:
return dict(failed=True, msg="async task produced unparseable results", async_result=async_result)
else:
# If the async task finished, automatically cleanup the temporary
# status file left behind.
cleanup_task = Task().load(
{
'async_status': {
'jid': async_jid,
'mode': 'cleanup',
},
'environment': self._task.environment,
}
)
cleanup_handler = self._shared_loader_obj.action_loader.get(
'ansible.legacy.async_status',
task=cleanup_task,
connection=self._connection,
play_context=self._play_context,
loader=self._loader,
templar=templar,
shared_loader_obj=self._shared_loader_obj,
)
cleanup_handler.run(task_vars=task_vars)
cleanup_handler.cleanup(force=True)
async_handler.cleanup(force=True)
return async_result
def _get_become(self, name):
become = become_loader.get(name)
if not become:
raise AnsibleError("Invalid become method specified, could not find matching plugin: '%s'. "
"Use `ansible-doc -t become -l` to list available plugins." % name)
return become
def _get_connection(self, cvars, templar):
'''
Reads the connection property for the host, and returns the
correct connection object from the list of connection plugins
'''
        # use magic var if it exists, if not, let task inheritance do its thing.
if cvars.get('ansible_connection') is not None:
self._play_context.connection = templar.template(cvars['ansible_connection'])
else:
self._play_context.connection = self._task.connection
# TODO: play context has logic to update the connection for 'smart'
        # (default value, will choose between ssh and paramiko) and 'persistent'
# (really paramiko), eventually this should move to task object itself.
connection_name = self._play_context.connection
# load connection
conn_type = connection_name
connection, plugin_load_context = self._shared_loader_obj.connection_loader.get_with_context(
conn_type,
self._play_context,
self._new_stdin,
task_uuid=self._task._uuid,
ansible_playbook_pid=to_text(os.getppid())
)
if not connection:
raise AnsibleError("the connection plugin '%s' was not found" % conn_type)
# load become plugin if needed
if cvars.get('ansible_become') is not None:
become = boolean(templar.template(cvars['ansible_become']))
else:
become = self._task.become
if become:
if cvars.get('ansible_become_method'):
become_plugin = self._get_become(templar.template(cvars['ansible_become_method']))
else:
become_plugin = self._get_become(self._task.become_method)
try:
connection.set_become_plugin(become_plugin)
except AttributeError:
# Older connection plugin that does not support set_become_plugin
pass
if getattr(connection.become, 'require_tty', False) and not getattr(connection, 'has_tty', False):
raise AnsibleError(
"The '%s' connection does not provide a TTY which is required for the selected "
"become plugin: %s." % (conn_type, become_plugin.name)
)
# Backwards compat for connection plugins that don't support become plugins
# Just do this unconditionally for now, we could move it inside of the
# AttributeError above later
self._play_context.set_become_plugin(become_plugin.name)
# Also backwards compat call for those still using play_context
self._play_context.set_attributes_from_plugin(connection)
if any(((connection.supports_persistence and C.USE_PERSISTENT_CONNECTIONS), connection.force_persistence)):
self._play_context.timeout = connection.get_option('persistent_command_timeout')
display.vvvv('attempting to start connection', host=self._play_context.remote_addr)
display.vvvv('using connection plugin %s' % connection.transport, host=self._play_context.remote_addr)
options = self._get_persistent_connection_options(connection, cvars, templar)
socket_path = start_connection(self._play_context, options, self._task._uuid)
display.vvvv('local domain socket path is %s' % socket_path, host=self._play_context.remote_addr)
setattr(connection, '_socket_path', socket_path)
return connection
def _get_persistent_connection_options(self, connection, final_vars, templar):
option_vars = C.config.get_plugin_vars('connection', connection._load_name)
plugin = connection._sub_plugin
if plugin.get('type'):
option_vars.extend(C.config.get_plugin_vars(plugin['type'], plugin['name']))
options = {}
for k in option_vars:
if k in final_vars:
options[k] = templar.template(final_vars[k])
return options
def _set_plugin_options(self, plugin_type, variables, templar, task_keys):
try:
plugin = getattr(self._connection, '_%s' % plugin_type)
except AttributeError:
# Some plugins are assigned to private attrs, ``become`` is not
plugin = getattr(self._connection, plugin_type)
option_vars = C.config.get_plugin_vars(plugin_type, plugin._load_name)
options = {}
for k in option_vars:
if k in variables:
options[k] = templar.template(variables[k])
# TODO move to task method?
plugin.set_options(task_keys=task_keys, var_options=options)
return option_vars
def _set_connection_options(self, variables, templar):
# keep list of variable names possibly consumed
varnames = []
# grab list of usable vars for this plugin
option_vars = C.config.get_plugin_vars('connection', self._connection._load_name)
varnames.extend(option_vars)
# create dict of 'templated vars'
options = {'_extras': {}}
for k in option_vars:
if k in variables:
options[k] = templar.template(variables[k])
# add extras if plugin supports them
if getattr(self._connection, 'allow_extras', False):
for k in variables:
if k.startswith('ansible_%s_' % self._connection._load_name) and k not in options:
options['_extras'][k] = templar.template(variables[k])
task_keys = self._task.dump_attrs()
# The task_keys 'timeout' attr is the task's timeout, not the connection timeout.
# The connection timeout is threaded through the play_context for now.
task_keys['timeout'] = self._play_context.timeout
if self._play_context.password:
# The connection password is threaded through the play_context for
# now. This is something we ultimately want to avoid, but the first
# step is to get connection plugins pulling the password through the
# config system instead of directly accessing play_context.
task_keys['password'] = self._play_context.password
# Prevent task retries from overriding connection retries
        del task_keys['retries']
# set options with 'templated vars' specific to this plugin and dependent ones
self._connection.set_options(task_keys=task_keys, var_options=options)
varnames.extend(self._set_plugin_options('shell', variables, templar, task_keys))
if self._connection.become is not None:
if self._play_context.become_pass:
# FIXME: eventually remove from task and play_context, here for backwards compat
# keep out of play objects to avoid accidental disclosure, only become plugin should have
# The become pass is already in the play_context if given on
# the CLI (-K). Make the plugin aware of it in this case.
task_keys['become_pass'] = self._play_context.become_pass
varnames.extend(self._set_plugin_options('become', variables, templar, task_keys))
# FOR BACKWARDS COMPAT:
for option in ('become_user', 'become_flags', 'become_exe', 'become_pass'):
try:
setattr(self._play_context, option, self._connection.become.get_option(option))
except KeyError:
pass # some plugins don't support all base flags
self._play_context.prompt = self._connection.become.prompt
return varnames
def _get_action_handler(self, connection, templar):
'''
        Returns the correct action plugin to handle the requested task action
'''
module_collection, separator, module_name = self._task.action.rpartition(".")
module_prefix = module_name.split('_')[0]
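        # e.g. for the action 'cisco.ios.ios_config' this yields
        # module_collection='cisco.ios', module_name='ios_config' and
        # module_prefix='ios'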
if module_collection:
# For network modules, which look for one action plugin per platform, look for the
# action plugin in the same collection as the module by prefixing the action plugin
# with the same collection.
network_action = "{0}.{1}".format(module_collection, module_prefix)
else:
network_action = module_prefix
collections = self._task.collections
# let action plugin override module, fallback to 'normal' action plugin otherwise
if self._shared_loader_obj.action_loader.has_plugin(self._task.action, collection_list=collections):
handler_name = self._task.action
elif all((module_prefix in C.NETWORK_GROUP_MODULES, self._shared_loader_obj.action_loader.has_plugin(network_action, collection_list=collections))):
handler_name = network_action
display.vvvv("Using network group action {handler} for {action}".format(handler=handler_name,
action=self._task.action),
host=self._play_context.remote_addr)
else:
# use ansible.legacy.normal to allow (historic) local action_plugins/ override without collections search
handler_name = 'ansible.legacy.normal'
collections = None # until then, we don't want the task's collection list to be consulted; use the builtin
handler = self._shared_loader_obj.action_loader.get(
handler_name,
task=self._task,
connection=connection,
play_context=self._play_context,
loader=self._loader,
templar=templar,
shared_loader_obj=self._shared_loader_obj,
collection_list=collections
)
if not handler:
raise AnsibleError("the handler '%s' was not found" % handler_name)
return handler
def start_connection(play_context, variables, task_uuid):
'''
Starts the persistent connection
'''
candidate_paths = [C.ANSIBLE_CONNECTION_PATH or os.path.dirname(sys.argv[0])]
candidate_paths.extend(os.environ.get('PATH', '').split(os.pathsep))
for dirname in candidate_paths:
ansible_connection = os.path.join(dirname, 'ansible-connection')
if os.path.isfile(ansible_connection):
display.vvvv("Found ansible-connection at path {0}".format(ansible_connection))
break
else:
raise AnsibleError("Unable to find location of 'ansible-connection'. "
"Please set or check the value of ANSIBLE_CONNECTION_PATH")
env = os.environ.copy()
env.update({
# HACK; most of these paths may change during the controller's lifetime
# (eg, due to late dynamic role includes, multi-playbook execution), without a way
# to invalidate/update, ansible-connection won't always see the same plugins the controller
# can.
'ANSIBLE_BECOME_PLUGINS': become_loader.print_paths(),
'ANSIBLE_CLICONF_PLUGINS': cliconf_loader.print_paths(),
'ANSIBLE_COLLECTIONS_PATH': to_native(os.pathsep.join(AnsibleCollectionConfig.collection_paths)),
'ANSIBLE_CONNECTION_PLUGINS': connection_loader.print_paths(),
'ANSIBLE_HTTPAPI_PLUGINS': httpapi_loader.print_paths(),
'ANSIBLE_NETCONF_PLUGINS': netconf_loader.print_paths(),
'ANSIBLE_TERMINAL_PLUGINS': terminal_loader.print_paths(),
})
python = sys.executable
master, slave = pty.openpty()
p = subprocess.Popen(
[python, ansible_connection, to_text(os.getppid()), to_text(task_uuid)],
stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env
)
os.close(slave)
# We need to set the pty into noncanonical mode. This ensures that we
# can receive lines longer than 4095 characters (plus newline) without
# truncating.
old = termios.tcgetattr(master)
new = termios.tcgetattr(master)
new[3] = new[3] & ~termios.ICANON
try:
termios.tcsetattr(master, termios.TCSANOW, new)
write_to_file_descriptor(master, variables)
write_to_file_descriptor(master, play_context.serialize())
(stdout, stderr) = p.communicate()
finally:
termios.tcsetattr(master, termios.TCSANOW, old)
os.close(master)
if p.returncode == 0:
result = json.loads(to_text(stdout, errors='surrogate_then_replace'))
else:
try:
result = json.loads(to_text(stderr, errors='surrogate_then_replace'))
except getattr(json.decoder, 'JSONDecodeError', ValueError):
# JSONDecodeError only available on Python 3.5+
result = {'error': to_text(stderr, errors='surrogate_then_replace')}
if 'messages' in result:
for level, message in result['messages']:
if level == 'log':
display.display(message, log_only=True)
elif level in ('debug', 'v', 'vv', 'vvv', 'vvvv', 'vvvvv', 'vvvvvv'):
getattr(display, level)(message, host=play_context.remote_addr)
else:
if hasattr(display, level):
getattr(display, level)(message)
else:
display.vvvv(message, host=play_context.remote_addr)
if 'error' in result:
if play_context.verbosity > 2:
if result.get('exception'):
msg = "The full traceback is:\n" + result['exception']
display.display(msg, color=C.COLOR_ERROR)
raise AnsibleError(result['error'])
return result['socket_path']
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,765 |
`set_fact` only runs on first host with `delegate_to` and `when`
|
### Summary
When using `set_fact` in combination with `delegate_to`, `delegate_facts` and `when` it seems to actually be only run on the first host:
```yaml
- hosts: localhost
gather_facts: no
connection: local
tasks:
- set_fact:
test: 123
delegate_to: "{{ item }}"
delegate_facts: true
when: test is not defined
loop: "{{ groups['all'] }}"
# note: the following will STILL fail!
#loop: "{{ groups['all'] | difference(["localhost"]) }}"
```
The problem only arises when setting the variable that is actually used in the `when` clause. If I had to guess, I assume the `when` is checked in the wrong loop iteration. Setting a fact that is not used in `when` will work fine.
#20508 seems to be related in some regard but was fixed some time ago.
### Issue Type
Bug Report
### Component Name
set_fact
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.5]
config file = /Users/hojerst/.ansible.cfg
configured module search path = ['/Users/hojerst/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/hojerst/.pyenv/versions/3.9.4/lib/python3.9/site-packages/ansible
ansible collection location = /Users/hojerst/.ansible/collections:/usr/share/ansible/collections
executable location = /Users/hojerst/.pyenv/versions/3.9.4/bin/ansible
python version = 3.9.4 (default, Apr 11 2021, 17:03:42) [Clang 12.0.0 (clang-1200.0.32.29)]
jinja version = 2.11.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
DEFAULT_VAULT_PASSWORD_FILE(/Users/hojerst/.ansible.cfg) = /Users/hojerst/.ansible/vault-pass%
```
### OS / Environment
MacOS 12.0 Beta (21A5506j)
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: localhost
gather_facts: no
connection: local
tasks:
- set_fact:
test: 123
delegate_to: "{{ item }}"
delegate_facts: true
when: test is not defined
loop: "{{ groups['all'] | difference(['localhost']) }}"
- debug:
var: test
```
```sh
ansible-playbook playbook.yml -i host1,host2,host3
```
### Expected Results
PLAY [localhost] **************************************************************************************************************
TASK [set_fact] ***************************************************************************************************************
ok: [localhost -> host1] => (item=host1)
ok: [localhost -> host2] => (item=host2)
ok: [localhost -> host3] => (item=host3)
PLAY RECAP ********************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
### Actual Results
```console
PLAY [localhost] **************************************************************************************************************
TASK [set_fact] ***************************************************************************************************************
ok: [localhost -> host1] => (item=host1)
skipping: [localhost] => (item=host2)
skipping: [localhost] => (item=host3)
TASK [debug] ******************************************************************************************************************
ok: [localhost] => {
"test": "VARIABLE IS NOT DEFINED!"
}
PLAY RECAP ********************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75765
|
https://github.com/ansible/ansible/pull/75768
|
8bbecc7cac96203fedf99953a06eb32d74c7a4d7
|
7bec19606172e67ac4dbe5d10a6853b01b22ca8c
| 2021-09-23T14:57:58Z |
python
| 2021-11-01T15:25:10Z |
test/integration/targets/delegate_to/delegate_facts_loop.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,765 |
`set_fact` only runs on first host with `delegate_to` and `when`
|
|
https://github.com/ansible/ansible/issues/75765
|
https://github.com/ansible/ansible/pull/75768
|
8bbecc7cac96203fedf99953a06eb32d74c7a4d7
|
7bec19606172e67ac4dbe5d10a6853b01b22ca8c
| 2021-09-23T14:57:58Z |
python
| 2021-11-01T15:25:10Z |
test/integration/targets/delegate_to/runme.sh
|
#!/usr/bin/env bash
set -eux
platform="$(uname)"
function setup() {
if [[ "${platform}" == "FreeBSD" ]] || [[ "${platform}" == "Darwin" ]]; then
ifconfig lo0
existing=$(ifconfig lo0 | grep '^[[:blank:]]inet 127\.0\.0\. ' || true)
echo "${existing}"
for i in 3 4 254; do
ip="127.0.0.${i}"
if [[ "${existing}" != *"${ip}"* ]]; then
ifconfig lo0 alias "${ip}" up
fi
done
ifconfig lo0
fi
}
function teardown() {
if [[ "${platform}" == "FreeBSD" ]] || [[ "${platform}" == "Darwin" ]]; then
for i in 3 4 254; do
ip="127.0.0.${i}"
if [[ "${existing}" != *"${ip}"* ]]; then
ifconfig lo0 -alias "${ip}"
fi
done
ifconfig lo0
fi
}
setup
trap teardown EXIT
ANSIBLE_SSH_ARGS='-C -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null' \
ANSIBLE_HOST_KEY_CHECKING=false ansible-playbook test_delegate_to.yml -i inventory -v "$@"
# this test is not doing what it says it does, also relies on var that should not be available
#ansible-playbook test_loop_control.yml -v "$@"
ansible-playbook test_delegate_to_loop_randomness.yml -v "$@"
ansible-playbook delegate_and_nolog.yml -i inventory -v "$@"
ansible-playbook delegate_facts_block.yml -i inventory -v "$@"
ansible-playbook test_delegate_to_loop_caching.yml -i inventory -v "$@"
# ensure we are using correct settings when delegating
ANSIBLE_TIMEOUT=3 ansible-playbook delegate_vars_hanldling.yml -i inventory -v "$@"
ansible-playbook has_hostvars.yml -i inventory -v "$@"
# test ansible_x_interpreter
# python
source virtualenv.sh
(
cd "${OUTPUT_DIR}"/venv/bin
ln -s python firstpython
ln -s python secondpython
)
ansible-playbook verify_interpreter.yml -i inventory_interpreters -v "$@"
ansible-playbook discovery_applied.yml -i inventory -v "$@"
ansible-playbook resolve_vars.yml -i inventory -v "$@"
ansible-playbook test_delegate_to_lookup_context.yml -i inventory -v "$@"
ansible-playbook delegate_local_from_root.yml -i inventory -v "$@" -e 'ansible_user=root'
ansible-playbook delegate_with_fact_from_delegate_host.yml "$@"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,612 |
Unexpected exception when validating a role argspec when a dict with suboptions is expected but a str is given
|
### Summary
An exception is thrown when an argspec expecting a dict with suboptions is given a string.
This only occurs when the argument in the argspec has suboptions.
### Issue Type
Bug Report
### Component Name
module_utils/common/parameters.py
### Ansible Version
```console
ansible [core 2.11.4]
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
RHEL 8
### Steps to Reproduce
`meta/argument_specs.yml`
```yaml
---
argument_specs:
main:
options:
dict_option:
type: dict
options:
some_option:
type: str
```
Playbook:
```
---
- hosts: localhost
gather_facts: false
vars:
dict_option: this is a string
roles:
- test
```
### Expected Results
An exception should not be thrown.
### Actual Results
```console
The full traceback is:
Traceback (most recent call last):
File "/home/user/ansible4/lib/python3.8/site-packages/ansible/executor/task_executor.py", line 158, in run
res = self._execute()
File "/home/user/ansible4/lib/python3.8/site-packages/ansible/executor/task_executor.py", line 582, in _execute
result = self._handler.run(task_vars=variables)
File "/home/user/ansible4/lib/python3.8/site-packages/ansible/plugins/action/validate_argument_spec.py", line 83, in run
validation_result = validator.validate(provided_arguments)
File "/home/user/ansible4/lib/python3.8/site-packages/ansible/module_utils/common/arg_spec.py", line 244, in validate
_validate_sub_spec(self.argument_spec, result._validated_parameters,
File "/home/user/ansible4/lib/python3.8/site-packages/ansible/module_utils/common/parameters.py", line 761, in _validate_sub_spec
unsupported_parameters.update(_get_unsupported_parameters(sub_spec, sub_parameters, legal_inputs, options_context))
File "/home/user/ansible4/lib/python3.8/site-packages/ansible/module_utils/common/parameters.py", line 177, in _get_unsupported_parameters
for k in parameters.keys():
AttributeError: 'str' object has no attribute 'keys'
fatal: [localhost]: FAILED! => {
"msg": "Unexpected failure during module execution.",
"stdout": ""
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75612
|
https://github.com/ansible/ansible/pull/75635
|
7bec19606172e67ac4dbe5d10a6853b01b22ca8c
|
b5ed41edb34a387c560e172ee2928cc8ac9b4959
| 2021-09-01T06:26:06Z |
python
| 2021-11-01T16:06:49Z |
changelogs/fragments/75635-fix-role-argspec-suboption-validation.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,612 |
Unexpected exception when validating a role argspec when a dict with suboptions is expected but a str is given
|
|
https://github.com/ansible/ansible/issues/75612
|
https://github.com/ansible/ansible/pull/75635
|
7bec19606172e67ac4dbe5d10a6853b01b22ca8c
|
b5ed41edb34a387c560e172ee2928cc8ac9b4959
| 2021-09-01T06:26:06Z |
python
| 2021-11-01T16:06:49Z |
lib/ansible/module_utils/common/parameters.py
|
# -*- coding: utf-8 -*-
# Copyright (c) 2019 Ansible Project
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import datetime
import os
from collections import deque
from itertools import chain
from ansible.module_utils.common.collections import is_iterable
from ansible.module_utils.common.text.converters import to_bytes, to_native, to_text
from ansible.module_utils.common.text.formatters import lenient_lowercase
from ansible.module_utils.common.warnings import warn
from ansible.module_utils.errors import (
AliasError,
AnsibleFallbackNotFound,
AnsibleValidationErrorMultiple,
ArgumentTypeError,
ArgumentValueError,
ElementError,
MutuallyExclusiveError,
NoLogError,
RequiredByError,
RequiredError,
RequiredIfError,
RequiredOneOfError,
RequiredTogetherError,
SubParameterTypeError,
)
from ansible.module_utils.parsing.convert_bool import BOOLEANS_FALSE, BOOLEANS_TRUE
from ansible.module_utils.common._collections_compat import (
KeysView,
Set,
Sequence,
Mapping,
MutableMapping,
MutableSet,
MutableSequence,
)
from ansible.module_utils.six import (
binary_type,
integer_types,
string_types,
text_type,
PY2,
PY3,
)
from ansible.module_utils.common.validation import (
check_mutually_exclusive,
check_required_arguments,
check_required_together,
check_required_one_of,
check_required_if,
check_required_by,
check_type_bits,
check_type_bool,
check_type_bytes,
check_type_dict,
check_type_float,
check_type_int,
check_type_jsonarg,
check_type_list,
check_type_path,
check_type_raw,
check_type_str,
)
# Python2 & 3 way to get NoneType
NoneType = type(None)
_ADDITIONAL_CHECKS = (
{'func': check_required_together, 'attr': 'required_together', 'err': RequiredTogetherError},
{'func': check_required_one_of, 'attr': 'required_one_of', 'err': RequiredOneOfError},
{'func': check_required_if, 'attr': 'required_if', 'err': RequiredIfError},
{'func': check_required_by, 'attr': 'required_by', 'err': RequiredByError},
)
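# Each entry above pairs a validation helper with the argument_spec attribute
# it consumes; _validate_sub_spec() below applies all of them to every nested
# suboption block.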
# if adding boolean attribute, also add to PASS_BOOL
# some of this dupes defaults from controller config
PASS_VARS = {
'check_mode': ('check_mode', False),
'debug': ('_debug', False),
'diff': ('_diff', False),
'keep_remote_files': ('_keep_remote_files', False),
'module_name': ('_name', None),
'no_log': ('no_log', False),
'remote_tmp': ('_remote_tmp', None),
'selinux_special_fs': ('_selinux_special_fs', ['fuse', 'nfs', 'vboxsf', 'ramfs', '9p', 'vfat']),
'shell_executable': ('_shell', '/bin/sh'),
'socket': ('_socket_path', None),
'string_conversion_action': ('_string_conversion_action', 'warn'),
'syslog_facility': ('_syslog_facility', 'INFO'),
'tmpdir': ('_tmpdir', None),
'verbosity': ('_verbosity', 0),
'version': ('ansible_version', '0.0'),
}
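# Each PASS_VARS entry maps a controller-injected '_ansible_<name>' module
# parameter to the AnsibleModule attribute that stores it and that
# attribute's default value.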
PASS_BOOLS = ('check_mode', 'debug', 'diff', 'keep_remote_files', 'no_log')
DEFAULT_TYPE_VALIDATORS = {
'str': check_type_str,
'list': check_type_list,
'dict': check_type_dict,
'bool': check_type_bool,
'int': check_type_int,
'float': check_type_float,
'path': check_type_path,
'raw': check_type_raw,
'jsonarg': check_type_jsonarg,
'json': check_type_jsonarg,
'bytes': check_type_bytes,
'bits': check_type_bits,
}
def _get_type_validator(wanted):
"""Returns the callable used to validate a wanted type and the type name.
:arg wanted: String or callable. If a string, get the corresponding
validation function from DEFAULT_TYPE_VALIDATORS. If callable,
get the name of the custom callable and return that for the type_checker.
:returns: Tuple of callable function or None, and a string that is the name
of the wanted type.
"""
# Use one of our builtin validators.
if not callable(wanted):
if wanted is None:
# Default type for parameters
wanted = 'str'
type_checker = DEFAULT_TYPE_VALIDATORS.get(wanted)
# Use the custom callable for validation.
else:
type_checker = wanted
wanted = getattr(wanted, '__name__', to_native(type(wanted)))
return type_checker, wanted
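# For example, _get_type_validator('int') returns (check_type_int, 'int');
# a custom callable is returned as the checker itself, with its __name__
# used as the wanted type string.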
def _get_legal_inputs(argument_spec, parameters, aliases=None):
if aliases is None:
aliases = _handle_aliases(argument_spec, parameters)
return list(aliases.keys()) + list(argument_spec.keys())
def _get_unsupported_parameters(argument_spec, parameters, legal_inputs=None, options_context=None):
"""Check keys in parameters against those provided in legal_inputs
to ensure they contain legal values. If legal_inputs are not supplied,
they will be generated using the argument_spec.
:arg argument_spec: Dictionary of parameters, their type, and valid values.
:arg parameters: Dictionary of parameters.
    :arg legal_inputs: List of valid key property names. Overrides values
in argument_spec.
:arg options_context: List of parent keys for tracking the context of where
a parameter is defined.
:returns: Set of unsupported parameters. Empty set if no unsupported parameters
are found.
"""
if legal_inputs is None:
legal_inputs = _get_legal_inputs(argument_spec, parameters)
unsupported_parameters = set()
for k in parameters.keys():
if k not in legal_inputs:
context = k
if options_context:
context = tuple(options_context + [k])
unsupported_parameters.add(context)
return unsupported_parameters
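# For example, argument_spec {'name': {}} with parameters
# {'name': 'x', 'typo': 1} returns {'typo'}; with options_context ['opt']
# the entry becomes the tuple ('opt', 'typo') instead.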
def _handle_aliases(argument_spec, parameters, alias_warnings=None, alias_deprecations=None):
"""Process aliases from an argument_spec including warnings and deprecations.
Modify ``parameters`` by adding a new key for each alias with the supplied
value from ``parameters``.
If a list is provided to the alias_warnings parameter, it will be filled with tuples
(option, alias) in every case where both an option and its alias are specified.
If a list is provided to alias_deprecations, it will be populated with dictionaries,
each containing deprecation information for each alias found in argument_spec.
:param argument_spec: Dictionary of parameters, their type, and valid values.
:type argument_spec: dict
:param parameters: Dictionary of parameters.
:type parameters: dict
:param alias_warnings:
:type alias_warnings: list
:param alias_deprecations:
:type alias_deprecations: list
"""
aliases_results = {} # alias:canon
for (k, v) in argument_spec.items():
aliases = v.get('aliases', None)
default = v.get('default', None)
required = v.get('required', False)
if alias_deprecations is not None:
for alias in argument_spec[k].get('deprecated_aliases', []):
if alias.get('name') in parameters:
alias_deprecations.append(alias)
if default is not None and required:
# not alias specific but this is a good place to check this
raise ValueError("internal error: required and default are mutually exclusive for %s" % k)
if aliases is None:
continue
if not is_iterable(aliases) or isinstance(aliases, (binary_type, text_type)):
raise TypeError('internal error: aliases must be a list or tuple')
for alias in aliases:
aliases_results[alias] = k
if alias in parameters:
if k in parameters and alias_warnings is not None:
alias_warnings.append((k, alias))
parameters[k] = parameters[alias]
return aliases_results
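# For example, argument_spec {'path': {'aliases': ['dest']}} with parameters
# {'dest': '/tmp/x'} returns {'dest': 'path'} and copies the value, so
# parameters['path'] == '/tmp/x' afterwards.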
def _list_deprecations(argument_spec, parameters, prefix=''):
"""Return a list of deprecations
:arg argument_spec: An argument spec dictionary
:arg parameters: Dictionary of parameters
:returns: List of dictionaries containing a message and version in which
the deprecated parameter will be removed, or an empty list.
:Example return:
.. code-block:: python
[
{
'msg': "Param 'deptest' is deprecated. See the module docs for more information",
'version': '2.9'
}
]
"""
deprecations = []
for arg_name, arg_opts in argument_spec.items():
if arg_name in parameters:
if prefix:
sub_prefix = '%s["%s"]' % (prefix, arg_name)
else:
sub_prefix = arg_name
if arg_opts.get('removed_at_date') is not None:
deprecations.append({
'msg': "Param '%s' is deprecated. See the module docs for more information" % sub_prefix,
'date': arg_opts.get('removed_at_date'),
'collection_name': arg_opts.get('removed_from_collection'),
})
elif arg_opts.get('removed_in_version') is not None:
deprecations.append({
'msg': "Param '%s' is deprecated. See the module docs for more information" % sub_prefix,
'version': arg_opts.get('removed_in_version'),
'collection_name': arg_opts.get('removed_from_collection'),
})
# Check sub-argument spec
sub_argument_spec = arg_opts.get('options')
if sub_argument_spec is not None:
sub_arguments = parameters[arg_name]
if isinstance(sub_arguments, Mapping):
sub_arguments = [sub_arguments]
if isinstance(sub_arguments, list):
for sub_params in sub_arguments:
if isinstance(sub_params, Mapping):
deprecations.extend(_list_deprecations(sub_argument_spec, sub_params, prefix=sub_prefix))
return deprecations
def _list_no_log_values(argument_spec, params):
"""Return set of no log values
:arg argument_spec: An argument spec dictionary
:arg params: Dictionary of all parameters
:returns: :class:`set` of strings that should be hidden from output:
"""
no_log_values = set()
for arg_name, arg_opts in argument_spec.items():
if arg_opts.get('no_log', False):
# Find the value for the no_log'd param
no_log_object = params.get(arg_name, None)
if no_log_object:
try:
no_log_values.update(_return_datastructure_name(no_log_object))
except TypeError as e:
raise TypeError('Failed to convert "%s": %s' % (arg_name, to_native(e)))
# Get no_log values from suboptions
sub_argument_spec = arg_opts.get('options')
if sub_argument_spec is not None:
wanted_type = arg_opts.get('type')
sub_parameters = params.get(arg_name)
if sub_parameters is not None:
if wanted_type == 'dict' or (wanted_type == 'list' and arg_opts.get('elements', '') == 'dict'):
# Sub parameters can be a dict or list of dicts. Ensure parameters are always a list.
if not isinstance(sub_parameters, list):
sub_parameters = [sub_parameters]
for sub_param in sub_parameters:
# Validate dict fields in case they came in as strings
if isinstance(sub_param, string_types):
sub_param = check_type_dict(sub_param)
if not isinstance(sub_param, Mapping):
raise TypeError("Value '{1}' in the sub parameter field '{0}' must by a {2}, "
"not '{1.__class__.__name__}'".format(arg_name, sub_param, wanted_type))
no_log_values.update(_list_no_log_values(sub_argument_spec, sub_param))
return no_log_values
def _return_datastructure_name(obj):
""" Return native stringified values from datastructures.
For use with removing sensitive values pre-jsonification."""
if isinstance(obj, (text_type, binary_type)):
if obj:
yield to_native(obj, errors='surrogate_or_strict')
return
elif isinstance(obj, Mapping):
for element in obj.items():
for subelement in _return_datastructure_name(element[1]):
yield subelement
elif is_iterable(obj):
for element in obj:
for subelement in _return_datastructure_name(element):
yield subelement
elif isinstance(obj, (bool, NoneType)):
# This must come before int because bools are also ints
return
elif isinstance(obj, tuple(list(integer_types) + [float])):
yield to_native(obj, nonstring='simplerepr')
else:
raise TypeError('Unknown parameter type: %s' % (type(obj)))
def _remove_values_conditions(value, no_log_strings, deferred_removals):
"""
Helper function for :meth:`remove_values`.
:arg value: The value to check for strings that need to be stripped
:arg no_log_strings: set of strings which must be stripped out of any values
:arg deferred_removals: List which holds information about nested
containers that have to be iterated for removals. It is passed into
this function so that more entries can be added to it if value is
a container type. The format of each entry is a 2-tuple where the first
element is the ``value`` parameter and the second value is a new
container to copy the elements of ``value`` into once iterated.
:returns: if ``value`` is a scalar, returns ``value`` with two exceptions:
1. :class:`~datetime.datetime` objects which are changed into a string representation.
2. objects which are in ``no_log_strings`` are replaced with a placeholder
so that no sensitive data is leaked.
If ``value`` is a container type, returns a new empty container.
``deferred_removals`` is added to as a side-effect of this function.
.. warning:: It is up to the caller to make sure the order in which value
is passed in is correct. For instance, higher level containers need
to be passed in before lower level containers. For example, given
``{'level1': {'level2': 'level3': [True]} }`` first pass in the
dictionary for ``level1``, then the dict for ``level2``, and finally
the list for ``level3``.
"""
if isinstance(value, (text_type, binary_type)):
# Need native str type
native_str_value = value
if isinstance(value, text_type):
value_is_text = True
if PY2:
native_str_value = to_bytes(value, errors='surrogate_or_strict')
elif isinstance(value, binary_type):
value_is_text = False
if PY3:
native_str_value = to_text(value, errors='surrogate_or_strict')
if native_str_value in no_log_strings:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
for omit_me in no_log_strings:
native_str_value = native_str_value.replace(omit_me, '*' * 8)
if value_is_text and isinstance(native_str_value, binary_type):
value = to_text(native_str_value, encoding='utf-8', errors='surrogate_then_replace')
elif not value_is_text and isinstance(native_str_value, text_type):
value = to_bytes(native_str_value, encoding='utf-8', errors='surrogate_then_replace')
else:
value = native_str_value
elif isinstance(value, Sequence):
if isinstance(value, MutableSequence):
new_value = type(value)()
else:
new_value = [] # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, Set):
if isinstance(value, MutableSet):
new_value = type(value)()
else:
new_value = set() # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, Mapping):
if isinstance(value, MutableMapping):
new_value = type(value)()
else:
new_value = {} # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, tuple(chain(integer_types, (float, bool, NoneType)))):
stringy_value = to_native(value, encoding='utf-8', errors='surrogate_or_strict')
if stringy_value in no_log_strings:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
for omit_me in no_log_strings:
if omit_me in stringy_value:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
elif isinstance(value, (datetime.datetime, datetime.date)):
value = value.isoformat()
else:
raise TypeError('Value of unknown type: %s, %s' % (type(value), value))
return value
def _set_defaults(argument_spec, parameters, set_default=True):
"""Set default values for parameters when no value is supplied.
Modifies parameters directly.
:arg argument_spec: Argument spec
:type argument_spec: dict
:arg parameters: Parameters to evaluate
:type parameters: dict
:kwarg set_default: Whether or not to set the default values
:type set_default: bool
:returns: Set of strings that should not be logged.
:rtype: set
"""
no_log_values = set()
for param, value in argument_spec.items():
# TODO: Change the default value from None to Sentinel to differentiate between
# user supplied None and a default value set by this function.
default = value.get('default', None)
# This prevents setting defaults on required items on the 1st run,
# otherwise will set things without a default to None on the 2nd.
if param not in parameters and (default is not None or set_default):
# Make sure any default value for no_log fields are masked.
if value.get('no_log', False) and default:
no_log_values.add(default)
parameters[param] = default
return no_log_values
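# Note: _validate_sub_spec() below calls this twice per suboption block:
# first with set_default=False, so required options without defaults stay
# absent for the required-arguments check, then with set_default=True to
# fill in the remaining defaults.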
def _sanitize_keys_conditions(value, no_log_strings, ignore_keys, deferred_removals):
""" Helper method to :func:`sanitize_keys` to build ``deferred_removals`` and avoid deep recursion. """
if isinstance(value, (text_type, binary_type)):
return value
if isinstance(value, Sequence):
if isinstance(value, MutableSequence):
new_value = type(value)()
else:
new_value = [] # Need a mutable value
deferred_removals.append((value, new_value))
return new_value
if isinstance(value, Set):
if isinstance(value, MutableSet):
new_value = type(value)()
else:
new_value = set() # Need a mutable value
deferred_removals.append((value, new_value))
return new_value
if isinstance(value, Mapping):
if isinstance(value, MutableMapping):
new_value = type(value)()
else:
new_value = {} # Need a mutable value
deferred_removals.append((value, new_value))
return new_value
if isinstance(value, tuple(chain(integer_types, (float, bool, NoneType)))):
return value
if isinstance(value, (datetime.datetime, datetime.date)):
return value
raise TypeError('Value of unknown type: %s, %s' % (type(value), value))
def _validate_elements(wanted_type, parameter, values, options_context=None, errors=None):
if errors is None:
errors = AnsibleValidationErrorMultiple()
type_checker, wanted_element_type = _get_type_validator(wanted_type)
validated_parameters = []
# Get param name for strings so we can later display this value in a useful error message if needed
# Only pass 'kwargs' to our checkers and ignore custom callable checkers
kwargs = {}
if wanted_element_type == 'str' and isinstance(wanted_type, string_types):
if isinstance(parameter, string_types):
kwargs['param'] = parameter
elif isinstance(parameter, dict):
kwargs['param'] = list(parameter.keys())[0]
for value in values:
try:
validated_parameters.append(type_checker(value, **kwargs))
except (TypeError, ValueError) as e:
msg = "Elements value for option '%s'" % parameter
if options_context:
msg += " found in '%s'" % " -> ".join(options_context)
msg += " is of type %s and we were unable to convert to %s: %s" % (type(value), wanted_element_type, to_native(e))
errors.append(ElementError(msg))
return validated_parameters
def _validate_argument_types(argument_spec, parameters, prefix='', options_context=None, errors=None):
"""Validate that parameter types match the type in the argument spec.
Determine the appropriate type checker function and run each
    parameter value through that function. All error messages from the type
    checker functions are collected in ``errors``. Values that fail to
    validate are left unconverted in ``parameters``.
:arg argument_spec: Argument spec
:type argument_spec: dict
:arg parameters: Parameters
:type parameters: dict
:kwarg prefix: Name of the parent key that contains the spec. Used in the error message
:type prefix: str
    :kwarg options_context: List of parent keys for tracking the context of
        where a parameter is defined.
    :type options_context: list
    :returns: None. ``parameters`` is updated in place with coerced values,
        and any errors encountered are appended to ``errors``.
"""
if errors is None:
errors = AnsibleValidationErrorMultiple()
for param, spec in argument_spec.items():
if param not in parameters:
continue
value = parameters[param]
if value is None:
continue
wanted_type = spec.get('type')
type_checker, wanted_name = _get_type_validator(wanted_type)
# Get param name for strings so we can later display this value in a useful error message if needed
# Only pass 'kwargs' to our checkers and ignore custom callable checkers
kwargs = {}
if wanted_name == 'str' and isinstance(wanted_type, string_types):
kwargs['param'] = list(parameters.keys())[0]
# Get the name of the parent key if this is a nested option
if prefix:
kwargs['prefix'] = prefix
try:
parameters[param] = type_checker(value, **kwargs)
elements_wanted_type = spec.get('elements', None)
if elements_wanted_type:
elements = parameters[param]
if wanted_type != 'list' or not isinstance(elements, list):
msg = "Invalid type %s for option '%s'" % (wanted_name, elements)
if options_context:
msg += " found in '%s'." % " -> ".join(options_context)
msg += ", elements value check is supported only with 'list' type"
errors.append(ArgumentTypeError(msg))
parameters[param] = _validate_elements(elements_wanted_type, param, elements, options_context, errors)
except (TypeError, ValueError) as e:
msg = "argument '%s' is of type %s" % (param, type(value))
if options_context:
msg += " found in '%s'." % " -> ".join(options_context)
msg += " and we were unable to convert to %s: %s" % (wanted_name, to_native(e))
errors.append(ArgumentTypeError(msg))
def _validate_argument_values(argument_spec, parameters, options_context=None, errors=None):
"""Ensure all arguments have the requested values, and there are no stray arguments"""
if errors is None:
errors = AnsibleValidationErrorMultiple()
for param, spec in argument_spec.items():
choices = spec.get('choices')
if choices is None:
continue
if isinstance(choices, (frozenset, KeysView, Sequence)) and not isinstance(choices, (binary_type, text_type)):
if param in parameters:
# Allow one or more when type='list' param with choices
if isinstance(parameters[param], list):
diff_list = ", ".join([item for item in parameters[param] if item not in choices])
if diff_list:
choices_str = ", ".join([to_native(c) for c in choices])
msg = "value of %s must be one or more of: %s. Got no match for: %s" % (param, choices_str, diff_list)
if options_context:
msg = "{0} found in {1}".format(msg, " -> ".join(options_context))
errors.append(ArgumentValueError(msg))
elif parameters[param] not in choices:
# PyYaml converts certain strings to bools. If we can unambiguously convert back, do so before checking
# the value. If we can't figure this out, module author is responsible.
lowered_choices = None
if parameters[param] == 'False':
lowered_choices = lenient_lowercase(choices)
overlap = BOOLEANS_FALSE.intersection(choices)
if len(overlap) == 1:
# Extract from a set
(parameters[param],) = overlap
if parameters[param] == 'True':
if lowered_choices is None:
lowered_choices = lenient_lowercase(choices)
overlap = BOOLEANS_TRUE.intersection(choices)
if len(overlap) == 1:
(parameters[param],) = overlap
if parameters[param] not in choices:
choices_str = ", ".join([to_native(c) for c in choices])
msg = "value of %s must be one of: %s, got: %s" % (param, choices_str, parameters[param])
if options_context:
msg = "{0} found in {1}".format(msg, " -> ".join(options_context))
errors.append(ArgumentValueError(msg))
else:
msg = "internal error: choices for argument %s are not iterable: %s" % (param, choices)
if options_context:
msg = "{0} found in {1}".format(msg, " -> ".join(options_context))
errors.append(ArgumentTypeError(msg))
def _validate_sub_spec(argument_spec, parameters, prefix='', options_context=None, errors=None, no_log_values=None, unsupported_parameters=None):
"""Validate sub argument spec.
This function is recursive.
"""
if options_context is None:
options_context = []
if errors is None:
errors = AnsibleValidationErrorMultiple()
if no_log_values is None:
no_log_values = set()
if unsupported_parameters is None:
unsupported_parameters = set()
for param, value in argument_spec.items():
wanted = value.get('type')
if wanted == 'dict' or (wanted == 'list' and value.get('elements', '') == 'dict'):
sub_spec = value.get('options')
if value.get('apply_defaults', False):
if sub_spec is not None:
if parameters.get(param) is None:
parameters[param] = {}
else:
continue
elif sub_spec is None or param not in parameters or parameters[param] is None:
continue
# Keep track of context for warning messages
options_context.append(param)
# Make sure we can iterate over the elements
if isinstance(parameters[param], dict):
elements = [parameters[param]]
else:
elements = parameters[param]
for idx, sub_parameters in enumerate(elements):
if not isinstance(sub_parameters, dict):
errors.append(SubParameterTypeError("value of '%s' must be of type dict or list of dicts" % param))
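            # NOTE: when sub_parameters is not a dict the error is recorded but
            # processing continues; the alias and unsupported-parameter checks
            # below assume a mapping, which is the AttributeError path shown in
            # the traceback for #75612 above.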
# Set prefix for warning messages
new_prefix = prefix + param
if wanted == 'list':
new_prefix += '[%d]' % idx
new_prefix += '.'
no_log_values.update(set_fallbacks(sub_spec, sub_parameters))
alias_warnings = []
alias_deprecations = []
try:
options_aliases = _handle_aliases(sub_spec, sub_parameters, alias_warnings, alias_deprecations)
except (TypeError, ValueError) as e:
options_aliases = {}
errors.append(AliasError(to_native(e)))
for option, alias in alias_warnings:
warn('Both option %s and its alias %s are set.' % (option, alias))
try:
no_log_values.update(_list_no_log_values(sub_spec, sub_parameters))
except TypeError as te:
errors.append(NoLogError(to_native(te)))
legal_inputs = _get_legal_inputs(sub_spec, sub_parameters, options_aliases)
unsupported_parameters.update(_get_unsupported_parameters(sub_spec, sub_parameters, legal_inputs, options_context))
try:
check_mutually_exclusive(value.get('mutually_exclusive'), sub_parameters, options_context)
except TypeError as e:
errors.append(MutuallyExclusiveError(to_native(e)))
no_log_values.update(_set_defaults(sub_spec, sub_parameters, False))
try:
check_required_arguments(sub_spec, sub_parameters, options_context)
except TypeError as e:
errors.append(RequiredError(to_native(e)))
_validate_argument_types(sub_spec, sub_parameters, new_prefix, options_context, errors=errors)
_validate_argument_values(sub_spec, sub_parameters, options_context, errors=errors)
for check in _ADDITIONAL_CHECKS:
try:
check['func'](value.get(check['attr']), sub_parameters, options_context)
except TypeError as e:
errors.append(check['err'](to_native(e)))
no_log_values.update(_set_defaults(sub_spec, sub_parameters))
# Handle nested specs
_validate_sub_spec(sub_spec, sub_parameters, new_prefix, options_context, errors, no_log_values, unsupported_parameters)
options_context.pop()
def env_fallback(*args, **kwargs):
"""Load value from environment variable"""
for arg in args:
if arg in os.environ:
return os.environ[arg]
raise AnsibleFallbackNotFound
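# Illustrative usage (hypothetical parameter name): an argument spec entry such
# as
#   api_key=dict(type='str', no_log=True, fallback=(env_fallback, ['API_KEY']))
# lets set_fallbacks() below populate the value from the API_KEY environment
# variable when the parameter is not supplied directly.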
def set_fallbacks(argument_spec, parameters):
no_log_values = set()
for param, value in argument_spec.items():
fallback = value.get('fallback', (None,))
fallback_strategy = fallback[0]
fallback_args = []
fallback_kwargs = {}
if param not in parameters and fallback_strategy is not None:
for item in fallback[1:]:
if isinstance(item, dict):
fallback_kwargs = item
else:
fallback_args = item
try:
fallback_value = fallback_strategy(*fallback_args, **fallback_kwargs)
except AnsibleFallbackNotFound:
continue
else:
if value.get('no_log', False) and fallback_value:
no_log_values.add(fallback_value)
parameters[param] = fallback_value
return no_log_values
def sanitize_keys(obj, no_log_strings, ignore_keys=frozenset()):
"""Sanitize the keys in a container object by removing ``no_log`` values from key names.
This is a companion function to the :func:`remove_values` function. Similar to that function,
we make use of ``deferred_removals`` to avoid hitting maximum recursion depth in cases of
large data structures.
:arg obj: The container object to sanitize. Non-container objects are returned unmodified.
:arg no_log_strings: A set of string values we do not want logged.
:kwarg ignore_keys: A set of string values of keys to not sanitize.
:returns: An object with sanitized keys.
"""
deferred_removals = deque()
no_log_strings = [to_native(s, errors='surrogate_or_strict') for s in no_log_strings]
new_value = _sanitize_keys_conditions(obj, no_log_strings, ignore_keys, deferred_removals)
while deferred_removals:
old_data, new_data = deferred_removals.popleft()
if isinstance(new_data, Mapping):
for old_key, old_elem in old_data.items():
if old_key in ignore_keys or old_key.startswith('_ansible'):
new_data[old_key] = _sanitize_keys_conditions(old_elem, no_log_strings, ignore_keys, deferred_removals)
else:
# Sanitize the old key. We take advantage of the sanitizing code in
# _remove_values_conditions() rather than recreating it here.
new_key = _remove_values_conditions(old_key, no_log_strings, None)
new_data[new_key] = _sanitize_keys_conditions(old_elem, no_log_strings, ignore_keys, deferred_removals)
else:
for elem in old_data:
new_elem = _sanitize_keys_conditions(elem, no_log_strings, ignore_keys, deferred_removals)
if isinstance(new_data, MutableSequence):
new_data.append(new_elem)
elif isinstance(new_data, MutableSet):
new_data.add(new_elem)
else:
raise TypeError('Unknown container type encountered when removing private values from keys')
return new_value
def remove_values(value, no_log_strings):
"""Remove strings in ``no_log_strings`` from value.
If value is a container type, then remove a lot more.
Use of ``deferred_removals`` exists, rather than a pure recursive solution,
because of the potential to hit the maximum recursion depth when dealing with
large amounts of data (see `issue #24560 <https://github.com/ansible/ansible/issues/24560>`_).
"""
deferred_removals = deque()
no_log_strings = [to_native(s, errors='surrogate_or_strict') for s in no_log_strings]
new_value = _remove_values_conditions(value, no_log_strings, deferred_removals)
while deferred_removals:
old_data, new_data = deferred_removals.popleft()
if isinstance(new_data, Mapping):
for old_key, old_elem in old_data.items():
new_elem = _remove_values_conditions(old_elem, no_log_strings, deferred_removals)
new_data[old_key] = new_elem
else:
for elem in old_data:
new_elem = _remove_values_conditions(elem, no_log_strings, deferred_removals)
if isinstance(new_data, MutableSequence):
new_data.append(new_elem)
elif isinstance(new_data, MutableSet):
new_data.add(new_elem)
else:
raise TypeError('Unknown container type encountered when removing private values from output')
return new_value
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,612 |
Unexpected exception when validating a role argspec when a dict with suboptions is expected but a str is given
|
### Summary
An exception is thrown when an argspec expecting a dict with suboptions is given a string.
This only occurs when the argument in the argspec has suboptions.
### Issue Type
Bug Report
### Component Name
module_utils/common/parameters.py
### Ansible Version
```console
ansible [core 2.11.4]
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
RHEL 8
### Steps to Reproduce
`meta/argument_specs.yml`
```yaml
---
argument_specs:
main:
options:
dict_option:
type: dict
options:
some_option:
type: str
```
Playbook:
```yaml
---
- hosts: localhost
gather_facts: false
vars:
dict_option: this is a string
roles:
- test
```
### Expected Results
An exception should not be thrown.
### Actual Results
```console
The full traceback is:
Traceback (most recent call last):
File "/home/user/ansible4/lib/python3.8/site-packages/ansible/executor/task_executor.py", line 158, in run
res = self._execute()
File "/home/user/ansible4/lib/python3.8/site-packages/ansible/executor/task_executor.py", line 582, in _execute
result = self._handler.run(task_vars=variables)
File "/home/user/ansible4/lib/python3.8/site-packages/ansible/plugins/action/validate_argument_spec.py", line 83, in run
validation_result = validator.validate(provided_arguments)
File "/home/user/ansible4/lib/python3.8/site-packages/ansible/module_utils/common/arg_spec.py", line 244, in validate
_validate_sub_spec(self.argument_spec, result._validated_parameters,
File "/home/user/ansible4/lib/python3.8/site-packages/ansible/module_utils/common/parameters.py", line 761, in _validate_sub_spec
unsupported_parameters.update(_get_unsupported_parameters(sub_spec, sub_parameters, legal_inputs, options_context))
File "/home/user/ansible4/lib/python3.8/site-packages/ansible/module_utils/common/parameters.py", line 177, in _get_unsupported_parameters
for k in parameters.keys():
AttributeError: 'str' object has no attribute 'keys'
fatal: [localhost]: FAILED! => {
"msg": "Unexpected failure during module execution.",
"stdout": ""
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75612
|
https://github.com/ansible/ansible/pull/75635
|
7bec19606172e67ac4dbe5d10a6853b01b22ca8c
|
b5ed41edb34a387c560e172ee2928cc8ac9b4959
| 2021-09-01T06:26:06Z |
python
| 2021-11-01T16:06:49Z |
test/integration/targets/roles_arg_spec/roles/test1/meta/argument_specs.yml
|
---
argument_specs:
main:
short_description: "EXPECTED FAILURE Validate the argument spec for the 'test1' role"
options:
test1_choices:
required: false
# required: true
choices:
- "this paddle game"
- "the astray"
- "this remote control"
- "the chair"
type: "str"
default: "this paddle game"
tidy_expected:
# required: false
# default: none
type: "list"
test1_var1:
# required: true
default: "THIS IS THE DEFAULT SURVEY ANSWER FOR test1_survey_test1_var1"
type: "str"
test1_var2:
required: false
default: "This IS THE DEFAULT fake band name / test1_var2 answer from survey_spec.yml"
type: "str"
bust_some_stuff:
# required: false
type: "int"
some_choices:
choices:
- "choice1"
- "choice2"
required: false
type: "str"
some_str:
type: "str"
some_list:
type: "list"
elements: "float"
some_dict:
type: "dict"
some_bool:
type: "bool"
some_int:
type: "int"
some_float:
type: "float"
some_path:
type: "path"
some_raw:
type: "raw"
some_jsonarg:
type: "jsonarg"
required: true
some_json:
type: "json"
required: true
some_bytes:
type: "bytes"
some_bits:
type: "bits"
some_str_aliases:
type: "str"
aliases:
- "some_str_nicknames"
- "some_str_akas"
- "some_str_formerly_known_as"
some_dict_options:
type: "dict"
options:
some_second_level:
type: "bool"
default: true
some_str_removed_in:
type: "str"
removed_in: 2.10
some_tmp_path:
type: "path"
multi_level_option:
type: "dict"
options:
second_level:
type: "dict"
options:
third_level:
type: "int"
required: true
other:
short_description: "test1_simple_preset_arg_spec_other"
description: "A simpler set of required args for other tasks"
options:
test1_var1:
default: "This the default value for the other set of arg specs for test1 test1_var1"
type: "str"
test1_other:
description: "test1_other for role_that_includes_role"
options:
some_test1_other_arg:
default: "The some_test1_other_arg default value"
type: str
some_required_str:
type: str
required: true
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,612 |
Unexpected exception when validating a role argspec when a dict with suboptions is expected but a str is given
|
### Summary
An exception is thrown when an argspec expecting a dict with suboptions is given a string.
This only occurs when the argument in the argspec has suboptions.
### Issue Type
Bug Report
### Component Name
module_utils/common/parameters.py
### Ansible Version
```console
ansible [core 2.11.4]
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
RHEL 8
### Steps to Reproduce
`meta/argument_specs.yml`
```yaml
---
argument_specs:
main:
options:
dict_option:
type: dict
options:
some_option:
type: str
```
Playbook:
```yaml
---
- hosts: localhost
gather_facts: false
vars:
dict_option: this is a string
roles:
- test
```
### Expected Results
An exception should not be thrown.
### Actual Results
```console
The full traceback is:
Traceback (most recent call last):
File "/home/user/ansible4/lib/python3.8/site-packages/ansible/executor/task_executor.py", line 158, in run
res = self._execute()
File "/home/user/ansible4/lib/python3.8/site-packages/ansible/executor/task_executor.py", line 582, in _execute
result = self._handler.run(task_vars=variables)
File "/home/user/ansible4/lib/python3.8/site-packages/ansible/plugins/action/validate_argument_spec.py", line 83, in run
validation_result = validator.validate(provided_arguments)
File "/home/user/ansible4/lib/python3.8/site-packages/ansible/module_utils/common/arg_spec.py", line 244, in validate
_validate_sub_spec(self.argument_spec, result._validated_parameters,
File "/home/user/ansible4/lib/python3.8/site-packages/ansible/module_utils/common/parameters.py", line 761, in _validate_sub_spec
unsupported_parameters.update(_get_unsupported_parameters(sub_spec, sub_parameters, legal_inputs, options_context))
File "/home/user/ansible4/lib/python3.8/site-packages/ansible/module_utils/common/parameters.py", line 177, in _get_unsupported_parameters
for k in parameters.keys():
AttributeError: 'str' object has no attribute 'keys'
fatal: [localhost]: FAILED! => {
"msg": "Unexpected failure during module execution.",
"stdout": ""
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75612
|
https://github.com/ansible/ansible/pull/75635
|
7bec19606172e67ac4dbe5d10a6853b01b22ca8c
|
b5ed41edb34a387c560e172ee2928cc8ac9b4959
| 2021-09-01T06:26:06Z |
python
| 2021-11-01T16:06:49Z |
test/integration/targets/roles_arg_spec/test_complex_role_fails.yml
|
---
- name: "Running include_role test1"
hosts: localhost
gather_facts: false
vars:
ansible_unicode_type_match: "<type 'ansible.parsing.yaml.objects.AnsibleUnicode'>"
unicode_type_match: "<type 'unicode'>"
string_type_match: "<type 'str'>"
float_type_match: "<type 'float'>"
list_type_match: "<type 'list'>"
ansible_list_type_match: "<type 'ansible.parsing.yaml.objects.AnsibleSequence'>"
dict_type_match: "<type 'dict'>"
ansible_dict_type_match: "<type 'ansible.parsing.yaml.objects.AnsibleMapping'>"
ansible_unicode_class_match: "<class 'ansible.parsing.yaml.objects.AnsibleUnicode'>"
unicode_class_match: "<class 'unicode'>"
string_class_match: "<class 'str'>"
bytes_class_match: "<class 'bytes'>"
float_class_match: "<class 'float'>"
list_class_match: "<class 'list'>"
ansible_list_class_match: "<class 'ansible.parsing.yaml.objects.AnsibleSequence'>"
dict_class_match: "<class 'dict'>"
ansible_dict_class_match: "<class 'ansible.parsing.yaml.objects.AnsibleMapping'>"
expected:
test1_1:
argument_errors: [
"argument 'tidy_expected' is of type <class 'ansible.parsing.yaml.objects.AnsibleMapping'> and we were unable to convert to list: <class 'ansible.parsing.yaml.objects.AnsibleMapping'> cannot be converted to a list",
"argument 'bust_some_stuff' is of type <class 'str'> and we were unable to convert to int: <class 'str'> cannot be converted to an int",
"argument 'some_list' is of type <class 'ansible.parsing.yaml.objects.AnsibleMapping'> and we were unable to convert to list: <class 'ansible.parsing.yaml.objects.AnsibleMapping'> cannot be converted to a list",
"argument 'some_dict' is of type <class 'ansible.parsing.yaml.objects.AnsibleSequence'> and we were unable to convert to dict: <class 'ansible.parsing.yaml.objects.AnsibleSequence'> cannot be converted to a dict",
"argument 'some_int' is of type <class 'float'> and we were unable to convert to int: <class 'float'> cannot be converted to an int",
"argument 'some_float' is of type <class 'str'> and we were unable to convert to float: <class 'str'> cannot be converted to a float",
"argument 'some_bytes' is of type <class 'bytes'> and we were unable to convert to bytes: <class 'bytes'> cannot be converted to a Byte value",
"argument 'some_bits' is of type <class 'str'> and we were unable to convert to bits: <class 'str'> cannot be converted to a Bit value",
"value of test1_choices must be one of: this paddle game, the astray, this remote control, the chair, got: My dog",
"value of some_choices must be one of: choice1, choice2, got: choice4",
"argument 'some_second_level' is of type <class 'ansible.parsing.yaml.objects.AnsibleUnicode'> found in 'some_dict_options'. and we were unable to convert to bool: The value 'not-a-bool' is not a valid boolean. ",
"argument 'third_level' is of type <class 'ansible.parsing.yaml.objects.AnsibleUnicode'> found in 'multi_level_option -> second_level'. and we were unable to convert to int: <class 'ansible.parsing.yaml.objects.AnsibleUnicode'> cannot be converted to an int"
]
tasks:
- name: include_role test1 since it has a arg_spec.yml
block:
- include_role:
name: test1
vars:
tidy_expected:
some_key: some_value
test1_var1: 37.4
test1_choices: "My dog"
bust_some_stuff: "some_string_that_is_not_an_int"
some_choices: "choice4"
some_str: 37.5
some_list: {'a': false}
some_dict:
- "foo"
- "bar"
some_int: 37.
some_float: "notafloatisit"
some_path: "anything_is_a_valid_path"
some_raw: {"anything_can_be": "a_raw_type"}
# not sure what would be an invalid jsonarg
# some_jsonarg: "not sure what this does yet"
some_json: |
'{[1, 3, 3] 345345|45v<#!}'
some_jsonarg: |
{"foo": [1, 3, 3]}
# not sure we can load binary in safe_load
some_bytes: !!binary |
R0lGODlhDAAMAIQAAP//9/X17unp5WZmZgAAAOfn515eXvPz7Y6OjuDg4J+fn5
OTk6enp56enmlpaWNjY6Ojo4SEhP/++f/++f/++f/++f/++f/++f/++f/++f/+
+f/++f/++f/++f/++f/++SH+Dk1hZGUgd2l0aCBHSU1QACwAAAAADAAMAAAFLC
AgjoEwnuNAFOhpEMTRiggcz4BNJHrv/zCFcLiwMWYNG84BwwEeECcgggoBADs=
some_bits: "foo"
# some_str_nicknames: []
# some_str_akas: {}
some_str_removed_in: "foo"
some_dict_options:
some_second_level: "not-a-bool"
multi_level_option:
second_level:
third_level: "should_be_int"
- fail:
msg: "Should not get here"
rescue:
- debug:
var: ansible_failed_result
- name: replace py version specific types with generic names so tests work on py2 and py3
set_fact:
# We want to compare if the actual failure messages and the expected failure messages
# are the same. But to compare and do set differences, we have to handle some
# differences between py2/py3.
# The validation failure messages include python type and class reprs, which are
# different between py2 and py3. For ex, "<type 'str'>" vs "<class 'str'>". Plus
# the usual py2/py3 unicode/str/bytes type shenanigans. The 'THE_FLOAT_REPR' is
# because py3 quotes the value in the error while py2 does not, so we just ignore
# the rest of the line.
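        # For example, both "<type 'str'>" and "<class 'str'>" are normalized
        # to 'STR' before the comparison.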
actual_generic: "{{ ansible_failed_result.argument_errors|
map('replace', ansible_unicode_type_match, 'STR')|
map('replace', unicode_type_match, 'STR')|
map('replace', string_type_match, 'STR')|
map('replace', float_type_match, 'FLOAT')|
map('replace', list_type_match, 'LIST')|
map('replace', ansible_list_type_match, 'LIST')|
map('replace', dict_type_match, 'DICT')|
map('replace', ansible_dict_type_match, 'DICT')|
map('replace', ansible_unicode_class_match, 'STR')|
map('replace', unicode_class_match, 'STR')|
map('replace', string_class_match, 'STR')|
map('replace', bytes_class_match, 'STR')|
map('replace', float_class_match, 'FLOAT')|
map('replace', list_class_match, 'LIST')|
map('replace', ansible_list_class_match, 'LIST')|
map('replace', dict_class_match, 'DICT')|
map('replace', ansible_dict_class_match, 'DICT')|
map('regex_replace', '''float:.*$''', 'THE_FLOAT_REPR')|
map('regex_replace', 'Valid booleans include.*$', '')|
list }}"
expected_generic: "{{ expected.test1_1.argument_errors|
map('replace', ansible_unicode_type_match, 'STR')|
map('replace', unicode_type_match, 'STR')|
map('replace', string_type_match, 'STR')|
map('replace', float_type_match, 'FLOAT')|
map('replace', list_type_match, 'LIST')|
map('replace', ansible_list_type_match, 'LIST')|
map('replace', dict_type_match, 'DICT')|
map('replace', ansible_dict_type_match, 'DICT')|
map('replace', ansible_unicode_class_match, 'STR')|
map('replace', unicode_class_match, 'STR')|
map('replace', string_class_match, 'STR')|
map('replace', bytes_class_match, 'STR')|
map('replace', float_class_match, 'FLOAT')|
map('replace', list_class_match, 'LIST')|
map('replace', ansible_list_class_match, 'LIST')|
map('replace', dict_class_match, 'DICT')|
map('replace', ansible_dict_class_match, 'DICT')|
map('regex_replace', '''float:.*$''', 'THE_FLOAT_REPR')|
map('regex_replace', 'Valid booleans include.*$', '')|
list }}"
- name: figure out the difference between expected and actual validate_argument_spec failures
set_fact:
actual_not_in_expected: "{{ actual_generic| difference(expected_generic) | sort() }}"
expected_not_in_actual: "{{ expected_generic | difference(actual_generic) | sort() }}"
- name: assert that all actual validate_argument_spec failures were in expected
assert:
that:
- actual_not_in_expected | length == 0
msg: "Actual validate_argument_spec failures that were not expected: {{ actual_not_in_expected }}"
      - name: assert that all expected validate_argument_spec failures were in actual
assert:
that:
- expected_not_in_actual | length == 0
msg: "Expected validate_argument_spec failures that were not in actual results: {{ expected_not_in_actual }}"
- name: assert that `validate_args_context` return value has what we expect
assert:
that:
- ansible_failed_result.validate_args_context.argument_spec_name == "main"
- ansible_failed_result.validate_args_context.name == "test1"
- ansible_failed_result.validate_args_context.type == "role"
- "ansible_failed_result.validate_args_context.path is search('roles_arg_spec/roles/test1')"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,192 |
GPG Validation failed on DNF/YUM local RPM install.
|
### Summary
When I try to use the yum/dnf module to install an RPM from a local file, not from a repo, it fails with a GPG error.
`Failed to validate GPG signature for xyy.rpm`
I've created a simple example here, complete with a Vagrantfile, to make it easy to reproduce:
https://github.com/jeffmacdonald/ansible-yum-error
### Issue Type
Bug Report
### Component Name
dnf, yum
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.1]
config file = None
configured module search path = ['/Users/jeff.macdonald/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/jeff.macdonald/.pyenv/versions/3.9.5/envs/python3.9.5/lib/python3.9/site-packages/ansible
ansible collection location = /Users/jeff.macdonald/.ansible/collections:/usr/share/ansible/collections
executable location = /Users/jeff.macdonald/.pyenv/versions/python3.9.5/bin/ansible
python version = 3.9.5 (default, May 30 2021, 21:26:15) [Clang 12.0.5 (clang-1205.0.22.9)]
jinja version = 3.0.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
No output
```
### OS / Environment
Ansible is run from a Mac (latest) against Rocky Linux 8.3 targets
### Steps to Reproduce
https://github.com/jeffmacdonald/ansible-yum-error
```yaml
---
- name: Install Telegraf
hosts: all
tasks:
- name: install influx gpg key
become: yes
rpm_key:
key: "https://repos.influxdata.com/influxdb.key"
state: present
- name: download telegraf RPM
get_url:
url: https://dl.influxdata.com/telegraf/releases/telegraf-1.13.4-1.x86_64.rpm
dest: /tmp/telegraf-1.13.4-1.x86_64.rpm
- name: install telegraf packages
become: yes
yum:
name: /tmp/telegraf-1.13.4-1.x86_64.rpm
state: present
```
### Expected Results
I expect the RPM to install
### Actual Results
```console
TASK [install telegraf packages] ***********************************************
fatal: [node1]: FAILED! => {"changed": false, "msg": "Failed to validate GPG signature for telegraf-1.13.4-1.x86_64"}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76192
|
https://github.com/ansible/ansible/pull/76194
|
fdc33488595e01ed39ee1c9482ad11acda3d4a66
|
150faf25d3e9a23afde4e16e66281f1fb88b824f
| 2021-11-01T19:52:09Z |
python
| 2021-11-02T14:01:12Z |
changelogs/fragments/76192-dnf-specific-gpg-error.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,192 |
GPG Validation failed on DNF/YUM local RPM install.
|
### Summary
When I try to use the yum/dnf module to install an RPM from a local file, not from a repo, it fails with a GPG error.
`Failed to validate GPG signature for xyy.rpm`
I've created a simple example here, complete with a Vagrantfile, to make it easy to reproduce:
https://github.com/jeffmacdonald/ansible-yum-error
### Issue Type
Bug Report
### Component Name
dnf, yum
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.1]
config file = None
configured module search path = ['/Users/jeff.macdonald/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/jeff.macdonald/.pyenv/versions/3.9.5/envs/python3.9.5/lib/python3.9/site-packages/ansible
ansible collection location = /Users/jeff.macdonald/.ansible/collections:/usr/share/ansible/collections
executable location = /Users/jeff.macdonald/.pyenv/versions/python3.9.5/bin/ansible
python version = 3.9.5 (default, May 30 2021, 21:26:15) [Clang 12.0.5 (clang-1205.0.22.9)]
jinja version = 3.0.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
No output
```
### OS / Environment
Ansible is run from a Mac (latest) against Rocky Linux 8.3 targets
### Steps to Reproduce
https://github.com/jeffmacdonald/ansible-yum-error
```yaml
---
- name: Install Telegraf
hosts: all
tasks:
- name: install influx gpg key
become: yes
rpm_key:
key: "https://repos.influxdata.com/influxdb.key"
state: present
- name: download telegraf RPM
get_url:
url: https://dl.influxdata.com/telegraf/releases/telegraf-1.13.4-1.x86_64.rpm
dest: /tmp/telegraf-1.13.4-1.x86_64.rpm
- name: install telegraf packages
become: yes
yum:
name: /tmp/telegraf-1.13.4-1.x86_64.rpm
state: present
```
### Expected Results
I expect the RPM to install
### Actual Results
```console
TASK [install telegraf packages] ***********************************************
fatal: [node1]: FAILED! => {"changed": false, "msg": "Failed to validate GPG signature for telegraf-1.13.4-1.x86_64"}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76192
|
https://github.com/ansible/ansible/pull/76194
|
fdc33488595e01ed39ee1c9482ad11acda3d4a66
|
150faf25d3e9a23afde4e16e66281f1fb88b824f
| 2021-11-01T19:52:09Z |
python
| 2021-11-02T14:01:12Z |
lib/ansible/modules/dnf.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright 2015 Cristian van Ee <cristian at cvee.org>
# Copyright 2015 Igor Gnatenko <[email protected]>
# Copyright 2018 Adam Miller <[email protected]>
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: dnf
version_added: 1.9
short_description: Manages packages with the I(dnf) package manager
description:
  - Installs, upgrades, removes, and lists packages and groups with the I(dnf) package manager.
options:
name:
description:
- "A package name or package specifier with version, like C(name-1.0).
When using state=latest, this can be '*' which means run: dnf -y update.
You can also pass a url or a local path to a rpm file.
To operate on several packages this can accept a comma separated string of packages or a list of packages."
- Comparison operators for package version are valid here C(>), C(<), C(>=), C(<=). Example - C(name>=1.0)
- You can also pass an absolute path for a binary which is provided by the package to install.
See examples for more information.
required: true
aliases:
- pkg
type: list
elements: str
list:
description:
- Various (non-idempotent) commands for usage with C(/usr/bin/ansible) and I(not) playbooks. See examples.
type: str
state:
description:
- Whether to install (C(present), C(latest)), or remove (C(absent)) a package.
      - Default is C(None); however, in effect the default action is C(present) unless the C(autoremove) option is
        enabled for this module, in which case C(absent) is inferred.
choices: ['absent', 'present', 'installed', 'removed', 'latest']
type: str
enablerepo:
description:
- I(Repoid) of repositories to enable for the install/update operation.
These repos will not persist beyond the transaction.
When specifying multiple repos, separate them with a ",".
type: list
elements: str
disablerepo:
description:
- I(Repoid) of repositories to disable for the install/update operation.
These repos will not persist beyond the transaction.
When specifying multiple repos, separate them with a ",".
type: list
elements: str
conf_file:
description:
- The remote dnf configuration file to use for the transaction.
type: str
disable_gpg_check:
description:
- Whether to disable the GPG checking of signatures of packages being
installed. Has an effect only if state is I(present) or I(latest).
- This setting affects packages installed from a repository as well as
"local" packages installed from the filesystem or a URL.
type: bool
default: 'no'
installroot:
description:
- Specifies an alternative installroot, relative to which all packages
will be installed.
version_added: "2.3"
default: "/"
type: str
releasever:
description:
- Specifies an alternative release from which all packages will be
installed.
version_added: "2.6"
type: str
autoremove:
description:
- If C(yes), removes all "leaf" packages from the system that were originally
installed as dependencies of user-installed packages but which are no longer
required by any such package. Should be used alone or when state is I(absent)
type: bool
default: "no"
version_added: "2.4"
exclude:
description:
- Package name(s) to exclude when state=present, or latest. This can be a
list or a comma separated string.
version_added: "2.7"
type: list
elements: str
skip_broken:
description:
- Skip all unavailable packages or packages with broken dependencies
without raising an error. Equivalent to passing the --skip-broken option.
type: bool
default: "no"
version_added: "2.7"
update_cache:
description:
- Force dnf to check if cache is out of date and redownload if needed.
Has an effect only if state is I(present) or I(latest).
type: bool
default: "no"
aliases: [ expire-cache ]
version_added: "2.7"
update_only:
description:
- When using latest, only update installed packages. Do not install packages.
- Has an effect only if state is I(latest)
default: "no"
type: bool
version_added: "2.7"
security:
description:
- If set to C(yes), and C(state=latest) then only installs updates that have been marked security related.
- Note that, similar to ``dnf upgrade-minimal``, this filter applies to dependencies as well.
type: bool
default: "no"
version_added: "2.7"
bugfix:
description:
- If set to C(yes), and C(state=latest) then only installs updates that have been marked bugfix related.
- Note that, similar to ``dnf upgrade-minimal``, this filter applies to dependencies as well.
default: "no"
type: bool
version_added: "2.7"
enable_plugin:
description:
- I(Plugin) name to enable for the install/update operation.
The enabled plugin will not persist beyond the transaction.
version_added: "2.7"
type: list
elements: str
disable_plugin:
description:
- I(Plugin) name to disable for the install/update operation.
The disabled plugins will not persist beyond the transaction.
version_added: "2.7"
type: list
elements: str
disable_excludes:
description:
- Disable the excludes defined in DNF config files.
- If set to C(all), disables all excludes.
- If set to C(main), disable excludes defined in [main] in dnf.conf.
- If set to C(repoid), disable excludes defined for given repo id.
version_added: "2.7"
type: str
validate_certs:
description:
      - This only applies if using an https URL as the source of the rpm, e.g. for localinstall. If set to C(no), the SSL certificates will not be validated.
      - This should only be set to C(no) when used on personally controlled sites using self-signed certificates, as it avoids verifying the source site.
type: bool
default: "yes"
version_added: "2.7"
allow_downgrade:
description:
- Specify if the named package and version is allowed to downgrade
a maybe already installed higher version of that package.
Note that setting allow_downgrade=True can make this module
behave in a non-idempotent way. The task could end up with a set
of packages that does not match the complete list of specified
packages to install (because dependencies between the downgraded
package and others can cause changes to the packages which were
in the earlier transaction).
type: bool
default: "no"
version_added: "2.7"
install_repoquery:
description:
- This is effectively a no-op in DNF as it is not needed with DNF, but is an accepted parameter for feature
parity/compatibility with the I(yum) module.
type: bool
default: "yes"
version_added: "2.7"
download_only:
description:
- Only download the packages, do not install them.
default: "no"
type: bool
version_added: "2.7"
lock_timeout:
description:
- Amount of time to wait for the dnf lockfile to be freed.
required: false
default: 30
type: int
version_added: "2.8"
install_weak_deps:
description:
- Will also install all packages linked by a weak dependency relation.
type: bool
default: "yes"
version_added: "2.8"
download_dir:
description:
- Specifies an alternate directory to store packages.
- Has an effect only if I(download_only) is specified.
type: str
version_added: "2.8"
allowerasing:
description:
- If C(yes) it allows erasing of installed packages to resolve dependencies.
required: false
type: bool
default: "no"
version_added: "2.10"
nobest:
description:
- Set best option to False, so that transactions are not limited to best candidates only.
required: false
type: bool
default: "no"
version_added: "2.11"
cacheonly:
description:
- Tells dnf to run entirely from system cache; does not download or update metadata.
type: bool
default: "no"
version_added: "2.12"
extends_documentation_fragment:
- action_common_attributes
- action_common_attributes.flow
attributes:
action:
details: In the case of dnf, it has 2 action plugins that use it under the hood, M(yum) and M(package).
support: partial
async:
support: none
bypass_host_loop:
support: none
check_mode:
support: full
diff_mode:
support: full
platform:
platforms: rhel
notes:
  - When used with a `loop:`, each package will be processed individually; it is much more efficient to pass the list directly to the `name` option.
- Group removal doesn't work if the group was installed with Ansible because
upstream dnf's API doesn't properly mark groups as installed, therefore upon
removal the module is unable to detect that the group is installed
(https://bugzilla.redhat.com/show_bug.cgi?id=1620324)
requirements:
- "python >= 2.6"
- python-dnf
  - "for the autoremove option you need dnf >= 2.0.1"
author:
- Igor Gnatenko (@ignatenkobrain) <[email protected]>
- Cristian van Ee (@DJMuggs) <cristian at cvee.org>
- Berend De Schouwer (@berenddeschouwer)
- Adam Miller (@maxamillion) <[email protected]>
'''
EXAMPLES = '''
- name: Install the latest version of Apache
dnf:
name: httpd
state: latest
- name: Install Apache >= 2.4
dnf:
name: httpd>=2.4
state: present
- name: Install the latest version of Apache and MariaDB
dnf:
name:
- httpd
- mariadb-server
state: latest
- name: Remove the Apache package
dnf:
name: httpd
state: absent
- name: Install the latest version of Apache from the testing repo
dnf:
name: httpd
enablerepo: testing
state: present
- name: Upgrade all packages
dnf:
name: "*"
state: latest
- name: Install the nginx rpm from a remote repo
dnf:
name: 'http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm'
state: present
- name: Install nginx rpm from a local file
dnf:
name: /usr/local/src/nginx-release-centos-6-0.el6.ngx.noarch.rpm
state: present
- name: Install Package based upon the file it provides
dnf:
name: /usr/bin/cowsay
state: present
- name: Install the 'Development tools' package group
dnf:
name: '@Development tools'
state: present
- name: Autoremove unneeded packages installed as dependencies
dnf:
autoremove: yes
- name: Uninstall httpd but keep its dependencies
dnf:
name: httpd
state: absent
autoremove: no
- name: Install a modularity appstream with defined stream and profile
dnf:
name: '@postgresql:9.6/client'
state: present
- name: Install a modularity appstream with defined stream
dnf:
name: '@postgresql:9.6'
state: present
- name: Install a modularity appstream with defined profile
dnf:
name: '@postgresql/client'
state: present
'''
import os
import re
import sys
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.urls import fetch_file
from ansible.module_utils.six import PY2, text_type
from ansible.module_utils.compat.version import LooseVersion
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.common.respawn import has_respawned, probe_interpreters_for_module, respawn_module
from ansible.module_utils.yumdnf import YumDnf, yumdnf_argument_spec
# NOTE dnf Python bindings import is postponed, see DnfModule._ensure_dnf(),
# because we need AnsibleModule object to use get_best_parsable_locale()
# to set proper locale before importing dnf to be able to scrape
# the output in some cases (FIXME?).
dnf = None
class DnfModule(YumDnf):
"""
DNF Ansible module back-end implementation
"""
def __init__(self, module):
# This populates instance vars for all argument spec params
super(DnfModule, self).__init__(module)
self._ensure_dnf()
self.lockfile = "/var/cache/dnf/*_lock.pid"
self.pkg_mgr_name = "dnf"
try:
self.with_modules = dnf.base.WITH_MODULES
except AttributeError:
self.with_modules = False
# DNF specific args that are not part of YumDnf
self.allowerasing = self.module.params['allowerasing']
self.nobest = self.module.params['nobest']
def is_lockfile_pid_valid(self):
# FIXME? it looks like DNF takes care of invalid lock files itself?
# https://github.com/ansible/ansible/issues/57189
return True
def _sanitize_dnf_error_msg_install(self, spec, error):
"""
For unhandled dnf.exceptions.Error scenarios, there are certain error
messages we want to filter in an install scenario. Do that here.
"""
if (
to_text("no package matched") in to_text(error) or
to_text("No match for argument:") in to_text(error)
):
return "No package {0} available.".format(spec)
return error
def _sanitize_dnf_error_msg_remove(self, spec, error):
"""
For unhandled dnf.exceptions.Error scenarios, there are certain error
messages we want to ignore in a removal scenario as known benign
failures. Do that here.
"""
if (
'no package matched' in to_native(error) or
'No match for argument:' in to_native(error)
):
return (False, "{0} is not installed".format(spec))
# Return value is tuple of:
# ("Is this actually a failure?", "Error Message")
return (True, error)
def _package_dict(self, package):
"""Return a dictionary of information for the package."""
# NOTE: This no longer contains the 'dnfstate' field because it is
# already known based on the query type.
result = {
'name': package.name,
'arch': package.arch,
'epoch': str(package.epoch),
'release': package.release,
'version': package.version,
'repo': package.repoid}
# envra format for alignment with the yum module
result['envra'] = '{epoch}:{name}-{version}-{release}.{arch}'.format(**result)
# keep nevra key for backwards compat as it was previously
# defined with a value in envra format
result['nevra'] = result['envra']
if package.installtime == 0:
result['yumstate'] = 'available'
else:
result['yumstate'] = 'installed'
return result
def _split_package_arch(self, packagename):
# This list was auto generated on a Fedora 28 system with the following one-liner
# printf '[ '; for arch in $(ls /usr/lib/rpm/platform); do printf '"%s", ' ${arch%-linux}; done; printf ']\n'
redhat_rpm_arches = [
"aarch64", "alphaev56", "alphaev5", "alphaev67", "alphaev6", "alpha",
"alphapca56", "amd64", "armv3l", "armv4b", "armv4l", "armv5tejl", "armv5tel",
"armv5tl", "armv6hl", "armv6l", "armv7hl", "armv7hnl", "armv7l", "athlon",
"geode", "i386", "i486", "i586", "i686", "ia32e", "ia64", "m68k", "mips64el",
"mips64", "mips64r6el", "mips64r6", "mipsel", "mips", "mipsr6el", "mipsr6",
"noarch", "pentium3", "pentium4", "ppc32dy4", "ppc64iseries", "ppc64le", "ppc64",
"ppc64p7", "ppc64pseries", "ppc8260", "ppc8560", "ppciseries", "ppc", "ppcpseries",
"riscv64", "s390", "s390x", "sh3", "sh4a", "sh4", "sh", "sparc64", "sparc64v",
"sparc", "sparcv8", "sparcv9", "sparcv9v", "x86_64"
]
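        # Illustrative: 'httpd-2.4.6-93.el7.x86_64' splits into
        # ('httpd-2.4.6-93.el7', 'x86_64'), while a trailing component that is
        # not a known arch, e.g. 'python3.9', yields ('python3.9', None).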
name, delimiter, arch = packagename.rpartition('.')
if name and arch and arch in redhat_rpm_arches:
return name, arch
return packagename, None
def _packagename_dict(self, packagename):
"""
Return a dictionary of information for a package name string or None
if the package name doesn't contain at least all NVR elements
"""
if packagename[-4:] == '.rpm':
packagename = packagename[:-4]
rpm_nevr_re = re.compile(r'(\S+)-(?:(\d*):)?(.*)-(~?\w+[\w.+]*)')
try:
arch = None
nevr, arch = self._split_package_arch(packagename)
if arch:
packagename = nevr
rpm_nevr_match = rpm_nevr_re.match(packagename)
if rpm_nevr_match:
name, epoch, version, release = rpm_nevr_re.match(packagename).groups()
if not version or not version.split('.')[0].isdigit():
return None
else:
return None
except AttributeError as e:
self.module.fail_json(
msg='Error attempting to parse package: %s, %s' % (packagename, to_native(e)),
rc=1,
results=[]
)
if not epoch:
epoch = "0"
if ':' in name:
epoch_name = name.split(":")
epoch = epoch_name[0]
name = ''.join(epoch_name[1:])
result = {
'name': name,
'epoch': epoch,
'release': release,
'version': version,
}
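        # Illustrative: 'tree-1.8.0-10.fc32.x86_64' yields
        # {'name': 'tree', 'epoch': '0', 'version': '1.8.0', 'release': '10.fc32'}.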
return result
# Original implementation from yum.rpmUtils.miscutils (GPLv2+)
# http://yum.baseurl.org/gitweb?p=yum.git;a=blob;f=rpmUtils/miscutils.py
def _compare_evr(self, e1, v1, r1, e2, v2, r2):
# return 1: a is newer than b
# 0: a and b are the same version
# -1: b is newer than a
if e1 is None:
e1 = '0'
else:
e1 = str(e1)
v1 = str(v1)
r1 = str(r1)
if e2 is None:
e2 = '0'
else:
e2 = str(e2)
v2 = str(v2)
r2 = str(r2)
# print '%s, %s, %s vs %s, %s, %s' % (e1, v1, r1, e2, v2, r2)
rc = dnf.rpm.rpm.labelCompare((e1, v1, r1), (e2, v2, r2))
# print '%s, %s, %s vs %s, %s, %s = %s' % (e1, v1, r1, e2, v2, r2, rc)
return rc
def _ensure_dnf(self):
locale = get_best_parsable_locale(self.module)
os.environ['LC_ALL'] = os.environ['LC_MESSAGES'] = os.environ['LANG'] = locale
global dnf
try:
import dnf
import dnf.cli
import dnf.const
import dnf.exceptions
import dnf.subject
import dnf.util
HAS_DNF = True
except ImportError:
HAS_DNF = False
if HAS_DNF:
return
system_interpreters = ['/usr/libexec/platform-python',
'/usr/bin/python3',
'/usr/bin/python2',
'/usr/bin/python']
if not has_respawned():
# probe well-known system Python locations for accessible bindings, favoring py3
interpreter = probe_interpreters_for_module(system_interpreters, 'dnf')
if interpreter:
# respawn under the interpreter where the bindings should be found
respawn_module(interpreter)
# end of the line for this module, the process will exit here once the respawned module completes
# done all we can do, something is just broken (auto-install isn't useful anymore with respawn, so it was removed)
self.module.fail_json(
msg="Could not import the dnf python module using {0} ({1}). "
"Please install `python3-dnf` or `python2-dnf` package or ensure you have specified the "
"correct ansible_python_interpreter. (attempted {2})"
.format(sys.executable, sys.version.replace('\n', ''), system_interpreters),
results=[]
)
def _configure_base(self, base, conf_file, disable_gpg_check, installroot='/'):
"""Configure the dnf Base object."""
conf = base.conf
# Change the configuration file path if provided, this must be done before conf.read() is called
if conf_file:
# Fail if we can't read the configuration file.
if not os.access(conf_file, os.R_OK):
self.module.fail_json(
msg="cannot read configuration file", conf_file=conf_file,
results=[],
)
else:
conf.config_file_path = conf_file
# Read the configuration file
conf.read()
# Turn off debug messages in the output
conf.debuglevel = 0
# Set whether to check gpg signatures
conf.gpgcheck = not disable_gpg_check
conf.localpkg_gpgcheck = not disable_gpg_check
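        # NOTE: localpkg_gpgcheck means the same setting also governs "local"
        # rpms installed from the filesystem or a URL, as described for
        # disable_gpg_check above (the behaviour reported in #76192).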
# Don't prompt for user confirmations
conf.assumeyes = True
# Set installroot
conf.installroot = installroot
# Load substitutions from the filesystem
conf.substitutions.update_from_etc(installroot)
# Handle different DNF versions immutable mutable datatypes and
# dnf v1/v2/v3
#
# In DNF < 3.0 are lists, and modifying them works
# In DNF >= 3.0 < 3.6 are lists, but modifying them doesn't work
# In DNF >= 3.6 have been turned into tuples, to communicate that modifying them doesn't work
#
# https://www.happyassassin.net/2018/06/27/adams-debugging-adventures-the-immutable-mutable-object/
#
# Set excludes
if self.exclude:
_excludes = list(conf.exclude)
_excludes.extend(self.exclude)
conf.exclude = _excludes
# Set disable_excludes
if self.disable_excludes:
_disable_excludes = list(conf.disable_excludes)
if self.disable_excludes not in _disable_excludes:
_disable_excludes.append(self.disable_excludes)
conf.disable_excludes = _disable_excludes
# Set releasever
if self.releasever is not None:
conf.substitutions['releasever'] = self.releasever
# Set skip_broken (in dnf this is strict=0)
if self.skip_broken:
conf.strict = 0
# Set best
if self.nobest:
conf.best = 0
if self.download_only:
conf.downloadonly = True
if self.download_dir:
conf.destdir = self.download_dir
if self.cacheonly:
conf.cacheonly = True
# Default in dnf upstream is true
conf.clean_requirements_on_remove = self.autoremove
# Default in dnf (and module default) is True
conf.install_weak_deps = self.install_weak_deps
def _specify_repositories(self, base, disablerepo, enablerepo):
"""Enable and disable repositories matching the provided patterns."""
base.read_all_repos()
repos = base.repos
# Disable repositories
for repo_pattern in disablerepo:
if repo_pattern:
for repo in repos.get_matching(repo_pattern):
repo.disable()
# Enable repositories
for repo_pattern in enablerepo:
if repo_pattern:
for repo in repos.get_matching(repo_pattern):
repo.enable()
def _base(self, conf_file, disable_gpg_check, disablerepo, enablerepo, installroot):
"""Return a fully configured dnf Base object."""
base = dnf.Base()
self._configure_base(base, conf_file, disable_gpg_check, installroot)
try:
# this method has been supported in dnf-4.2.17-6 or later
# https://bugzilla.redhat.com/show_bug.cgi?id=1788212
base.setup_loggers()
except AttributeError:
pass
try:
base.init_plugins(set(self.disable_plugin), set(self.enable_plugin))
base.pre_configure_plugins()
except AttributeError:
pass # older versions of dnf didn't require this and don't have these methods
self._specify_repositories(base, disablerepo, enablerepo)
try:
base.configure_plugins()
except AttributeError:
pass # older versions of dnf didn't require this and don't have these methods
try:
if self.update_cache:
try:
base.update_cache()
except dnf.exceptions.RepoError as e:
self.module.fail_json(
msg="{0}".format(to_text(e)),
results=[],
rc=1
)
base.fill_sack(load_system_repo='auto')
except dnf.exceptions.RepoError as e:
self.module.fail_json(
msg="{0}".format(to_text(e)),
results=[],
rc=1
)
add_security_filters = getattr(base, "add_security_filters", None)
if callable(add_security_filters):
filters = {}
if self.bugfix:
filters.setdefault('types', []).append('bugfix')
if self.security:
filters.setdefault('types', []).append('security')
if filters:
add_security_filters('eq', **filters)
else:
filters = []
if self.bugfix:
key = {'advisory_type__eq': 'bugfix'}
filters.append(base.sack.query().upgrades().filter(**key))
if self.security:
key = {'advisory_type__eq': 'security'}
filters.append(base.sack.query().upgrades().filter(**key))
if filters:
base._update_security_filters = filters
return base
def list_items(self, command):
"""List package info based on the command."""
# Rename updates to upgrades
if command == 'updates':
command = 'upgrades'
# Return the corresponding packages
if command in ['installed', 'upgrades', 'available']:
results = [
self._package_dict(package)
for package in getattr(self.base.sack.query(), command)()]
# Return the enabled repository ids
elif command in ['repos', 'repositories']:
results = [
{'repoid': repo.id, 'state': 'enabled'}
for repo in self.base.repos.iter_enabled()]
# Return any matching packages
else:
packages = dnf.subject.Subject(command).get_best_query(self.base.sack)
results = [self._package_dict(package) for package in packages]
self.module.exit_json(msg="", results=results)
def _is_installed(self, pkg):
installed = self.base.sack.query().installed()
package_spec = {}
name, arch = self._split_package_arch(pkg)
if arch:
package_spec['arch'] = arch
package_details = self._packagename_dict(pkg)
if package_details:
package_details['epoch'] = int(package_details['epoch'])
package_spec.update(package_details)
else:
package_spec['name'] = name
if installed.filter(**package_spec):
return True
else:
return False
def _is_newer_version_installed(self, pkg_name):
candidate_pkg = self._packagename_dict(pkg_name)
if not candidate_pkg:
# The user didn't provide a versioned rpm, so version checking is
# not required
return False
installed = self.base.sack.query().installed()
installed_pkg = installed.filter(name=candidate_pkg['name']).run()
if installed_pkg:
installed_pkg = installed_pkg[0]
# this looks weird but one is a dict and the other is a dnf.Package
evr_cmp = self._compare_evr(
installed_pkg.epoch, installed_pkg.version, installed_pkg.release,
candidate_pkg['epoch'], candidate_pkg['version'], candidate_pkg['release'],
)
if evr_cmp == 1:
return True
else:
return False
else:
return False
def _mark_package_install(self, pkg_spec, upgrade=False):
"""Mark the package for install."""
is_newer_version_installed = self._is_newer_version_installed(pkg_spec)
is_installed = self._is_installed(pkg_spec)
try:
if is_newer_version_installed:
if self.allow_downgrade:
# dnf only does allow_downgrade, we have to handle this ourselves
# because it allows a possibility for non-idempotent transactions
# on a system's package set (pending the yum repo has many old
# NVRs indexed)
if upgrade:
if is_installed:
self.base.upgrade(pkg_spec)
else:
self.base.install(pkg_spec)
else:
self.base.install(pkg_spec)
else: # Nothing to do, report back
pass
            elif is_installed:  # A potentially older (or same) version is installed
if upgrade:
self.base.upgrade(pkg_spec)
else: # Nothing to do, report back
pass
else: # The package is not installed, simply install it
self.base.install(pkg_spec)
return {'failed': False, 'msg': '', 'failure': '', 'rc': 0}
except dnf.exceptions.MarkingError as e:
return {
'failed': True,
'msg': "No package {0} available.".format(pkg_spec),
'failure': " ".join((pkg_spec, to_native(e))),
'rc': 1,
"results": []
}
except dnf.exceptions.DepsolveError as e:
return {
'failed': True,
'msg': "Depsolve Error occurred for package {0}.".format(pkg_spec),
'failure': " ".join((pkg_spec, to_native(e))),
'rc': 1,
"results": []
}
except dnf.exceptions.Error as e:
if to_text("already installed") in to_text(e):
return {'failed': False, 'msg': '', 'failure': ''}
else:
return {
'failed': True,
'msg': "Unknown Error occurred for package {0}.".format(pkg_spec),
'failure': " ".join((pkg_spec, to_native(e))),
'rc': 1,
"results": []
}
def _whatprovides(self, filepath):
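        # Resolve a file path to the package providing it, e.g. map
        # '/usr/bin/cowsay' to 'cowsay' (mirrors the "install by provided
        # file" example in the module documentation above).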
self.base.read_all_repos()
available = self.base.sack.query().available()
# Search in file
files_filter = available.filter(file=filepath)
# And Search in provides
pkg_spec = files_filter.union(available.filter(provides=filepath)).run()
if pkg_spec:
return pkg_spec[0].name
def _parse_spec_group_file(self):
pkg_specs, grp_specs, module_specs, filenames = [], [], [], []
already_loaded_comps = False # Only load this if necessary, it's slow
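        # Classification (illustrative): 'http://.../pkg.rpm' is fetched and
        # '/tmp/pkg.rpm' kept as filenames, '@Development tools' becomes a
        # group spec, '@postgresql:9.6/client' a module spec when modularity
        # is available, '/usr/bin/vi' is resolved via _whatprovides(), and
        # anything else is treated as a plain package spec.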
for name in self.names:
if '://' in name:
name = fetch_file(self.module, name)
filenames.append(name)
elif name.endswith(".rpm"):
filenames.append(name)
elif name.startswith('/'):
# like "dnf install /usr/bin/vi"
pkg_spec = self._whatprovides(name)
if pkg_spec:
pkg_specs.append(pkg_spec)
continue
elif name.startswith("@") or ('/' in name):
if not already_loaded_comps:
self.base.read_comps()
already_loaded_comps = True
grp_env_mdl_candidate = name[1:].strip()
if self.with_modules:
mdl = self.module_base._get_modules(grp_env_mdl_candidate)
if mdl[0]:
module_specs.append(grp_env_mdl_candidate)
else:
grp_specs.append(grp_env_mdl_candidate)
else:
grp_specs.append(grp_env_mdl_candidate)
else:
pkg_specs.append(name)
return pkg_specs, grp_specs, module_specs, filenames
def _update_only(self, pkgs):
not_installed = []
for pkg in pkgs:
if self._is_installed(pkg):
try:
if isinstance(to_text(pkg), text_type):
self.base.upgrade(pkg)
else:
self.base.package_upgrade(pkg)
except Exception as e:
self.module.fail_json(
msg="Error occurred attempting update_only operation: {0}".format(to_native(e)),
results=[],
rc=1,
)
else:
not_installed.append(pkg)
return not_installed
def _install_remote_rpms(self, filenames):
if int(dnf.__version__.split(".")[0]) >= 2:
pkgs = list(sorted(self.base.add_remote_rpms(list(filenames)), reverse=True))
else:
pkgs = []
try:
for filename in filenames:
pkgs.append(self.base.add_remote_rpm(filename))
except IOError as e:
if to_text("Can not load RPM file") in to_text(e):
self.module.fail_json(
msg="Error occurred attempting remote rpm install of package: {0}. {1}".format(filename, to_native(e)),
results=[],
rc=1,
)
if self.update_only:
self._update_only(pkgs)
else:
for pkg in pkgs:
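                # Skip packages for which a newer version is already installed
                # unless allow_downgrade is set; in every other case hand the
                # package to dnf for installation.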
try:
if self._is_newer_version_installed(self._package_dict(pkg)['nevra']):
if self.allow_downgrade:
self.base.package_install(pkg)
else:
self.base.package_install(pkg)
except Exception as e:
self.module.fail_json(
msg="Error occurred attempting remote rpm operation: {0}".format(to_native(e)),
results=[],
rc=1,
)
def _is_module_installed(self, module_spec):
if self.with_modules:
module_spec = module_spec.strip()
module_list, nsv = self.module_base._get_modules(module_spec)
enabled_streams = self.base._moduleContainer.getEnabledStream(nsv.name)
if enabled_streams:
if nsv.stream:
if nsv.stream in enabled_streams:
return True # The provided stream was found
else:
return False # The provided stream was not found
else:
return True # No stream provided, but module found
return False # seems like a sane default
def ensure(self):
response = {
'msg': "",
'changed': False,
'results': [],
'rc': 0
}
# Accumulate failures. Package management modules install what they can
# and fail with a message about what they can't.
failure_response = {
'msg': "",
'failures': [],
'results': [],
'rc': 1
}
# Autoremove is called alone
# Jump to remove path where base.autoremove() is run
if not self.names and self.autoremove:
self.names = []
self.state = 'absent'
if self.names == ['*'] and self.state == 'latest':
try:
self.base.upgrade_all()
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occurred attempting to upgrade all packages"
self.module.fail_json(**failure_response)
else:
pkg_specs, group_specs, module_specs, filenames = self._parse_spec_group_file()
pkg_specs = [p.strip() for p in pkg_specs]
filenames = [f.strip() for f in filenames]
groups = []
environments = []
for group_spec in (g.strip() for g in group_specs):
group = self.base.comps.group_by_pattern(group_spec)
if group:
groups.append(group.id)
else:
environment = self.base.comps.environment_by_pattern(group_spec)
if environment:
environments.append(environment.id)
else:
self.module.fail_json(
msg="No group {0} available.".format(group_spec),
results=[],
)
if self.state in ['installed', 'present']:
# Install files.
self._install_remote_rpms(filenames)
for filename in filenames:
response['results'].append("Installed {0}".format(filename))
# Install modules
if module_specs and self.with_modules:
for module in module_specs:
try:
if not self._is_module_installed(module):
response['results'].append("Module {0} installed.".format(module))
self.module_base.install([module])
self.module_base.enable([module])
except dnf.exceptions.MarkingErrors as e:
failure_response['failures'].append(' '.join((module, to_native(e))))
# Install groups.
for group in groups:
try:
group_pkg_count_installed = self.base.group_install(group, dnf.const.GROUP_PACKAGE_TYPES)
if group_pkg_count_installed == 0:
response['results'].append("Group {0} already installed.".format(group))
else:
response['results'].append("Group {0} installed.".format(group))
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occurred attempting to install group: {0}".format(group)
self.module.fail_json(**failure_response)
except dnf.exceptions.Error as e:
# In dnf 2.0 if all the mandatory packages in a group do
# not install, an error is raised. We want to capture
# this but still install as much as possible.
failure_response['failures'].append(" ".join((group, to_native(e))))
for environment in environments:
try:
self.base.environment_install(environment, dnf.const.GROUP_PACKAGE_TYPES)
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occurred attempting to install environment: {0}".format(environment)
self.module.fail_json(**failure_response)
except dnf.exceptions.Error as e:
failure_response['failures'].append(" ".join((environment, to_native(e))))
if module_specs and not self.with_modules:
# This means that the group or env wasn't found in comps
self.module.fail_json(
msg="No group {0} available.".format(module_specs[0]),
results=[],
)
# Install packages.
if self.update_only:
not_installed = self._update_only(pkg_specs)
for spec in not_installed:
response['results'].append("Packages providing %s not installed due to update_only specified" % spec)
else:
for pkg_spec in pkg_specs:
install_result = self._mark_package_install(pkg_spec)
if install_result['failed']:
if install_result['msg']:
failure_response['msg'] += install_result['msg']
failure_response['failures'].append(self._sanitize_dnf_error_msg_install(pkg_spec, install_result['failure']))
else:
if install_result['msg']:
response['results'].append(install_result['msg'])
elif self.state == 'latest':
# "latest" is same as "installed" for filenames.
self._install_remote_rpms(filenames)
for filename in filenames:
response['results'].append("Installed {0}".format(filename))
# Upgrade modules
if module_specs and self.with_modules:
for module in module_specs:
try:
if self._is_module_installed(module):
response['results'].append("Module {0} upgraded.".format(module))
self.module_base.upgrade([module])
except dnf.exceptions.MarkingErrors as e:
failure_response['failures'].append(' '.join((module, to_native(e))))
for group in groups:
try:
try:
self.base.group_upgrade(group)
response['results'].append("Group {0} upgraded.".format(group))
except dnf.exceptions.CompsError:
if not self.update_only:
# If not already installed, try to install.
group_pkg_count_installed = self.base.group_install(group, dnf.const.GROUP_PACKAGE_TYPES)
if group_pkg_count_installed == 0:
response['results'].append("Group {0} already installed.".format(group))
else:
response['results'].append("Group {0} installed.".format(group))
except dnf.exceptions.Error as e:
failure_response['failures'].append(" ".join((group, to_native(e))))
for environment in environments:
try:
try:
self.base.environment_upgrade(environment)
except dnf.exceptions.CompsError:
# If not already installed, try to install.
self.base.environment_install(environment, dnf.const.GROUP_PACKAGE_TYPES)
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occurred attempting to install environment: {0}".format(environment)
except dnf.exceptions.Error as e:
failure_response['failures'].append(" ".join((environment, to_native(e))))
if self.update_only:
not_installed = self._update_only(pkg_specs)
for spec in not_installed:
response['results'].append("Packages providing %s not installed due to update_only specified" % spec)
else:
for pkg_spec in pkg_specs:
# best effort causes to install the latest package
# even if not previously installed
self.base.conf.best = True
install_result = self._mark_package_install(pkg_spec, upgrade=True)
if install_result['failed']:
if install_result['msg']:
failure_response['msg'] += install_result['msg']
failure_response['failures'].append(self._sanitize_dnf_error_msg_install(pkg_spec, install_result['failure']))
else:
if install_result['msg']:
response['results'].append(install_result['msg'])
else:
# state == absent
if filenames:
self.module.fail_json(
msg="Cannot remove paths -- please specify package name.",
results=[],
)
# Remove modules
if module_specs and self.with_modules:
for module in module_specs:
try:
if self._is_module_installed(module):
response['results'].append("Module {0} removed.".format(module))
self.module_base.remove([module])
self.module_base.disable([module])
self.module_base.reset([module])
except dnf.exceptions.MarkingErrors as e:
failure_response['failures'].append(' '.join((module, to_native(e))))
for group in groups:
try:
self.base.group_remove(group)
except dnf.exceptions.CompsError:
# Group is already uninstalled.
pass
except AttributeError:
# Group either isn't installed or wasn't marked installed at install time
# because of DNF bug
#
# This is necessary until the upstream dnf API bug is fixed where installing
# a group via the dnf API doesn't actually mark the group as installed
# https://bugzilla.redhat.com/show_bug.cgi?id=1620324
pass
for environment in environments:
try:
self.base.environment_remove(environment)
except dnf.exceptions.CompsError:
# Environment is already uninstalled.
pass
installed = self.base.sack.query().installed()
for pkg_spec in pkg_specs:
# short-circuit installed check for wildcard matching
if '*' in pkg_spec:
try:
self.base.remove(pkg_spec)
except dnf.exceptions.MarkingError as e:
is_failure, handled_remove_error = self._sanitize_dnf_error_msg_remove(pkg_spec, to_native(e))
if is_failure:
failure_response['failures'].append('{0} - {1}'.format(pkg_spec, to_native(e)))
else:
response['results'].append(handled_remove_error)
continue
installed_pkg = dnf.subject.Subject(pkg_spec).get_best_query(
sack=self.base.sack).installed().run()
for pkg in installed_pkg:
self.base.remove(str(pkg))
# Like the dnf CLI we want to allow recursive removal of dependent
# packages
self.allowerasing = True
if self.autoremove:
self.base.autoremove()
try:
if not self.base.resolve(allow_erasing=self.allowerasing):
if failure_response['failures']:
failure_response['msg'] = 'Failed to install some of the specified packages'
self.module.fail_json(**failure_response)
response['msg'] = "Nothing to do"
self.module.exit_json(**response)
else:
response['changed'] = True
# If packages got installed/removed, add them to the results.
# We do this early so we can use it for both check_mode and not.
if self.download_only:
install_action = 'Downloaded'
else:
install_action = 'Installed'
for package in self.base.transaction.install_set:
response['results'].append("{0}: {1}".format(install_action, package))
for package in self.base.transaction.remove_set:
response['results'].append("Removed: {0}".format(package))
if failure_response['failures']:
failure_response['msg'] = 'Failed to install some of the specified packages'
self.module.fail_json(**failure_response)
if self.module.check_mode:
response['msg'] = "Check mode: No changes made, but would have if not in check mode"
self.module.exit_json(**response)
try:
if self.download_only and self.download_dir and self.base.conf.destdir:
dnf.util.ensure_dir(self.base.conf.destdir)
self.base.repos.all().pkgdir = self.base.conf.destdir
self.base.download_packages(self.base.transaction.install_set)
except dnf.exceptions.DownloadError as e:
self.module.fail_json(
msg="Failed to download packages: {0}".format(to_text(e)),
results=[],
)
# Validate GPG. This is NOT done in dnf.Base (it's done in the
# upstream CLI subclass of dnf.Base)
if not self.disable_gpg_check:
for package in self.base.transaction.install_set:
fail = False
gpgres, gpgerr = self.base._sig_check_pkg(package)
if gpgres == 0: # validated successfully
continue
elif gpgres == 1: # validation failed, install cert?
try:
self.base._get_key_for_package(package)
except dnf.exceptions.Error as e:
fail = True
else: # fatal error
fail = True
if fail:
msg = 'Failed to validate GPG signature for {0}'.format(package)
self.module.fail_json(msg)
if self.download_only:
# No further work left to do, and the results were already updated above.
# Just return them.
self.module.exit_json(**response)
else:
tid = self.base.do_transaction()
if tid is not None:
transaction = self.base.history.old([tid])[0]
if transaction.return_code:
failure_response['failures'].append(transaction.output())
if failure_response['failures']:
failure_response['msg'] = 'Failed to install some of the specified packages'
self.module.fail_json(**failure_response)
self.module.exit_json(**response)
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occurred: {0}".format(to_native(e))
self.module.fail_json(**failure_response)
except dnf.exceptions.Error as e:
if to_text("already installed") in to_text(e):
response['changed'] = False
response['results'].append("Package already installed: {0}".format(to_native(e)))
self.module.exit_json(**response)
else:
failure_response['msg'] = "Unknown Error occurred: {0}".format(to_native(e))
self.module.fail_json(**failure_response)
def run(self):
"""The main function."""
# Check if autoremove is called correctly
if self.autoremove:
if LooseVersion(dnf.__version__) < LooseVersion('2.0.1'):
self.module.fail_json(
msg="Autoremove requires dnf>=2.0.1. Current dnf version is %s" % dnf.__version__,
results=[],
)
# Check if download_dir is called correctly
if self.download_dir:
if LooseVersion(dnf.__version__) < LooseVersion('2.6.2'):
self.module.fail_json(
msg="download_dir requires dnf>=2.6.2. Current dnf version is %s" % dnf.__version__,
results=[],
)
if self.update_cache and not self.names and not self.list:
self.base = self._base(
self.conf_file, self.disable_gpg_check, self.disablerepo,
self.enablerepo, self.installroot
)
self.module.exit_json(
msg="Cache updated",
changed=False,
results=[],
rc=0
)
# Set state as installed by default
# This is not set in AnsibleModule() because the following shouldn't happen
# - dnf: autoremove=yes state=installed
if self.state is None:
self.state = 'installed'
if self.list:
self.base = self._base(
self.conf_file, self.disable_gpg_check, self.disablerepo,
self.enablerepo, self.installroot
)
self.list_items(self.list)
else:
# Note: base takes a long time to run so we want to check for failure
# before running it.
if not self.download_only and not dnf.util.am_i_root():
self.module.fail_json(
msg="This command has to be run under the root user.",
results=[],
)
self.base = self._base(
self.conf_file, self.disable_gpg_check, self.disablerepo,
self.enablerepo, self.installroot
)
if self.with_modules:
self.module_base = dnf.module.module_base.ModuleBase(self.base)
self.ensure()
def main():
# state=installed name=pkgspec
# state=removed name=pkgspec
# state=latest name=pkgspec
#
# informational commands:
# list=installed
# list=updates
# list=available
# list=repos
# list=pkgspec
# Extend yumdnf_argument_spec with dnf-specific features that will never be
# backported to yum because yum is now in "maintenance mode" upstream
yumdnf_argument_spec['argument_spec']['allowerasing'] = dict(default=False, type='bool')
yumdnf_argument_spec['argument_spec']['nobest'] = dict(default=False, type='bool')
module = AnsibleModule(
**yumdnf_argument_spec
)
module_implementation = DnfModule(module)
try:
module_implementation.run()
except dnf.exceptions.RepoError as de:
module.fail_json(
msg="Failed to synchronize repodata: {0}".format(to_native(de)),
rc=1,
results=[],
changed=False
)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,192 |
GPG Validation failed on DNF/YUM local RPM install.
|
### Summary
When I try to use the yum/dnf module to install an RPM from a local file, not from a repo, it fails with a GPG error.
`Failed to validate GPG signature for xyy.rpm`
I've created a simple example here complete with a Vagrantfile to make it easy to reproduce:
https://github.com/jeffmacdonald/ansible-yum-error
### Issue Type
Bug Report
### Component Name
dnf, yum
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.1]
config file = None
configured module search path = ['/Users/jeff.macdonald/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/jeff.macdonald/.pyenv/versions/3.9.5/envs/python3.9.5/lib/python3.9/site-packages/ansible
ansible collection location = /Users/jeff.macdonald/.ansible/collections:/usr/share/ansible/collections
executable location = /Users/jeff.macdonald/.pyenv/versions/python3.9.5/bin/ansible
python version = 3.9.5 (default, May 30 2021, 21:26:15) [Clang 12.0.5 (clang-1205.0.22.9)]
jinja version = 3.0.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
No output
```
### OS / Environment
Ansible is being run from a Mac (latest) against Rocky Linux 8.3
### Steps to Reproduce
https://github.com/jeffmacdonald/ansible-yum-error
```yaml
---
- name: Install Telegraf
hosts: all
tasks:
- name: install influx gpg key
become: yes
rpm_key:
key: "https://repos.influxdata.com/influxdb.key"
state: present
- name: download telegraf RPM
get_url:
url: https://dl.influxdata.com/telegraf/releases/telegraf-1.13.4-1.x86_64.rpm
dest: /tmp/telegraf-1.13.4-1.x86_64.rpm
- name: install telegraf packages
become: yes
yum:
name: /tmp/telegraf-1.13.4-1.x86_64.rpm
state: present
```
### Expected Results
I expect the RPM to install
### Actual Results
```console
TASK [install telegraf packages] ***********************************************
fatal: [node1]: FAILED! => {"changed": false, "msg": "Failed to validate GPG signature for telegraf-1.13.4-1.x86_64"}
```
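
For reference (not part of the original report), a common interim workaround is to skip the module-side signature validation for the local file, since the key was already imported by `rpm_key` in the earlier step — note that `disable_gpg_check` turns off the module's GPG validation entirely for that task:

```yaml
- name: install telegraf packages (workaround sketch)
  become: yes
  yum:
    name: /tmp/telegraf-1.13.4-1.x86_64.rpm
    state: present
    disable_gpg_check: yes
```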
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76192
|
https://github.com/ansible/ansible/pull/76194
|
fdc33488595e01ed39ee1c9482ad11acda3d4a66
|
150faf25d3e9a23afde4e16e66281f1fb88b824f
| 2021-11-01T19:52:09Z |
python
| 2021-11-02T14:01:12Z |
test/integration/targets/dnf/tasks/gpg.yml
|
# Set up a repo of unsigned rpms
- block:
- set_fact:
pkg_name: langtable
pkg_repo_dir: "{{ remote_tmp_dir }}/unsigned"
- name: Ensure our test package isn't already installed
dnf:
name:
- '{{ pkg_name }}'
state: absent
- name: Install rpm-sign
dnf:
name:
- rpm-sign
state: present
- name: Create directory to use as local repo
file:
path: "{{ pkg_repo_dir }}"
state: directory
- name: Download the test package
dnf:
name: '{{ pkg_name }}'
state: latest
download_only: true
download_dir: "{{ pkg_repo_dir }}"
- name: Unsign the RPM
shell: rpmsign --delsign {{ remote_tmp_dir }}/unsigned/{{ pkg_name }}*
- name: createrepo
command: createrepo .
args:
chdir: "{{ pkg_repo_dir }}"
- name: Add the repo
yum_repository:
name: unsigned
description: unsigned rpms
baseurl: "file://{{ pkg_repo_dir }}"
# we want to ensure that signing is verified
gpgcheck: true
- name: Install test package
dnf:
name:
- "{{ pkg_name }}"
disablerepo: '*'
enablerepo: unsigned
register: res
ignore_errors: yes
- assert:
that:
- res is failed
- "'Failed to validate GPG signature' in res.msg"
always:
- name: Remove rpm-sign (and test package if it got installed)
dnf:
name:
- rpm-sign
- "{{ pkg_name }}"
state: absent
- name: Remove test repo
yum_repository:
name: unsigned
state: absent
- name: Remove repo dir
file:
path: "{{ pkg_repo_dir }}"
state: absent
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,326 |
docker not detected in centos, alpine, debian
|
### Summary
The facts for ansible_virtualization_type and ansible_virtualization_role are wrong when gathered in a docker container running CentOS (tested with 8), Alpine, or Debian (tested with buster). Only in an Ubuntu container (tested with focal or hirsute) are the values correct.
The problem can be reproduced with the latest stable version and the devel branch.
### Issue Type
Bug Report
### Component Name
setup, virtual/linux.py
### Ansible Version
```console
$ ansible --version
ansible 2.10.8
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/jfader/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.9/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.9.3 (default, Apr 8 2021, 23:35:02) [GCC 10.2.0]
```
### Configuration
```console
$ ansible-config dump --only-changed
no changes recorded
```
### OS / Environment
Archlinux
### Steps to Reproduce
Start up two containers: one with centos:8 and one with ubuntu:hirsute:
```bash
docker run --name centos_8 -it centos:8 /bin/bash
dnf -y install python3
```
```bash
docker run --name ubuntu_hirsute -it ubuntu:hirsute /bin/bash
apt-get update && apt-get install -y python3
```
Inventory:
```
ubuntu_hirsute ansible_connection=docker
centos_8 ansible_connection=docker
```
Playbook:
```yaml
---
- hosts: centos_8,ubuntu_hirsute
gather_facts: True
tasks:
- name: "output information about virtualization (type and role)"
debug:
var: "{{ item }}"
loop:
- ansible_virtualization_type
- ansible_virtualization_role
```
### Expected Results
```console
TASK [output information about virtualization (type and role)] ************************************************
ok: [centos_8] => (item=ansible_virtualization_type) => {
"ansible_loop_var": "item",
"ansible_virtualization_type": "docker",
"item": "ansible_virtualization_type"
}
ok: [ubuntu_hirsute] => (item=ansible_virtualization_type) => {
"ansible_loop_var": "item",
"ansible_virtualization_type": "docker",
"item": "ansible_virtualization_type"
}
ok: [centos_8] => (item=ansible_virtualization_role) => {
"ansible_loop_var": "item",
"ansible_virtualization_role": "guest",
"item": "ansible_virtualization_role"
}
ok: [ubuntu_hirsute] => (item=ansible_virtualization_role) => {
"ansible_loop_var": "item",
"ansible_virtualization_role": "guest",
"item": "ansible_virtualization_role"
}
```
### Actual Results
```console
TASK [output information about virtualization (type and role)] ************************************************
ok: [centos_8] => (item=ansible_virtualization_type) => {
"ansible_loop_var": "item",
"ansible_virtualization_type": "NA",
"item": "ansible_virtualization_type"
}
ok: [ubuntu_hirsute] => (item=ansible_virtualization_type) => {
"ansible_loop_var": "item",
"ansible_virtualization_type": "docker",
"item": "ansible_virtualization_type"
}
ok: [centos_8] => (item=ansible_virtualization_role) => {
"ansible_loop_var": "item",
"ansible_virtualization_role": "NA",
"item": "ansible_virtualization_role"
}
ok: [ubuntu_hirsute] => (item=ansible_virtualization_role) => {
"ansible_loop_var": "item",
"ansible_virtualization_role": "guest",
"item": "ansible_virtualization_role"
}
```
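
For diagnosis (commands added for illustration, not part of the original report), the sources that the fact code consults can be inspected in each container; if none of the cgroup lines mention `docker`, the detection in `virtual/linux.py` falls through to `NA`:

```console
$ docker exec centos_8 cat /proc/1/cgroup
$ docker exec centos_8 cat /run/systemd/container
$ docker exec centos_8 ls /.dockerenv
```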
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74326
|
https://github.com/ansible/ansible/pull/74349
|
636b317a4f4ad054485401acc9bb2d8af65dd8c8
|
17ec2d4952e0a641b07ce7774f744ee8f1a037ad
| 2021-04-17T10:26:05Z |
python
| 2021-11-03T19:26:36Z |
changelogs/fragments/74349-improve-docker-detection.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,326 |
docker not detected in centos, alpine, debian
|
### Summary
The facts for ansible_virtualization_type and ansible_virtualization_role are wrong when gathered in a docker container running CentOS (tested with 8), Alpine, or Debian (tested with buster). Only in an Ubuntu container (tested with focal or hirsute) are the values correct.
The problem can be reproduced with the latest stable version and the devel branch.
### Issue Type
Bug Report
### Component Name
setup, virtual/linux.py
### Ansible Version
```console
$ ansible --version
ansible 2.10.8
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/jfader/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.9/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.9.3 (default, Apr 8 2021, 23:35:02) [GCC 10.2.0]
```
### Configuration
```console
$ ansible-config dump --only-changed
no changes recorded
```
### OS / Environment
Archlinux
### Steps to Reproduce
Start up two containers: one with centos:8 and one with ubuntu:hirsute:
```bash
docker run --name centos_8 -it centos:8 /bin/bash
dnf -y install python3
```
```bash
docker run --name ubuntu_hirsute -it ubuntu:hirsute /bin/bash
apt-get update && apt-get install -y python3
```
Inventory:
```
ubuntu_hirsute ansible_connection=docker
centos_8 ansible_connection=docker
```
Playbook:
```yaml
---
- hosts: centos_8,ubuntu_hirsute
gather_facts: True
tasks:
- name: "output information about virtualization (type and role)"
debug:
var: "{{ item }}"
loop:
- ansible_virtualization_type
- ansible_virtualization_role
```
### Expected Results
```console
TASK [output information about virtualization (type and role)] ************************************************
ok: [centos_8] => (item=ansible_virtualization_type) => {
"ansible_loop_var": "item",
"ansible_virtualization_type": "docker",
"item": "ansible_virtualization_type"
}
ok: [ubuntu_hirsute] => (item=ansible_virtualization_type) => {
"ansible_loop_var": "item",
"ansible_virtualization_type": "docker",
"item": "ansible_virtualization_type"
}
ok: [centos_8] => (item=ansible_virtualization_role) => {
"ansible_loop_var": "item",
"ansible_virtualization_role": "guest",
"item": "ansible_virtualization_role"
}
ok: [ubuntu_hirsute] => (item=ansible_virtualization_role) => {
"ansible_loop_var": "item",
"ansible_virtualization_role": "guest",
"item": "ansible_virtualization_role"
}
```
### Actual Results
```console
TASK [output information about virtualization (type and role)] ************************************************
ok: [centos_8] => (item=ansible_virtualization_type) => {
"ansible_loop_var": "item",
"ansible_virtualization_type": "NA",
"item": "ansible_virtualization_type"
}
ok: [ubuntu_hirsute] => (item=ansible_virtualization_type) => {
"ansible_loop_var": "item",
"ansible_virtualization_type": "docker",
"item": "ansible_virtualization_type"
}
ok: [centos_8] => (item=ansible_virtualization_role) => {
"ansible_loop_var": "item",
"ansible_virtualization_role": "NA",
"item": "ansible_virtualization_role"
}
ok: [ubuntu_hirsute] => (item=ansible_virtualization_role) => {
"ansible_loop_var": "item",
"ansible_virtualization_role": "guest",
"item": "ansible_virtualization_role"
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74326
|
https://github.com/ansible/ansible/pull/74349
|
636b317a4f4ad054485401acc9bb2d8af65dd8c8
|
17ec2d4952e0a641b07ce7774f744ee8f1a037ad
| 2021-04-17T10:26:05Z |
python
| 2021-11-03T19:26:36Z |
lib/ansible/module_utils/facts/virtual/linux.py
|
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import glob
import os
import re
from ansible.module_utils.facts.virtual.base import Virtual, VirtualCollector
from ansible.module_utils.facts.utils import get_file_content, get_file_lines
class LinuxVirtual(Virtual):
"""
This is a Linux-specific subclass of Virtual. It defines
- virtualization_type
- virtualization_role
"""
platform = 'Linux'
# For more information, check: http://people.redhat.com/~rjones/virt-what/
def get_virtual_facts(self):
virtual_facts = {}
# We want to maintain compatibility with the old "virtualization_type"
# and "virtualization_role" entries, so we need to track if we found
# them. We won't return them until the end, but if we found them early,
# we should avoid updating them again.
found_virt = False
# But as we go along, we also want to track virt tech the new way.
host_tech = set()
guest_tech = set()
# lxc/docker
if os.path.exists('/proc/1/cgroup'):
for line in get_file_lines('/proc/1/cgroup'):
if re.search(r'/docker(/|-[0-9a-f]+\.scope)', line):
guest_tech.add('docker')
if not found_virt:
virtual_facts['virtualization_type'] = 'docker'
virtual_facts['virtualization_role'] = 'guest'
found_virt = True
if re.search('/lxc/', line) or re.search('/machine.slice/machine-lxc', line):
guest_tech.add('lxc')
if not found_virt:
virtual_facts['virtualization_type'] = 'lxc'
virtual_facts['virtualization_role'] = 'guest'
found_virt = True
if re.search('/system.slice/containerd.service', line):
guest_tech.add('containerd')
if not found_virt:
virtual_facts['virtualization_type'] = 'containerd'
virtual_facts['virtualization_role'] = 'guest'
found_virt = True
# lxc does not always appear in cgroups anymore but sets 'container=lxc' environment var, requires root privs
if os.path.exists('/proc/1/environ'):
for line in get_file_lines('/proc/1/environ', line_sep='\x00'):
if re.search('container=lxc', line):
guest_tech.add('lxc')
if not found_virt:
virtual_facts['virtualization_type'] = 'lxc'
virtual_facts['virtualization_role'] = 'guest'
found_virt = True
if re.search('container=podman', line):
guest_tech.add('podman')
if not found_virt:
virtual_facts['virtualization_type'] = 'podman'
virtual_facts['virtualization_role'] = 'guest'
found_virt = True
if re.search('^container=.', line):
guest_tech.add('container')
if not found_virt:
virtual_facts['virtualization_type'] = 'container'
virtual_facts['virtualization_role'] = 'guest'
found_virt = True
if os.path.exists('/proc/vz') and not os.path.exists('/proc/lve'):
virtual_facts['virtualization_type'] = 'openvz'
if os.path.exists('/proc/bc'):
host_tech.add('openvz')
if not found_virt:
virtual_facts['virtualization_role'] = 'host'
else:
guest_tech.add('openvz')
if not found_virt:
virtual_facts['virtualization_role'] = 'guest'
found_virt = True
systemd_container = get_file_content('/run/systemd/container')
if systemd_container:
guest_tech.add(systemd_container)
if not found_virt:
virtual_facts['virtualization_type'] = systemd_container
virtual_facts['virtualization_role'] = 'guest'
found_virt = True
# ensure 'container' guest_tech is appropriately set
if guest_tech.intersection(set(['docker', 'lxc', 'podman', 'openvz', 'containerd'])) or systemd_container:
guest_tech.add('container')
if os.path.exists("/proc/xen"):
is_xen_host = False
try:
for line in get_file_lines('/proc/xen/capabilities'):
if "control_d" in line:
is_xen_host = True
except IOError:
pass
if is_xen_host:
host_tech.add('xen')
if not found_virt:
virtual_facts['virtualization_type'] = 'xen'
virtual_facts['virtualization_role'] = 'host'
else:
if not found_virt:
virtual_facts['virtualization_type'] = 'xen'
virtual_facts['virtualization_role'] = 'guest'
found_virt = True
# assume guest for this block
if not found_virt:
virtual_facts['virtualization_role'] = 'guest'
product_name = get_file_content('/sys/devices/virtual/dmi/id/product_name')
sys_vendor = get_file_content('/sys/devices/virtual/dmi/id/sys_vendor')
product_family = get_file_content('/sys/devices/virtual/dmi/id/product_family')
if product_name in ('KVM', 'KVM Server', 'Bochs', 'AHV'):
guest_tech.add('kvm')
if not found_virt:
virtual_facts['virtualization_type'] = 'kvm'
found_virt = True
if sys_vendor == 'oVirt':
guest_tech.add('oVirt')
if not found_virt:
virtual_facts['virtualization_type'] = 'oVirt'
found_virt = True
if sys_vendor == 'Red Hat':
if product_family == 'RHV':
guest_tech.add('RHV')
if not found_virt:
virtual_facts['virtualization_type'] = 'RHV'
found_virt = True
elif product_name == 'RHEV Hypervisor':
guest_tech.add('RHEV')
if not found_virt:
virtual_facts['virtualization_type'] = 'RHEV'
found_virt = True
if product_name in ('VMware Virtual Platform', 'VMware7,1'):
guest_tech.add('VMware')
if not found_virt:
virtual_facts['virtualization_type'] = 'VMware'
found_virt = True
if product_name in ('OpenStack Compute', 'OpenStack Nova'):
guest_tech.add('openstack')
if not found_virt:
virtual_facts['virtualization_type'] = 'openstack'
found_virt = True
bios_vendor = get_file_content('/sys/devices/virtual/dmi/id/bios_vendor')
if bios_vendor == 'Xen':
guest_tech.add('xen')
if not found_virt:
virtual_facts['virtualization_type'] = 'xen'
found_virt = True
if bios_vendor == 'innotek GmbH':
guest_tech.add('virtualbox')
if not found_virt:
virtual_facts['virtualization_type'] = 'virtualbox'
found_virt = True
if bios_vendor in ('Amazon EC2', 'DigitalOcean', 'Hetzner'):
guest_tech.add('kvm')
if not found_virt:
virtual_facts['virtualization_type'] = 'kvm'
found_virt = True
KVM_SYS_VENDORS = ('QEMU', 'Amazon EC2', 'DigitalOcean', 'Google', 'Scaleway', 'Nutanix')
if sys_vendor in KVM_SYS_VENDORS:
guest_tech.add('kvm')
if not found_virt:
virtual_facts['virtualization_type'] = 'kvm'
found_virt = True
if sys_vendor == 'KubeVirt':
guest_tech.add('KubeVirt')
if not found_virt:
virtual_facts['virtualization_type'] = 'KubeVirt'
found_virt = True
# FIXME: This does also match hyperv
if sys_vendor == 'Microsoft Corporation':
guest_tech.add('VirtualPC')
if not found_virt:
virtual_facts['virtualization_type'] = 'VirtualPC'
found_virt = True
if sys_vendor == 'Parallels Software International Inc.':
guest_tech.add('parallels')
if not found_virt:
virtual_facts['virtualization_type'] = 'parallels'
found_virt = True
if sys_vendor == 'OpenStack Foundation':
guest_tech.add('openstack')
if not found_virt:
virtual_facts['virtualization_type'] = 'openstack'
found_virt = True
# unassume guest
if not found_virt:
del virtual_facts['virtualization_role']
if os.path.exists('/proc/self/status'):
for line in get_file_lines('/proc/self/status'):
if re.match(r'^VxID:\s+\d+', line):
if not found_virt:
virtual_facts['virtualization_type'] = 'linux_vserver'
if re.match(r'^VxID:\s+0', line):
host_tech.add('linux_vserver')
if not found_virt:
virtual_facts['virtualization_role'] = 'host'
else:
guest_tech.add('linux_vserver')
if not found_virt:
virtual_facts['virtualization_role'] = 'guest'
found_virt = True
if os.path.exists('/proc/cpuinfo'):
for line in get_file_lines('/proc/cpuinfo'):
if re.match('^model name.*QEMU Virtual CPU', line):
guest_tech.add('kvm')
if not found_virt:
virtual_facts['virtualization_type'] = 'kvm'
elif re.match('^vendor_id.*User Mode Linux', line):
guest_tech.add('uml')
if not found_virt:
virtual_facts['virtualization_type'] = 'uml'
elif re.match('^model name.*UML', line):
guest_tech.add('uml')
if not found_virt:
virtual_facts['virtualization_type'] = 'uml'
elif re.match('^machine.*CHRP IBM pSeries .emulated by qemu.', line):
guest_tech.add('kvm')
if not found_virt:
virtual_facts['virtualization_type'] = 'kvm'
elif re.match('^vendor_id.*PowerVM Lx86', line):
guest_tech.add('powervm_lx86')
if not found_virt:
virtual_facts['virtualization_type'] = 'powervm_lx86'
elif re.match('^vendor_id.*IBM/S390', line):
guest_tech.add('PR/SM')
if not found_virt:
virtual_facts['virtualization_type'] = 'PR/SM'
lscpu = self.module.get_bin_path('lscpu')
if lscpu:
rc, out, err = self.module.run_command(["lscpu"])
if rc == 0:
for line in out.splitlines():
data = line.split(":", 1)
key = data[0].strip()
if key == 'Hypervisor':
tech = data[1].strip()
guest_tech.add(tech)
if not found_virt:
virtual_facts['virtualization_type'] = tech
else:
guest_tech.add('ibm_systemz')
if not found_virt:
virtual_facts['virtualization_type'] = 'ibm_systemz'
else:
continue
if virtual_facts['virtualization_type'] == 'PR/SM':
if not found_virt:
virtual_facts['virtualization_role'] = 'LPAR'
else:
if not found_virt:
virtual_facts['virtualization_role'] = 'guest'
if not found_virt:
found_virt = True
# Beware that we can have both kvm and virtualbox running on a single system
if os.path.exists("/proc/modules") and os.access('/proc/modules', os.R_OK):
modules = []
for line in get_file_lines("/proc/modules"):
data = line.split(" ", 1)
modules.append(data[0])
if 'kvm' in modules:
host_tech.add('kvm')
if not found_virt:
virtual_facts['virtualization_type'] = 'kvm'
virtual_facts['virtualization_role'] = 'host'
if os.path.isdir('/rhev/'):
# Check whether this is a RHEV hypervisor (is vdsm running ?)
for f in glob.glob('/proc/[0-9]*/comm'):
try:
with open(f) as virt_fh:
comm_content = virt_fh.read().rstrip()
if comm_content in ('vdsm', 'vdsmd'):
# We add both kvm and RHEV to host_tech in this case.
# It's accurate. RHEV uses KVM.
host_tech.add('RHEV')
if not found_virt:
virtual_facts['virtualization_type'] = 'RHEV'
break
except Exception:
pass
found_virt = True
if 'vboxdrv' in modules:
host_tech.add('virtualbox')
if not found_virt:
virtual_facts['virtualization_type'] = 'virtualbox'
virtual_facts['virtualization_role'] = 'host'
found_virt = True
if 'virtio' in modules:
host_tech.add('kvm')
if not found_virt:
virtual_facts['virtualization_type'] = 'kvm'
virtual_facts['virtualization_role'] = 'guest'
found_virt = True
# In older Linux Kernel versions, /sys filesystem is not available
# dmidecode is the safest option to parse virtualization related values
dmi_bin = self.module.get_bin_path('dmidecode')
# We still want to continue even if dmidecode is not available
if dmi_bin is not None:
(rc, out, err) = self.module.run_command('%s -s system-product-name' % dmi_bin)
if rc == 0:
# Strip out commented lines (specific dmidecode output)
vendor_name = ''.join([line.strip() for line in out.splitlines() if not line.startswith('#')])
if vendor_name.startswith('VMware'):
guest_tech.add('VMware')
if not found_virt:
virtual_facts['virtualization_type'] = 'VMware'
virtual_facts['virtualization_role'] = 'guest'
found_virt = True
if 'BHYVE' in out:
guest_tech.add('bhyve')
if not found_virt:
virtual_facts['virtualization_type'] = 'bhyve'
virtual_facts['virtualization_role'] = 'guest'
found_virt = True
if os.path.exists('/dev/kvm'):
host_tech.add('kvm')
if not found_virt:
virtual_facts['virtualization_type'] = 'kvm'
virtual_facts['virtualization_role'] = 'host'
found_virt = True
# If none of the above matches, return 'NA' for virtualization_type
# and virtualization_role. This allows for proper grouping.
if not found_virt:
virtual_facts['virtualization_type'] = 'NA'
virtual_facts['virtualization_role'] = 'NA'
found_virt = True
virtual_facts['virtualization_tech_guest'] = guest_tech
virtual_facts['virtualization_tech_host'] = host_tech
return virtual_facts
class LinuxVirtualCollector(VirtualCollector):
_fact_class = LinuxVirtual
_platform = 'Linux'
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,326 |
docker not detected in centos, alpine, debian
|
### Summary
The facts for ansible_virtualization_type and ansible_virtualization_role are wrong when gathered in a docker container running CentOS (tested with 8), Alpine, or Debian (tested with buster). Only in an Ubuntu container (tested with focal or hirsute) are the values correct.
The problem can be reproduced with the latest stable version and the devel branch.
### Issue Type
Bug Report
### Component Name
setup, virtual/linux.py
### Ansible Version
```console
$ ansible --version
ansible 2.10.8
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/jfader/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.9/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.9.3 (default, Apr 8 2021, 23:35:02) [GCC 10.2.0]
```
### Configuration
```console
$ ansible-config dump --only-changed
no changes recorded
```
### OS / Environment
Archlinux
### Steps to Reproduce
Start up two containers: one with centos:8 and one with ubuntu:hirsute:
```bash
docker run --name centos_8 -it centos:8 /bin/bash
dnf -y install python3
```
```bash
docker run --name ubuntu_hirsute -it ubuntu:hirsute /bin/bash
apt-get update && apt-get install -y python3
```
Inventory:
```
ubuntu_hirsute ansible_connection=docker
centos_8 ansible_connection=docker
```
Playbook:
```yaml
---
- hosts: centos_8,ubuntu_hirsute
gather_facts: True
tasks:
- name: "output information about virtualization (type and role)"
debug:
var: "{{ item }}"
loop:
- ansible_virtualization_type
- ansible_virtualization_role
```
### Expected Results
```console
TASK [output information about virtualization (type and role)] ************************************************
ok: [centos_8] => (item=ansible_virtualization_type) => {
"ansible_loop_var": "item",
"ansible_virtualization_type": "docker",
"item": "ansible_virtualization_type"
}
ok: [ubuntu_hirsute] => (item=ansible_virtualization_type) => {
"ansible_loop_var": "item",
"ansible_virtualization_type": "docker",
"item": "ansible_virtualization_type"
}
ok: [centos_8] => (item=ansible_virtualization_role) => {
"ansible_loop_var": "item",
"ansible_virtualization_role": "guest",
"item": "ansible_virtualization_role"
}
ok: [ubuntu_hirsute] => (item=ansible_virtualization_role) => {
"ansible_loop_var": "item",
"ansible_virtualization_role": "guest",
"item": "ansible_virtualization_role"
}
```
### Actual Results
```console
TASK [output information about virtualization (type and role)] ************************************************
ok: [centos_8] => (item=ansible_virtualization_type) => {
"ansible_loop_var": "item",
"ansible_virtualization_type": "NA",
"item": "ansible_virtualization_type"
}
ok: [ubuntu_hirsute] => (item=ansible_virtualization_type) => {
"ansible_loop_var": "item",
"ansible_virtualization_type": "docker",
"item": "ansible_virtualization_type"
}
ok: [centos_8] => (item=ansible_virtualization_role) => {
"ansible_loop_var": "item",
"ansible_virtualization_role": "NA",
"item": "ansible_virtualization_role"
}
ok: [ubuntu_hirsute] => (item=ansible_virtualization_role) => {
"ansible_loop_var": "item",
"ansible_virtualization_role": "guest",
"item": "ansible_virtualization_role"
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74326
|
https://github.com/ansible/ansible/pull/74349
|
636b317a4f4ad054485401acc9bb2d8af65dd8c8
|
17ec2d4952e0a641b07ce7774f744ee8f1a037ad
| 2021-04-17T10:26:05Z |
python
| 2021-11-03T19:26:36Z |
test/units/module_utils/facts/virtual/test_linux.py
|
# -*- coding: utf-8 -*-
# Copyright (c) 2020 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
from ansible.module_utils.facts.virtual import linux
def test_get_virtual_facts_bhyve(mocker):
mocker.patch('os.path.exists', return_value=False)
mocker.patch('ansible.module_utils.facts.virtual.linux.get_file_content', return_value='')
mocker.patch('ansible.module_utils.facts.virtual.linux.get_file_lines', return_value=[])
module = mocker.Mock()
module.run_command.return_value = (0, 'BHYVE\n', '')
inst = linux.LinuxVirtual(module)
facts = inst.get_virtual_facts()
expected = {
'virtualization_role': 'guest',
'virtualization_tech_host': set(),
'virtualization_type': 'bhyve',
'virtualization_tech_guest': set(['bhyve']),
}
assert facts == expected
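
# A hypothetical companion test (a sketch, not taken from the referenced PR) for the
# docker case in the linked issue: it fakes /proc/1/cgroup content so that the
# cgroup-based detection path in LinuxVirtual.get_virtual_facts fires.
def test_get_virtual_facts_docker_cgroup(mocker):
    def fake_exists(path):
        # Only the cgroup file "exists" in this sketch.
        return path == '/proc/1/cgroup'

    mocker.patch('os.path.exists', side_effect=fake_exists)
    mocker.patch('ansible.module_utils.facts.virtual.linux.get_file_content', return_value='')
    mocker.patch(
        'ansible.module_utils.facts.virtual.linux.get_file_lines',
        return_value=['13:name=systemd:/docker/3601745b3bd54d97'])

    module = mocker.Mock()
    module.run_command.return_value = (0, '', '')
    inst = linux.LinuxVirtual(module)

    facts = inst.get_virtual_facts()
    assert facts['virtualization_type'] == 'docker'
    assert facts['virtualization_role'] == 'guest'
    assert 'docker' in facts['virtualization_tech_guest']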
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,242 |
Porting entry for module Python requirements is ambiguous
|
### Summary
( https://github.com/ansible/ansible/blob/devel/docs/docsite/rst/porting_guides/porting_guide_core_2.13.rst#modules )
I was reading the porting docs' note about modules now requiring Python >= 2.7.
```
Python 2.7 is a hard requirement for module execution in this release.
```
While with the context I'm aware that this should be Python >= 2.7, as written it's slightly ambiguous and could be read as Python == 2.7 rather than Python >= 2.7.
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/porting_guides/porting_guide_core_2.13.rst
### Ansible Version
```console
devel branch
```
### Configuration
```console
n/a
```
### OS / Environment
n/a
### Additional Information
Mentioned it in #ansible-docs:
> samccann > yeah I think we might need to wordsmith that porting guide entry a bit. "Python 2.7 or later is a hard requirement..."
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76242
|
https://github.com/ansible/ansible/pull/76258
|
5c225dc0f5bfa677addeac100a8018df3f3a9db1
|
f6ec773e6ef2e7153ae3b3a992d01957a9811b46
| 2021-11-08T14:57:08Z |
python
| 2021-11-09T14:22:16Z |
docs/docsite/rst/porting_guides/porting_guide_core_2.13.rst
|
.. _porting_2.13_guide_core:
*******************************
Ansible-core 2.13 Porting Guide
*******************************
This section discusses the behavioral changes between ``ansible-core`` 2.12 and ``ansible-core`` 2.13.
It is intended to assist in updating your playbooks, plugins and other parts of your Ansible infrastructure so they will work with this version of Ansible.
We suggest you read this page along with `ansible-core Changelog for 2.13 <https://github.com/ansible/ansible/blob/devel/changelogs/CHANGELOG-v2.13.rst>`_ to understand what updates you may need to make.
This document is part of a collection on porting. The complete list of porting guides can be found at :ref:`porting guides <porting_guides>`.
.. contents:: Topics
Playbook
========
* Templating - You can no longer perform arithmetic and concatenation operations outside of the jinja template. The following statement will need to be rewritten to produce ``[1, 2]``:
.. code-block:: yaml
- name: Prior to 2.13
debug:
msg: '[1] + {{ [2] }}'
- name: 2.13 and forward
debug:
msg: '{{ [1] + [2] }}'
* The return value of the ``__repr__`` method of an undefined variable represented by the ``AnsibleUndefined`` object changed. ``{{ '%r'|format(undefined_variable) }}`` returns ``AnsibleUndefined(hint=None, obj=missing, name='undefined_variable')`` in 2.13 as opposed to just ``AnsibleUndefined`` in versions 2.12 and prior.
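
  For illustration, the difference is visible when formatting an undefined variable's representation (the task below is hypothetical, not taken from the changelog):

  .. code-block:: yaml

     - name: Render the representation of an undefined variable
       debug:
         msg: "{{ '%r' | format(undefined_variable) }}"

  On 2.12 and earlier this prints ``AnsibleUndefined``; on 2.13 it prints ``AnsibleUndefined(hint=None, obj=missing, name='undefined_variable')``.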
Command Line
============
No notable changes
Deprecated
==========
No notable changes
Modules
=======
* Python 2.7 or later is a hard requirement for module execution in this release. Any code utilizing ``ansible.module_utils.basic`` will not function with a lower Python version.
Modules removed
---------------
The following modules no longer exist:
* No notable changes
Deprecation notices
-------------------
No notable changes
Noteworthy module changes
-------------------------
No notable changes
Plugins
=======
No notable changes
Porting custom scripts
======================
No notable changes
Networking
==========
No notable changes
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,180 |
The documentation is missing available data types for role parameters specification
|
### Summary
https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse_roles.html#specification-format
the 'type' entry declares roles param type, the docs says:
``` rst
:type:
* The data type of the option. Default is ``str``.
* If an option is of type ``list``, ``elements`` should be specified.
```
I haven't found any other types, though the example mentions 'int'. Is there bool? dict?
### Issue Type
Documentation Report
### Component Name
user_guide/playbooks_reuse_roles.html#specification-format
### Ansible Version
```console
$ ansible --version
website - latest / 4
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
I suspect it's the same on other OSes
### Additional Information
It would make it easier to write roles
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76180
|
https://github.com/ansible/ansible/pull/76276
|
b6725ec6c994d725cc2190b784a422299fd98385
|
90de24da7be7a0e2d7fd173532222db0b116088d
| 2021-10-30T09:23:23Z |
python
| 2021-11-10T15:06:33Z |
docs/docsite/rst/user_guide/playbooks_reuse_roles.rst
|
.. _playbooks_reuse_roles:
*****
Roles
*****
Roles let you automatically load related vars, files, tasks, handlers, and other Ansible artifacts based on a known file structure. After you group your content in roles, you can easily reuse them and share them with other users.
.. contents::
:local:
Role directory structure
========================
An Ansible role has a defined directory structure with eight main standard directories. You must include at least one of these directories in each role. You can omit any directories the role does not use. For example:
.. code-block:: text
# playbooks
site.yml
webservers.yml
fooservers.yml
roles/
common/
tasks/
handlers/
library/
files/
templates/
vars/
defaults/
meta/
webservers/
tasks/
defaults/
meta/
By default Ansible will look in each directory within a role for a ``main.yml`` file for relevant content (also ``main.yaml`` and ``main``):
- ``tasks/main.yml`` - the main list of tasks that the role executes.
- ``handlers/main.yml`` - handlers, which may be used within or outside this role.
- ``library/my_module.py`` - modules, which may be used within this role (see :ref:`embedding_modules_and_plugins_in_roles` for more information).
- ``defaults/main.yml`` - default variables for the role (see :ref:`playbooks_variables` for more information). These variables have the lowest priority of any variables available, and can be easily overridden by any other variable, including inventory variables.
- ``vars/main.yml`` - other variables for the role (see :ref:`playbooks_variables` for more information).
- ``files/`` - files that the role deploys.
- ``templates/`` - templates that the role deploys.
- ``meta/main.yml`` - metadata for the role, including role dependencies.
You can add other YAML files in some directories. For example, you can place platform-specific tasks in separate files and refer to them in the ``tasks/main.yml`` file:
.. code-block:: yaml
# roles/example/tasks/main.yml
- name: Install the correct web server for RHEL
import_tasks: redhat.yml
when: ansible_facts['os_family']|lower == 'redhat'
- name: Install the correct web server for Debian
import_tasks: debian.yml
when: ansible_facts['os_family']|lower == 'debian'
# roles/example/tasks/redhat.yml
- name: Install web server
ansible.builtin.yum:
name: "httpd"
state: present
# roles/example/tasks/debian.yml
- name: Install web server
ansible.builtin.apt:
name: "apache2"
state: present
Roles may also include modules and other plugin types in a directory called ``library``. For more information, please refer to :ref:`embedding_modules_and_plugins_in_roles` below.
.. _role_search_path:
Storing and finding roles
=========================
By default, Ansible looks for roles in the following locations:
- in collections, if you are using them
- in a directory called ``roles/``, relative to the playbook file
- in the configured :ref:`roles_path <DEFAULT_ROLES_PATH>`. The default search path is ``~/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles``.
- in the directory where the playbook file is located
If you store your roles in a different location, set the :ref:`roles_path <DEFAULT_ROLES_PATH>` configuration option so Ansible can find your roles. Checking shared roles into a single location makes them easier to use in multiple playbooks. See :ref:`intro_configuration` for details about managing settings in ansible.cfg.
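
For example, a minimal ``ansible.cfg`` entry pointing at an additional location (the first path is illustrative) could look like this:

.. code-block:: ini

   [defaults]
   roles_path = ~/custom/roles:~/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles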
Alternatively, you can call a role with a fully qualified path:
.. code-block:: yaml
---
- hosts: webservers
roles:
- role: '/path/to/my/roles/common'
Using roles
===========
You can use roles in three ways:
- at the play level with the ``roles`` option: This is the classic way of using roles in a play.
- at the tasks level with ``include_role``: You can reuse roles dynamically anywhere in the ``tasks`` section of a play using ``include_role``.
- at the tasks level with ``import_role``: You can reuse roles statically anywhere in the ``tasks`` section of a play using ``import_role``.
.. _roles_keyword:
Using roles at the play level
-----------------------------
The classic (original) way to use roles is with the ``roles`` option for a given play:
.. code-block:: yaml
---
- hosts: webservers
roles:
- common
- webservers
When you use the ``roles`` option at the play level, for each role 'x':
- If roles/x/tasks/main.yml exists, Ansible adds the tasks in that file to the play.
- If roles/x/handlers/main.yml exists, Ansible adds the handlers in that file to the play.
- If roles/x/vars/main.yml exists, Ansible adds the variables in that file to the play.
- If roles/x/defaults/main.yml exists, Ansible adds the variables in that file to the play.
- If roles/x/meta/main.yml exists, Ansible adds any role dependencies in that file to the list of roles.
- Any copy, script, template or include tasks (in the role) can reference files in roles/x/{files,templates,tasks}/ (dir depends on task) without having to path them relatively or absolutely.
When you use the ``roles`` option at the play level, Ansible treats the roles as static imports and processes them during playbook parsing. Ansible executes your playbook in this order:
- Any ``pre_tasks`` defined in the play.
- Any handlers triggered by pre_tasks.
- Each role listed in ``roles:``, in the order listed. Any role dependencies defined in the role's ``meta/main.yml`` run first, subject to tag filtering and conditionals. See :ref:`role_dependencies` for more details.
- Any ``tasks`` defined in the play.
- Any handlers triggered by the roles or tasks.
- Any ``post_tasks`` defined in the play.
- Any handlers triggered by post_tasks.
.. note::
If using tags with tasks in a role, be sure to also tag your pre_tasks, post_tasks, and role dependencies and pass those along as well, especially if the pre/post tasks and role dependencies are used for monitoring outage window control or load balancing. See :ref:`tags` for details on adding and using tags.
You can pass other keywords to the ``roles`` option:
.. code-block:: yaml
---
- hosts: webservers
roles:
- common
- role: foo_app_instance
vars:
dir: '/opt/a'
app_port: 5000
tags: typeA
- role: foo_app_instance
vars:
dir: '/opt/b'
app_port: 5001
tags: typeB
When you add a tag to the ``role`` option, Ansible applies the tag to ALL tasks within the role.
When using ``vars:`` within the ``roles:`` section of a playbook, the variables are added to the play variables, making them available to all tasks within the play before and after the role. This behavior can be changed by :ref:`DEFAULT_PRIVATE_ROLE_VARS`.
Including roles: dynamic reuse
------------------------------
You can reuse roles dynamically anywhere in the ``tasks`` section of a play using ``include_role``. While roles added in a ``roles`` section run before any other tasks in a playbook, included roles run in the order they are defined. If there are other tasks before an ``include_role`` task, the other tasks will run first.
To include a role:
.. code-block:: yaml
---
- hosts: webservers
tasks:
- name: Print a message
ansible.builtin.debug:
msg: "this task runs before the example role"
- name: Include the example role
include_role:
name: example
- name: Print a message
ansible.builtin.debug:
msg: "this task runs after the example role"
You can pass other keywords, including variables and tags, when including roles:
.. code-block:: yaml
---
- hosts: webservers
tasks:
- name: Include the foo_app_instance role
include_role:
name: foo_app_instance
vars:
dir: '/opt/a'
app_port: 5000
tags: typeA
...
When you add a :ref:`tag <tags>` to an ``include_role`` task, Ansible applies the tag `only` to the include itself. This means you can pass ``--tags`` to run only selected tasks from the role, if those tasks themselves have the same tag as the include statement. See :ref:`selective_reuse` for details.
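
For instance (an illustrative sketch), you can use the ``apply`` keyword so that the tag is applied to the tasks inside the included role as well as to the include itself:

.. code-block:: yaml

   - name: Include the example role and tag its tasks
     include_role:
       name: example
       apply:
         tags:
           - install
     tags:
       - install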
You can conditionally include a role:
.. code-block:: yaml
---
- hosts: webservers
tasks:
- name: Include the some_role role
include_role:
name: some_role
when: "ansible_facts['os_family'] == 'RedHat'"
Importing roles: static reuse
-----------------------------
You can reuse roles statically anywhere in the ``tasks`` section of a play using ``import_role``. The behavior is the same as using the ``roles`` keyword. For example:
.. code-block:: yaml
---
- hosts: webservers
tasks:
- name: Print a message
ansible.builtin.debug:
msg: "before we run our role"
- name: Import the example role
import_role:
name: example
- name: Print a message
ansible.builtin.debug:
msg: "after we ran our role"
You can pass other keywords, including variables and tags, when importing roles:
.. code-block:: yaml
---
- hosts: webservers
tasks:
- name: Import the foo_app_instance role
import_role:
name: foo_app_instance
vars:
dir: '/opt/a'
app_port: 5000
...
When you add a tag to an ``import_role`` statement, Ansible applies the tag to `all` tasks within the role. See :ref:`tag_inheritance` for details.
Role argument validation
========================
Beginning with version 2.11, you may choose to enable role argument validation based on an argument
specification. This specification is defined in the ``meta/argument_specs.yml`` file (or with the ``.yaml``
file extension). When this argument specification is defined, a new task is inserted at the beginning of role execution
that will validate the parameters supplied for the role against the specification. If the parameters fail
validation, the role will fail execution.
.. note::
Ansible also supports role specifications defined in the role ``meta/main.yml`` file, as well. However,
any role that defines the specs within this file will not work on versions below 2.11. For this reason,
we recommend using the ``meta/argument_specs.yml`` file to maintain backward compatibility.
.. note::
When role argument validation is used on a role that has defined :ref:`dependencies <role_dependencies>`,
then validation on those dependencies will run before the dependent role, even if argument validation fails
for the dependent role.
Specification format
--------------------
The role argument specification must be defined in a top-level ``argument_specs`` block within the
role ``meta/argument_specs.yml`` file. All fields are lower-case.
:entry-point-name:
* The name of the role entry point.
* This should be ``main`` in the case of an unspecified entry point.
* This will be the base name of the tasks file to execute, with no ``.yml`` or ``.yaml`` file extension.
:short_description:
* A short, one-line description of the entry point.
* The ``short_description`` is displayed by ``ansible-doc -t role -l``.
:description:
* A longer description that may contain multiple lines.
:author:
* Name of the entry point authors.
* Use a multi-line list if there is more than one author.
:options:
* Options are often called "parameters" or "arguments". This section defines those options.
* For each role option (argument), you may include:
:option-name:
* The name of the option/argument.
:description:
* Detailed explanation of what this option does. It should be written in full sentences.
:type:
* The data type of the option. Default is ``str``.
* If an option is of type ``list``, ``elements`` should be specified.
:required:
* Only needed if ``true``.
* If missing, the option is not required.
:default:
* If ``required`` is false/missing, ``default`` may be specified (assumed 'null' if missing).
* Ensure that the default value in the docs matches the default value in the code. The actual
default for the role variable will always come from ``defaults/main.yml``.
* The default field must not be listed as part of the description, unless it requires additional information or conditions.
* If the option is a boolean value, you can use any of the boolean values recognized by Ansible:
(such as true/false or yes/no). Choose the one that reads better in the context of the option.
:choices:
* List of option values.
* Should be absent if empty.
:elements:
* Specifies the data type for list elements when type is ``list``.
:options:
* If this option takes a dict or list of dicts, you can define the structure here.
Sample specification
--------------------
.. code-block:: yaml
# roles/myapp/meta/argument_specs.yml
---
argument_specs:
# roles/myapp/tasks/main.yml entry point
main:
short_description: The main entry point for the myapp role.
options:
myapp_int:
type: "int"
required: false
default: 42
description: "The integer value, defaulting to 42."
myapp_str:
type: "str"
required: true
description: "The string value"
     # roles/myapp/tasks/alternate.yml entry point
alternate:
short_description: The alternate entry point for the myapp role.
options:
myapp_int:
type: "int"
required: false
default: 1024
description: "The integer value, defaulting to 1024."
.. _run_role_twice:
Running a role multiple times in one playbook
=============================================
Ansible only executes each role once, even if you define it multiple times, unless the parameters defined on the role are different for each definition. For example, Ansible only runs the role ``foo`` once in a play like this:
.. code-block:: yaml
---
- hosts: webservers
roles:
- foo
- bar
- foo
You have two options to force Ansible to run a role more than once.
Passing different parameters
----------------------------
If you pass different parameters in each role definition, Ansible runs the role more than once. Providing different variable values is not the same as passing different role parameters. You must use the ``roles`` keyword for this behavior, since ``import_role`` and ``include_role`` do not accept role parameters.
This playbook runs the ``foo`` role twice:
.. code-block:: yaml
---
- hosts: webservers
roles:
- { role: foo, message: "first" }
- { role: foo, message: "second" }
This syntax also runs the ``foo`` role twice:
.. code-block:: yaml
---
- hosts: webservers
roles:
- role: foo
message: "first"
- role: foo
message: "second"
In these examples, Ansible runs ``foo`` twice because each role definition has different parameters.
Using ``allow_duplicates: true``
--------------------------------
Add ``allow_duplicates: true`` to the ``meta/main.yml`` file for the role:
.. code-block:: yaml
# playbook.yml
---
- hosts: webservers
roles:
- foo
- foo
# roles/foo/meta/main.yml
---
allow_duplicates: true
In this example, Ansible runs ``foo`` twice because we have explicitly enabled it to do so.
.. _role_dependencies:
Using role dependencies
=======================
Role dependencies let you automatically pull in other roles when using a role. Ansible does not execute role dependencies when you include or import a role. You must use the ``roles`` keyword if you want Ansible to execute role dependencies.
Role dependencies are prerequisites, not true dependencies. The roles do not have a parent/child relationship. Ansible loads all listed roles, runs the roles listed under ``dependencies`` first, then runs the role that lists them. The play object is the parent of all roles, including roles called by a ``dependencies`` list.
Role dependencies are stored in the ``meta/main.yml`` file within the role directory. This file should contain a list of roles and parameters to insert before the specified role. For example:
.. code-block:: yaml
# roles/myapp/meta/main.yml
---
dependencies:
- role: common
vars:
some_parameter: 3
- role: apache
vars:
apache_port: 80
- role: postgres
vars:
dbname: blarg
other_parameter: 12
Ansible always executes roles listed in ``dependencies`` before the role that lists them. Ansible executes this pattern recursively when you use the ``roles`` keyword. For example, if you list role ``foo`` under ``roles:``, role ``foo`` lists role ``bar`` under ``dependencies`` in its meta/main.yml file, and role ``bar`` lists role ``baz`` under ``dependencies`` in its meta/main.yml, Ansible executes ``baz``, then ``bar``, then ``foo``.
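As a sketch, that recursive chain could be laid out like this (the role names ``foo``, ``bar``, and ``baz`` come from the example in the previous paragraph):

.. code-block:: yaml

# roles/foo/meta/main.yml
---
dependencies:
  - role: bar

# roles/bar/meta/main.yml
---
dependencies:
  - role: baz

# roles/baz/meta/main.yml declares no dependencies of its own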
Running role dependencies multiple times in one play
----------------------------------------------------
Ansible treats duplicate role dependencies like duplicate roles listed under ``roles:``: Ansible only executes role dependencies once, even if defined multiple times, unless the parameters, tags, or when clause defined on the role are different for each definition. If two roles in a play both list a third role as a dependency, Ansible only runs that role dependency once, unless you pass different parameters, tags, when clause, or use ``allow_duplicates: true`` in the role you want to run multiple times. See :ref:`Galaxy role dependencies <galaxy_dependencies>` for more details.
.. note::
Role deduplication does not consult the invocation signature of parent roles. Additionally, when using ``vars:`` instead of role params, there is a side effect of changing variable scoping. Using ``vars:`` results in those variables being scoped at the play level. In the below example, using ``vars:`` would cause ``n`` to be defined as ``4`` through the entire play, including roles called before it.
In addition to the above, users should be aware that role de-duplication occurs before variable evaluation. This means that :term:`Lazy Evaluation` may make seemingly different role invocations effectively identical, preventing the role from running more than once.
For example, a role named ``car`` depends on a role named ``wheel`` as follows:
.. code-block:: yaml
---
dependencies:
- role: wheel
n: 1
- role: wheel
n: 2
- role: wheel
n: 3
- role: wheel
n: 4
And the ``wheel`` role depends on two roles: ``tire`` and ``brake``. The ``meta/main.yml`` for wheel would then contain the following:
.. code-block:: yaml
---
dependencies:
- role: tire
- role: brake
And the ``meta/main.yml`` for ``tire`` and ``brake`` would contain the following:
.. code-block:: yaml
---
allow_duplicates: true
The resulting order of execution would be as follows:
.. code-block:: text
tire(n=1)
brake(n=1)
wheel(n=1)
tire(n=2)
brake(n=2)
wheel(n=2)
...
car
To use ``allow_duplicates: true`` with role dependencies, you must specify it for the role listed under ``dependencies``, not for the role that lists it. In the example above, ``allow_duplicates: true`` appears in the ``meta/main.yml`` of the ``tire`` and ``brake`` roles. The ``wheel`` role does not require ``allow_duplicates: true``, because each instance defined by ``car`` uses different parameter values.
.. note::
See :ref:`playbooks_variables` for details on how Ansible chooses among variable values defined in different places (variable inheritance and scope).
Also, deduplication happens only at the play level, so multiple plays in the same playbook may rerun the roles.
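For example, this playbook (with a hypothetical role ``foo``) runs the role once in each play, because deduplication does not span plays:

.. code-block:: yaml

---
- hosts: webservers
  roles:
    - foo

- hosts: webservers
  roles:
    - foo   # runs again; deduplication is per play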
.. _embedding_modules_and_plugins_in_roles:
Embedding modules and plugins in roles
======================================
If you write a custom module (see :ref:`developing_modules`) or a plugin (see :ref:`developing_plugins`), you might wish to distribute it as part of a role. For example, if you write a module that helps configure your company's internal software, and you want other people in your organization to use this module, but you do not want to tell everyone how to configure their Ansible library path, you can include the module in your internal_config role.
To add a module or a plugin to a role:
Alongside the 'tasks' and 'handlers' structure of a role, add a directory named 'library' and then include the module directly inside the 'library' directory.
Assuming you had this:
.. code-block:: text
roles/
my_custom_modules/
library/
module1
module2
The module will be usable in the role itself, as well as any roles that are called *after* this role, as follows:
.. code-block:: yaml
---
- hosts: webservers
roles:
- my_custom_modules
- some_other_role_using_my_custom_modules
- yet_another_role_using_my_custom_modules
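As a minimal sketch, the role could call its own embedded module directly in a task (``module1`` is the hypothetical module from the tree above, and ``some_option`` is an assumed parameter):

.. code-block:: yaml

# roles/my_custom_modules/tasks/main.yml
---
- name: Call the embedded module
  module1:
    some_option: value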
If necessary, you can also embed a module in a role to modify a module in Ansible's core distribution. For example, you can use the development version of a particular module before it is released in production releases by copying the module and embedding the copy in a role. Use this approach with caution, as API signatures may change in core components, and this workaround is not guaranteed to work.
The same mechanism can be used to embed and distribute plugins in a role, using the same schema. For example, for a filter plugin:
.. code-block:: text
roles/
my_custom_filter/
filter_plugins/
filter1
filter2
These filters can then be used in a Jinja template in any role called after 'my_custom_filter'.
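For example, a later role could apply the embedded filter in a task or template (a sketch; ``filter1`` is the hypothetical filter from the tree above, and ``my_value`` is an assumed variable):

.. code-block:: yaml

# roles/some_other_role/tasks/main.yml
---
- name: Use the embedded filter plugin
  debug:
    msg: "{{ my_value | filter1 }}"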
Sharing roles: Ansible Galaxy
=============================
`Ansible Galaxy <https://galaxy.ansible.com>`_ is a free site for finding, downloading, rating, and reviewing all kinds of community-developed Ansible roles and can be a great way to get a jumpstart on your automation projects.
The client ``ansible-galaxy`` is included in Ansible. The Galaxy client allows you to download roles from Ansible Galaxy, and also provides an excellent default framework for creating your own roles.
Read the `Ansible Galaxy documentation <https://galaxy.ansible.com/docs/>`_ page for more information.
.. seealso::
:ref:`ansible_galaxy`
How to create new roles, share roles on Galaxy, role management
:ref:`yaml_syntax`
Learn about YAML syntax
:ref:`working_with_playbooks`
Review the basic Playbook language features
:ref:`playbooks_best_practices`
Tips and tricks for playbooks
:ref:`playbooks_variables`
Variables in playbooks
:ref:`playbooks_conditionals`
Conditionals in playbooks
:ref:`playbooks_loops`
Loops in playbooks
:ref:`tags`
Using tags to select or skip roles/tasks in long playbooks
:ref:`list_of_collections`
Browse existing collections, modules, and plugins
:ref:`developing_modules`
Extending Ansible by writing your own modules
`GitHub Ansible examples <https://github.com/ansible/ansible-examples>`_
Complete playbook files from the GitHub project source
`Mailing List <https://groups.google.com/group/ansible-project>`_
Questions? Help? Ideas? Stop by the list on Google Groups
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,827 |
psrp contains deprecated call to be removed in 2.13
|
##### SUMMARY
psrp contains a call to Display.deprecated or AnsibleModule.deprecate and is scheduled for removal
```
lib/ansible/plugins/connection/psrp.py:481:12: ansible-deprecated-version: Deprecated version ('2.13') found in call to Display.deprecated or AnsibleModule.deprecate
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/plugins/connection/psrp.py
```
##### ANSIBLE VERSION
```
2.13
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
|
https://github.com/ansible/ansible/issues/75827
|
https://github.com/ansible/ansible/pull/76252
|
a734da5e821c53075829f1d6b82c2e022db9d606
|
fb55d878db0a114c49bd3b0384f4f5c9593ea536
| 2021-09-29T18:15:41Z |
python
| 2021-11-12T00:57:28Z |
changelogs/fragments/psrp-put_file-dep.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,827 |
psrp contains deprecated call to be removed in 2.13
|
##### SUMMARY
psrp contains a call to Display.deprecated or AnsibleModule.deprecate and is scheduled for removal
```
lib/ansible/plugins/connection/psrp.py:481:12: ansible-deprecated-version: Deprecated version ('2.13') found in call to Display.deprecated or AnsibleModule.deprecate
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/plugins/connection/psrp.py
```
##### ANSIBLE VERSION
```
2.13
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
|
https://github.com/ansible/ansible/issues/75827
|
https://github.com/ansible/ansible/pull/76252
|
a734da5e821c53075829f1d6b82c2e022db9d606
|
fb55d878db0a114c49bd3b0384f4f5c9593ea536
| 2021-09-29T18:15:41Z |
python
| 2021-11-12T00:57:28Z |
lib/ansible/plugins/connection/psrp.py
|
# Copyright (c) 2018 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = """
author: Ansible Core Team
name: psrp
short_description: Run tasks over Microsoft PowerShell Remoting Protocol
description:
- Run commands or put/fetch on a target via PSRP (WinRM plugin)
- This is similar to the I(winrm) connection plugin which uses the same
underlying transport but instead runs in a PowerShell interpreter.
version_added: "2.7"
requirements:
- pypsrp>=0.4.0 (Python library)
extends_documentation_fragment:
- connection_pipelining
options:
# transport options
remote_addr:
description:
- The hostname or IP address of the remote host.
default: inventory_hostname
type: str
vars:
- name: ansible_host
- name: ansible_psrp_host
remote_user:
description:
- The user to log in as.
type: str
vars:
- name: ansible_user
- name: ansible_psrp_user
remote_password:
description: Authentication password for the C(remote_user). Can be supplied as CLI option.
type: str
vars:
- name: ansible_password
- name: ansible_winrm_pass
- name: ansible_winrm_password
aliases:
- password # Needed for --ask-pass to come through on delegation
port:
description:
- The port for PSRP to connect on the remote target.
- Default is C(5986) if I(protocol) is not defined or is C(https),
otherwise the port is C(5985).
type: int
vars:
- name: ansible_port
- name: ansible_psrp_port
protocol:
description:
- Set the protocol to use for the connection.
- Default is C(https) if I(port) is not defined or I(port) is not C(5985).
choices:
- http
- https
type: str
vars:
- name: ansible_psrp_protocol
path:
description:
- The URI path to connect to.
type: str
vars:
- name: ansible_psrp_path
default: 'wsman'
auth:
description:
- The authentication protocol to use when authenticating the remote user.
- The default, C(negotiate), will attempt to use C(Kerberos) if it is
available and fall back to C(NTLM) if it isn't.
type: str
vars:
- name: ansible_psrp_auth
choices:
- basic
- certificate
- negotiate
- kerberos
- ntlm
- credssp
default: negotiate
cert_validation:
description:
- Whether to validate the remote server's certificate or not.
- Set to C(ignore) to not validate any certificates.
- I(ca_cert) can be set to the path of a PEM certificate chain to
use in the validation.
choices:
- validate
- ignore
default: validate
type: str
vars:
- name: ansible_psrp_cert_validation
ca_cert:
description:
- The path to a PEM certificate chain to use when validating the server's
certificate.
- This value is ignored if I(cert_validation) is set to C(ignore).
type: path
vars:
- name: ansible_psrp_cert_trust_path
- name: ansible_psrp_ca_cert
aliases: [ cert_trust_path ]
connection_timeout:
description:
- The connection timeout for making the request to the remote host.
- This is measured in seconds.
type: int
vars:
- name: ansible_psrp_connection_timeout
default: 30
read_timeout:
description:
- The read timeout for receiving data from the remote host.
- This value must always be greater than I(operation_timeout).
- This option requires pypsrp >= 0.3.
- This is measured in seconds.
type: int
vars:
- name: ansible_psrp_read_timeout
default: 30
version_added: '2.8'
reconnection_retries:
description:
- The number of retries on connection errors.
type: int
vars:
- name: ansible_psrp_reconnection_retries
default: 0
version_added: '2.8'
reconnection_backoff:
description:
- The backoff time to use in between reconnection attempts.
(First sleeps X, then sleeps 2*X, then sleeps 4*X, ...)
- This is measured in seconds.
- The C(ansible_psrp_reconnection_backoff) variable was added in Ansible
2.9.
type: int
vars:
- name: ansible_psrp_connection_backoff
- name: ansible_psrp_reconnection_backoff
default: 2
version_added: '2.8'
message_encryption:
description:
- Controls the message encryption settings, this is different from TLS
encryption when I(ansible_psrp_protocol) is C(https).
- Only the auth protocols C(negotiate), C(kerberos), C(ntlm), and
C(credssp) can do message encryption. The other authentication protocols
only support encryption when C(protocol) is set to C(https).
- C(auto) means message encryption is only used when not using
TLS/HTTPS.
- C(always) is the same as C(auto) but message encryption is always used
even when running over TLS/HTTPS.
- C(never) disables any encryption checks that are in place when running
over HTTP and disables any authentication encryption processes.
type: str
vars:
- name: ansible_psrp_message_encryption
choices:
- auto
- always
- never
default: auto
proxy:
description:
- Set the proxy URL to use when connecting to the remote host.
vars:
- name: ansible_psrp_proxy
type: str
ignore_proxy:
description:
- Will disable any environment proxy settings and connect directly to the
remote host.
- This option is ignored if C(proxy) is set.
vars:
- name: ansible_psrp_ignore_proxy
type: bool
default: 'no'
# auth options
certificate_key_pem:
description:
- The local path to an X509 certificate key to use with certificate auth.
type: path
vars:
- name: ansible_psrp_certificate_key_pem
certificate_pem:
description:
- The local path to an X509 certificate to use with certificate auth.
type: path
vars:
- name: ansible_psrp_certificate_pem
credssp_auth_mechanism:
description:
- The sub authentication mechanism to use with CredSSP auth.
- When C(auto), both Kerberos and NTLM are attempted, with Kerberos being
preferred.
type: str
choices:
- auto
- kerberos
- ntlm
default: auto
vars:
- name: ansible_psrp_credssp_auth_mechanism
credssp_disable_tlsv1_2:
description:
- Disables the use of TLSv1.2 on the CredSSP authentication channel.
- This should not be set to C(yes) unless dealing with a host that does not
have TLSv1.2.
default: no
type: bool
vars:
- name: ansible_psrp_credssp_disable_tlsv1_2
credssp_minimum_version:
description:
- The minimum CredSSP server authentication version that will be accepted.
- Set to C(5) to ensure the server has been patched and is not vulnerable
to CVE 2018-0886.
default: 2
type: int
vars:
- name: ansible_psrp_credssp_minimum_version
negotiate_delegate:
description:
- Allow the remote user the ability to delegate its credentials to another
server, i.e. credential delegation.
- Only valid when Kerberos was the negotiated auth or was explicitly set as
the authentication.
- Ignored when NTLM was the negotiated auth.
type: bool
vars:
- name: ansible_psrp_negotiate_delegate
negotiate_hostname_override:
description:
- Override the remote hostname when searching for the host in the Kerberos
lookup.
- This allows Ansible to connect over IP but authenticate with the remote
server using its DNS name.
- Only valid when Kerberos was the negotiated auth or was explicitly set as
the authentication.
- Ignored when NTLM was the negotiated auth.
type: str
vars:
- name: ansible_psrp_negotiate_hostname_override
negotiate_send_cbt:
description:
- Send the Channel Binding Token (CBT) structure when authenticating.
- CBT is used to provide extra protection against Man in the Middle C(MitM)
attacks by binding the outer transport channel to the auth channel.
- CBT is not used when using just C(HTTP), only C(HTTPS).
default: yes
type: bool
vars:
- name: ansible_psrp_negotiate_send_cbt
negotiate_service:
description:
- Override the service part of the SPN used during Kerberos authentication.
- Only valid when Kerberos was the negotiated auth or was explicitly set as
the authentication.
- Ignored when NTLM was the negotiated auth.
default: WSMAN
type: str
vars:
- name: ansible_psrp_negotiate_service
# protocol options
operation_timeout:
description:
- Sets the WSMan timeout for each operation.
- This is measured in seconds.
- This should not exceed the value for C(connection_timeout).
type: int
vars:
- name: ansible_psrp_operation_timeout
default: 20
max_envelope_size:
description:
- Sets the maximum size of each WSMan message sent to the remote host.
- This is measured in bytes.
- Defaults to C(150KiB) for compatibility with older hosts.
type: int
vars:
- name: ansible_psrp_max_envelope_size
default: 153600
configuration_name:
description:
- The name of the PowerShell configuration endpoint to connect to.
type: str
vars:
- name: ansible_psrp_configuration_name
default: Microsoft.PowerShell
"""
import base64
import json
import logging
import os
from ansible import constants as C
from ansible.errors import AnsibleConnectionFailure, AnsibleError
from ansible.errors import AnsibleFileNotFound
from ansible.module_utils.parsing.convert_bool import boolean
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.plugins.connection import ConnectionBase
from ansible.plugins.shell.powershell import _common_args
from ansible.utils.display import Display
from ansible.utils.hashing import sha1
HAS_PYPSRP = True
PYPSRP_IMP_ERR = None
try:
import pypsrp
from pypsrp.complex_objects import GenericComplexObject, PSInvocationState, RunspacePoolState
from pypsrp.exceptions import AuthenticationError, WinRMError
from pypsrp.host import PSHost, PSHostUserInterface
from pypsrp.powershell import PowerShell, RunspacePool
from pypsrp.shell import Process, SignalCode, WinRS
from pypsrp.wsman import WSMan, AUTH_KWARGS
from requests.exceptions import ConnectionError, ConnectTimeout
except ImportError as err:
HAS_PYPSRP = False
PYPSRP_IMP_ERR = err
NEWER_PYPSRP = True
try:
import pypsrp.pwsh_scripts
except ImportError:
NEWER_PYPSRP = False
display = Display()
class Connection(ConnectionBase):
transport = 'psrp'
module_implementation_preferences = ('.ps1', '.exe', '')
allow_executable = False
has_pipelining = True
allow_extras = True
def __init__(self, *args, **kwargs):
self.always_pipeline_modules = True
self.has_native_async = True
self.runspace = None
self.host = None
self._last_pipeline = False
self._shell_type = 'powershell'
super(Connection, self).__init__(*args, **kwargs)
if not C.DEFAULT_DEBUG:
logging.getLogger('pypsrp').setLevel(logging.WARNING)
logging.getLogger('requests_credssp').setLevel(logging.INFO)
logging.getLogger('urllib3').setLevel(logging.INFO)
def _connect(self):
if not HAS_PYPSRP:
raise AnsibleError("pypsrp or dependencies are not installed: %s"
% to_native(PYPSRP_IMP_ERR))
super(Connection, self)._connect()
self._build_kwargs()
display.vvv("ESTABLISH PSRP CONNECTION FOR USER: %s ON PORT %s TO %s" %
(self._psrp_user, self._psrp_port, self._psrp_host),
host=self._psrp_host)
if not self.runspace:
connection = WSMan(**self._psrp_conn_kwargs)
# create our pseudo host to capture the exit code and host output
host_ui = PSHostUserInterface()
self.host = PSHost(None, None, False, "Ansible PSRP Host", None,
host_ui, None)
self.runspace = RunspacePool(
connection, host=self.host,
configuration_name=self._psrp_configuration_name
)
display.vvvvv(
"PSRP OPEN RUNSPACE: auth=%s configuration=%s endpoint=%s" %
(self._psrp_auth, self._psrp_configuration_name,
connection.transport.endpoint), host=self._psrp_host
)
try:
self.runspace.open()
except AuthenticationError as e:
raise AnsibleConnectionFailure("failed to authenticate with "
"the server: %s" % to_native(e))
except WinRMError as e:
raise AnsibleConnectionFailure(
"psrp connection failure during runspace open: %s"
% to_native(e)
)
except (ConnectionError, ConnectTimeout) as e:
raise AnsibleConnectionFailure(
"Failed to connect to the host via PSRP: %s"
% to_native(e)
)
self._connected = True
self._last_pipeline = None
return self
def reset(self):
if not self._connected:
self.runspace = None
return
# Try our best to ensure the runspace is closed to free up server side resources
try:
self.close()
except Exception as e:
# There's a good chance the connection was already closed so just log the error and move on
display.debug("PSRP reset - failed to closed runspace: %s" % to_text(e))
display.vvvvv("PSRP: Reset Connection", host=self._psrp_host)
self.runspace = None
self._connect()
def exec_command(self, cmd, in_data=None, sudoable=True):
super(Connection, self).exec_command(cmd, in_data=in_data,
sudoable=sudoable)
if cmd.startswith(" ".join(_common_args) + " -EncodedCommand"):
# This is a PowerShell script encoded by the shell plugin, we will
# decode the script and execute it in the runspace instead of
# starting a new interpreter to save on time
b_command = base64.b64decode(cmd.split(" ")[-1])
script = to_text(b_command, 'utf-16-le')
in_data = to_text(in_data, errors="surrogate_or_strict", nonstring="passthru")
if in_data and in_data.startswith(u"#!"):
# ANSIBALLZ wrapper, we need to get the interpreter and execute
# that as the script - note this won't work as basic.py relies
# on packages not available on Windows, once fixed we can enable
# this path
interpreter = to_native(in_data.splitlines()[0][2:])
# script = "$input | &'%s' -" % interpreter
# in_data = to_text(in_data)
raise AnsibleError("cannot run the interpreter '%s' on the psrp "
"connection plugin" % interpreter)
# call build_module_command to get the bootstrap wrapper text
bootstrap_wrapper = self._shell.build_module_command('', '', '')
if bootstrap_wrapper == cmd:
# Do not display to the user each invocation of the bootstrap wrapper
display.vvv("PSRP: EXEC (via pipeline wrapper)")
else:
display.vvv("PSRP: EXEC %s" % script, host=self._psrp_host)
else:
# In other cases we want to execute the cmd as the script. We add on the 'exit $LASTEXITCODE' to ensure the
# rc is propagated back to the connection plugin.
script = to_text(u"%s\nexit $LASTEXITCODE" % cmd)
display.vvv(u"PSRP: EXEC %s" % script, host=self._psrp_host)
rc, stdout, stderr = self._exec_psrp_script(script, in_data)
return rc, stdout, stderr
def put_file(self, in_path, out_path):
super(Connection, self).put_file(in_path, out_path)
out_path = self._shell._unquote(out_path)
display.vvv("PUT %s TO %s" % (in_path, out_path), host=self._psrp_host)
# The new method that uses PSRP directly relies on a feature added in pypsrp 0.4.0 (release 2019-09-19). In
# case someone still has an older version present we warn them asking to update their library to a newer
# release and fallback to the old WSMV shell.
if NEWER_PYPSRP:
rc, stdout, stderr, local_sha1 = self._put_file_new(in_path, out_path)
else:
display.deprecated("Older pypsrp library detected, please update to pypsrp>=0.4.0 to use the newer copy "
"method over PSRP.", version="2.13", collection_name='ansible.builtin')
rc, stdout, stderr, local_sha1 = self._put_file_old(in_path, out_path)
if rc != 0:
raise AnsibleError(to_native(stderr))
put_output = json.loads(to_text(stdout))
remote_sha1 = put_output.get("sha1")
if not remote_sha1:
raise AnsibleError("Remote sha1 was not returned, stdout: '%s', stderr: '%s'"
% (to_native(stdout), to_native(stderr)))
if not remote_sha1 == local_sha1:
raise AnsibleError("Remote sha1 hash %s does not match local hash %s"
% (to_native(remote_sha1), to_native(local_sha1)))
def _put_file_old(self, in_path, out_path):
script = u'''begin {
$ErrorActionPreference = "Stop"
$ProgressPreference = 'SilentlyContinue'
$path = '%s'
$fd = [System.IO.File]::Create($path)
$algo = [System.Security.Cryptography.SHA1CryptoServiceProvider]::Create()
$bytes = @()
} process {
$bytes = [System.Convert]::FromBase64String($input)
$algo.TransformBlock($bytes, 0, $bytes.Length, $bytes, 0) > $null
$fd.Write($bytes, 0, $bytes.Length)
} end {
$fd.Close()
$algo.TransformFinalBlock($bytes, 0, 0) > $null
$hash = [System.BitConverter]::ToString($algo.Hash)
$hash = $hash.Replace("-", "").ToLowerInvariant()
Write-Output -InputObject "{`"sha1`":`"$hash`"}"
}''' % out_path
cmd_parts = self._shell._encode_script(script, as_list=True,
strict_mode=False,
preserve_rc=False)
b_in_path = to_bytes(in_path, errors='surrogate_or_strict')
if not os.path.exists(b_in_path):
raise AnsibleFileNotFound('file or module does not exist: "%s"'
% to_native(in_path))
in_size = os.path.getsize(b_in_path)
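# base64 encoding inflates data by a factor of 4/3, so size each raw read
# at 3/4 of the max payload so the encoded chunk still fits in one message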
buffer_size = int(self.runspace.connection.max_payload_size / 4 * 3)
sha1_hash = sha1()
# copying files is faster when using the raw WinRM shell and not PSRP
# we will create a WinRS shell just for this process
# TODO: speed this up as there is overhead creating a shell for this
with WinRS(self.runspace.connection, codepage=65001) as shell:
process = Process(shell, cmd_parts[0], cmd_parts[1:])
process.begin_invoke()
offset = 0
with open(b_in_path, 'rb') as src_file:
for data in iter((lambda: src_file.read(buffer_size)), b""):
offset += len(data)
display.vvvvv("PSRP PUT %s to %s (offset=%d, size=%d" %
(in_path, out_path, offset, len(data)),
host=self._psrp_host)
b64_data = base64.b64encode(data) + b"\r\n"
process.send(b64_data, end=(src_file.tell() == in_size))
sha1_hash.update(data)
# the file was empty, so send an empty buffer to mark the end of input
if offset == 0:
process.send(b"", end=True)
process.end_invoke()
process.signal(SignalCode.CTRL_C)
return process.rc, process.stdout, process.stderr, sha1_hash.hexdigest()
def _put_file_new(self, in_path, out_path):
copy_script = '''begin {
$ErrorActionPreference = "Stop"
$WarningPreference = "Continue"
$path = $MyInvocation.UnboundArguments[0]
$fd = [System.IO.File]::Create($path)
$algo = [System.Security.Cryptography.SHA1CryptoServiceProvider]::Create()
$bytes = @()
$bindingFlags = [System.Reflection.BindingFlags]'NonPublic, Instance'
Function Get-Property {
<#
.SYNOPSIS
Gets the private/internal property specified of the object passed in.
#>
Param (
[Parameter(Mandatory=$true, ValueFromPipeline=$true)]
[System.Object]
$Object,
[Parameter(Mandatory=$true, Position=1)]
[System.String]
$Name
)
$Object.GetType().GetProperty($Name, $bindingFlags).GetValue($Object, $null)
}
Function Set-Property {
<#
.SYNOPSIS
Sets the private/internal property specified on the object passed in.
#>
Param (
[Parameter(Mandatory=$true, ValueFromPipeline=$true)]
[System.Object]
$Object,
[Parameter(Mandatory=$true, Position=1)]
[System.String]
$Name,
[Parameter(Mandatory=$true, Position=2)]
[AllowNull()]
[System.Object]
$Value
)
$Object.GetType().GetProperty($Name, $bindingFlags).SetValue($Object, $Value, $null)
}
Function Get-Field {
<#
.SYNOPSIS
Gets the private/internal field specified of the object passed in.
#>
Param (
[Parameter(Mandatory=$true, ValueFromPipeline=$true)]
[System.Object]
$Object,
[Parameter(Mandatory=$true, Position=1)]
[System.String]
$Name
)
$Object.GetType().GetField($Name, $bindingFlags).GetValue($Object)
}
# MaximumAllowedMemory is required to be set so we can send input data that exceeds the limit on a PS
# Runspace. We use reflection to access/set this property as it is not accessible publicly. This is not ideal
# but works on all PowerShell versions I've tested with. We originally used WinRS to send the raw bytes to the
# host but this falls flat if someone is using a custom PS configuration name so this is a workaround. This
# isn't required for smaller files so if it fails we ignore the error and hope it wasn't needed.
# https://github.com/PowerShell/PowerShell/blob/c8e72d1e664b1ee04a14f226adf655cced24e5f0/src/System.Management.Automation/engine/serialization.cs#L325
try {
$Host | Get-Property 'ExternalHost' | `
Get-Field '_transportManager' | `
Get-Property 'Fragmentor' | `
Get-Property 'DeserializationContext' | `
Set-Property 'MaximumAllowedMemory' $null
} catch {}
}
process {
$bytes = [System.Convert]::FromBase64String($input)
$algo.TransformBlock($bytes, 0, $bytes.Length, $bytes, 0) > $null
$fd.Write($bytes, 0, $bytes.Length)
}
end {
$fd.Close()
$algo.TransformFinalBlock($bytes, 0, 0) > $null
$hash = [System.BitConverter]::ToString($algo.Hash).Replace('-', '').ToLowerInvariant()
Write-Output -InputObject "{`"sha1`":`"$hash`"}"
}
'''
# Get the buffer size of each fragment to send, subtract 82 for the fragment, message, and other header info
# fields that PSRP adds. Adjust to size of the base64 encoded bytes length.
buffer_size = int((self.runspace.connection.max_payload_size - 82) / 4 * 3)
sha1_hash = sha1()
b_in_path = to_bytes(in_path, errors='surrogate_or_strict')
if not os.path.exists(b_in_path):
raise AnsibleFileNotFound('file or module does not exist: "%s"' % to_native(in_path))
def read_gen():
offset = 0
with open(b_in_path, 'rb') as src_fd:
for b_data in iter((lambda: src_fd.read(buffer_size)), b""):
data_len = len(b_data)
offset += data_len
sha1_hash.update(b_data)
# PSRP technically supports sending raw bytes but that method requires a larger CLIXML message.
# Sending base64 is still more efficient here.
display.vvvvv("PSRP PUT %s to %s (offset=%d, size=%d" % (in_path, out_path, offset, data_len),
host=self._psrp_host)
b64_data = base64.b64encode(b_data)
yield [to_text(b64_data)]
if offset == 0: # empty file
yield [""]
rc, stdout, stderr = self._exec_psrp_script(copy_script, read_gen(), arguments=[out_path])
return rc, stdout, stderr, sha1_hash.hexdigest()
def fetch_file(self, in_path, out_path):
super(Connection, self).fetch_file(in_path, out_path)
display.vvv("FETCH %s TO %s" % (in_path, out_path),
host=self._psrp_host)
in_path = self._shell._unquote(in_path)
out_path = out_path.replace('\\', '/')
# because we are dealing with base64 data we need to get the max size
# of the bytes that the base64 size would equal
max_b64_size = int(self.runspace.connection.max_payload_size -
(self.runspace.connection.max_payload_size / 4 * 3))
buffer_size = max_b64_size - (max_b64_size % 1024)
# setup the file stream with read only mode
setup_script = '''$ErrorActionPreference = "Stop"
$path = '%s'
if (Test-Path -Path $path -PathType Leaf) {
$fs = New-Object -TypeName System.IO.FileStream -ArgumentList @(
$path,
[System.IO.FileMode]::Open,
[System.IO.FileAccess]::Read,
[System.IO.FileShare]::Read
)
$buffer_size = %d
} elseif (Test-Path -Path $path -PathType Container) {
Write-Output -InputObject "[DIR]"
} else {
Write-Error -Message "$path does not exist"
$host.SetShouldExit(1)
}''' % (self._shell._escape(in_path), buffer_size)
# read the file stream at the offset and return the b64 string
read_script = '''$ErrorActionPreference = "Stop"
$fs.Seek(%d, [System.IO.SeekOrigin]::Begin) > $null
$buffer = New-Object -TypeName byte[] -ArgumentList $buffer_size
$bytes_read = $fs.Read($buffer, 0, $buffer_size)
if ($bytes_read -gt 0) {
$bytes = $buffer[0..($bytes_read - 1)]
Write-Output -InputObject ([System.Convert]::ToBase64String($bytes))
}'''
# need to run the setup script outside of the local scope so the
# file stream stays active between fetch operations
rc, stdout, stderr = self._exec_psrp_script(setup_script,
use_local_scope=False)
if rc != 0:
raise AnsibleError("failed to setup file stream for fetch '%s': %s"
% (out_path, to_native(stderr)))
elif stdout.strip() == '[DIR]':
# to be consistent with other connection plugins, we assume the caller has created the target dir
return
b_out_path = to_bytes(out_path, errors='surrogate_or_strict')
# to be consistent with other connection plugins, we assume the caller has created the target dir
offset = 0
with open(b_out_path, 'wb') as out_file:
while True:
display.vvvvv("PSRP FETCH %s to %s (offset=%d" %
(in_path, out_path, offset), host=self._psrp_host)
rc, stdout, stderr = self._exec_psrp_script(read_script % offset)
if rc != 0:
raise AnsibleError("failed to transfer file to '%s': %s"
% (out_path, to_native(stderr)))
data = base64.b64decode(stdout.strip())
out_file.write(data)
if len(data) < buffer_size:
break
offset += len(data)
rc, stdout, stderr = self._exec_psrp_script("$fs.Close()")
if rc != 0:
display.warning("failed to close remote file stream of file "
"'%s': %s" % (in_path, to_native(stderr)))
def close(self):
if self.runspace and self.runspace.state == RunspacePoolState.OPENED:
display.vvvvv("PSRP CLOSE RUNSPACE: %s" % (self.runspace.id),
host=self._psrp_host)
self.runspace.close()
self.runspace = None
self._connected = False
self._last_pipeline = None
def _build_kwargs(self):
self._psrp_host = self.get_option('remote_addr')
self._psrp_user = self.get_option('remote_user')
self._psrp_pass = self.get_option('remote_password')
protocol = self.get_option('protocol')
port = self.get_option('port')
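# Derive the protocol/port pair when either is unset: https pairs with
# 5986 and http with 5985, matching the option documentation above.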
if protocol is None and port is None:
protocol = 'https'
port = 5986
elif protocol is None:
protocol = 'https' if int(port) != 5985 else 'http'
elif port is None:
port = 5986 if protocol == 'https' else 5985
self._psrp_protocol = protocol
self._psrp_port = int(port)
self._psrp_path = self.get_option('path')
self._psrp_auth = self.get_option('auth')
# cert validation can either be a bool or a path to the cert
cert_validation = self.get_option('cert_validation')
cert_trust_path = self.get_option('ca_cert')
if cert_validation == 'ignore':
self._psrp_cert_validation = False
elif cert_trust_path is not None:
self._psrp_cert_validation = cert_trust_path
else:
self._psrp_cert_validation = True
self._psrp_connection_timeout = self.get_option('connection_timeout') # Can be None
self._psrp_read_timeout = self.get_option('read_timeout') # Can be None
self._psrp_message_encryption = self.get_option('message_encryption')
self._psrp_proxy = self.get_option('proxy')
self._psrp_ignore_proxy = boolean(self.get_option('ignore_proxy'))
self._psrp_operation_timeout = int(self.get_option('operation_timeout'))
self._psrp_max_envelope_size = int(self.get_option('max_envelope_size'))
self._psrp_configuration_name = self.get_option('configuration_name')
self._psrp_reconnection_retries = int(self.get_option('reconnection_retries'))
self._psrp_reconnection_backoff = float(self.get_option('reconnection_backoff'))
self._psrp_certificate_key_pem = self.get_option('certificate_key_pem')
self._psrp_certificate_pem = self.get_option('certificate_pem')
self._psrp_credssp_auth_mechanism = self.get_option('credssp_auth_mechanism')
self._psrp_credssp_disable_tlsv1_2 = self.get_option('credssp_disable_tlsv1_2')
self._psrp_credssp_minimum_version = self.get_option('credssp_minimum_version')
self._psrp_negotiate_send_cbt = self.get_option('negotiate_send_cbt')
self._psrp_negotiate_delegate = self.get_option('negotiate_delegate')
self._psrp_negotiate_hostname_override = self.get_option('negotiate_hostname_override')
self._psrp_negotiate_service = self.get_option('negotiate_service')
supported_args = []
for auth_kwarg in AUTH_KWARGS.values():
supported_args.extend(auth_kwarg)
extra_args = {v.replace('ansible_psrp_', '') for v in self.get_option('_extras')}
unsupported_args = extra_args.difference(supported_args)
for arg in unsupported_args:
display.warning("ansible_psrp_%s is unsupported by the current "
"psrp version installed" % arg)
self._psrp_conn_kwargs = dict(
server=self._psrp_host, port=self._psrp_port,
username=self._psrp_user, password=self._psrp_pass,
ssl=self._psrp_protocol == 'https', path=self._psrp_path,
auth=self._psrp_auth, cert_validation=self._psrp_cert_validation,
connection_timeout=self._psrp_connection_timeout,
encryption=self._psrp_message_encryption, proxy=self._psrp_proxy,
no_proxy=self._psrp_ignore_proxy,
max_envelope_size=self._psrp_max_envelope_size,
operation_timeout=self._psrp_operation_timeout,
certificate_key_pem=self._psrp_certificate_key_pem,
certificate_pem=self._psrp_certificate_pem,
credssp_auth_mechanism=self._psrp_credssp_auth_mechanism,
credssp_disable_tlsv1_2=self._psrp_credssp_disable_tlsv1_2,
credssp_minimum_version=self._psrp_credssp_minimum_version,
negotiate_send_cbt=self._psrp_negotiate_send_cbt,
negotiate_delegate=self._psrp_negotiate_delegate,
negotiate_hostname_override=self._psrp_negotiate_hostname_override,
negotiate_service=self._psrp_negotiate_service,
)
# Check if PSRP version supports newer read_timeout argument (needs pypsrp 0.3.0+)
if hasattr(pypsrp, 'FEATURES') and 'wsman_read_timeout' in pypsrp.FEATURES:
self._psrp_conn_kwargs['read_timeout'] = self._psrp_read_timeout
elif self._psrp_read_timeout is not None:
display.warning("ansible_psrp_read_timeout is unsupported by the current psrp version installed, "
"using ansible_psrp_connection_timeout value for read_timeout instead.")
# Check if PSRP version supports newer reconnection_retries argument (needs pypsrp 0.3.0+)
if hasattr(pypsrp, 'FEATURES') and 'wsman_reconnections' in pypsrp.FEATURES:
self._psrp_conn_kwargs['reconnection_retries'] = self._psrp_reconnection_retries
self._psrp_conn_kwargs['reconnection_backoff'] = self._psrp_reconnection_backoff
else:
if self._psrp_reconnection_retries is not None:
display.warning("ansible_psrp_reconnection_retries is unsupported by the current psrp version installed.")
if self._psrp_reconnection_backoff is not None:
display.warning("ansible_psrp_reconnection_backoff is unsupported by the current psrp version installed.")
# add in the extra args that were set
for arg in extra_args.intersection(supported_args):
option = self.get_option('_extras')['ansible_psrp_%s' % arg]
self._psrp_conn_kwargs[arg] = option
def _exec_psrp_script(self, script, input_data=None, use_local_scope=True, arguments=None):
# Check if there's a command on the current pipeline that still needs to be closed.
if self._last_pipeline:
# Current pypsrp versions raise an exception if the current state was not RUNNING. We manually set it so we
# can call stop without any issues.
self._last_pipeline.state = PSInvocationState.RUNNING
self._last_pipeline.stop()
self._last_pipeline = None
ps = PowerShell(self.runspace)
ps.add_script(script, use_local_scope=use_local_scope)
if arguments:
for arg in arguments:
ps.add_argument(arg)
ps.invoke(input=input_data)
rc, stdout, stderr = self._parse_pipeline_result(ps)
# We should really call .stop() on all pipelines that are run to decrement the concurrent command counter on
# PSSession but that involves another round trip and is done when the runspace is closed. We instead store the
# last pipeline which is closed if another command is run on the runspace.
self._last_pipeline = ps
return rc, stdout, stderr
def _parse_pipeline_result(self, pipeline):
"""
PSRP doesn't have the same concept as other protocols with its output.
We need some extra logic to convert the pipeline streams and host
output into the format that Ansible understands.
:param pipeline: The finished PowerShell pipeline that invoked our
commands
:return: rc, stdout, stderr based on the pipeline output
"""
# we try to get the rc from our host implementation; this is set if
# exit or $host.SetShouldExit() is called in our pipeline. If not, we
# set it to 0 if the pipeline had no errors and 1 if it did
rc = self.host.rc or (1 if pipeline.had_errors else 0)
# TODO: figure out a better way of merging this with the host output
stdout_list = []
for output in pipeline.output:
# Not all pipeline outputs are a string or contain a __str__ value,
# we will create our own output based on the properties of the
# complex object if that is the case.
if isinstance(output, GenericComplexObject) and output.to_string is None:
obj_lines = output.property_sets
for key, value in output.adapted_properties.items():
obj_lines.append(u"%s: %s" % (key, value))
for key, value in output.extended_properties.items():
obj_lines.append(u"%s: %s" % (key, value))
output_msg = u"\n".join(obj_lines)
else:
output_msg = to_text(output, nonstring='simplerepr')
stdout_list.append(output_msg)
if len(self.host.ui.stdout) > 0:
stdout_list += self.host.ui.stdout
stdout = u"\r\n".join(stdout_list)
stderr_list = []
for error in pipeline.streams.error:
# the error record is not as fully fleshed out as what we usually get
# in PS, so we manually create it here
command_name = "%s : " % error.command_name if error.command_name else ''
position = "%s\r\n" % error.invocation_position_message if error.invocation_position_message else ''
error_msg = "%s%s\r\n%s" \
" + CategoryInfo : %s\r\n" \
" + FullyQualifiedErrorId : %s" \
% (command_name, str(error), position,
error.message, error.fq_error)
stacktrace = error.script_stacktrace
if self._play_context.verbosity >= 3 and stacktrace is not None:
error_msg += "\r\nStackTrace:\r\n%s" % stacktrace
stderr_list.append(error_msg)
if len(self.host.ui.stderr) > 0:
stderr_list += self.host.ui.stderr
stderr = u"\r\n".join([to_text(o) for o in stderr_list])
display.vvvvv("PSRP RC: %d" % rc, host=self._psrp_host)
display.vvvvv("PSRP STDOUT: %s" % stdout, host=self._psrp_host)
display.vvvvv("PSRP STDERR: %s" % stderr, host=self._psrp_host)
# reset the host output back to defaults, needed if running
# multiple pipelines on the same RunspacePool
self.host.rc = 0
self.host.ui.stdout = []
self.host.ui.stderr = []
return rc, to_bytes(stdout, encoding='utf-8'), to_bytes(stderr, encoding='utf-8')
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,827 |
psrp contains deprecated call to be removed in 2.13
|
##### SUMMARY
psrp contains a call to Display.deprecated or AnsibleModule.deprecate and is scheduled for removal
```
lib/ansible/plugins/connection/psrp.py:481:12: ansible-deprecated-version: Deprecated version ('2.13') found in call to Display.deprecated or AnsibleModule.deprecate
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/plugins/connection/psrp.py
```
##### ANSIBLE VERSION
```
2.13
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
|
https://github.com/ansible/ansible/issues/75827
|
https://github.com/ansible/ansible/pull/76252
|
a734da5e821c53075829f1d6b82c2e022db9d606
|
fb55d878db0a114c49bd3b0384f4f5c9593ea536
| 2021-09-29T18:15:41Z |
python
| 2021-11-12T00:57:28Z |
test/sanity/ignore.txt
|
.azure-pipelines/scripts/publish-codecov.py replace-urlopen
docs/docsite/rst/dev_guide/testing/sanity/no-smart-quotes.rst no-smart-quotes
docs/docsite/rst/locales/ja/LC_MESSAGES/dev_guide.po no-smart-quotes # Translation of the no-smart-quotes rule
examples/scripts/ConfigureRemotingForAnsible.ps1 pslint:PSCustomUseLiteralPath
examples/scripts/upgrade_to_ps3.ps1 pslint:PSCustomUseLiteralPath
examples/scripts/upgrade_to_ps3.ps1 pslint:PSUseApprovedVerbs
lib/ansible/cli/scripts/ansible_connection_cli_stub.py shebang
lib/ansible/config/base.yml no-unwanted-files
lib/ansible/executor/playbook_executor.py pylint:disallowed-name
lib/ansible/executor/powershell/async_watchdog.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/powershell/async_wrapper.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/powershell/exec_wrapper.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/task_queue_manager.py pylint:disallowed-name
lib/ansible/keyword_desc.yml no-unwanted-files
lib/ansible/modules/apt_key.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/apt.py validate-modules:parameter-invalid
lib/ansible/modules/apt_repository.py validate-modules:parameter-invalid
lib/ansible/modules/assemble.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/async_status.py use-argspec-type-path
lib/ansible/modules/async_status.py validate-modules!skip
lib/ansible/modules/async_wrapper.py ansible-doc!skip # not an actual module
lib/ansible/modules/async_wrapper.py pylint:ansible-bad-function # ignore, required
lib/ansible/modules/async_wrapper.py use-argspec-type-path
lib/ansible/modules/blockinfile.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/blockinfile.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/command.py validate-modules:doc-default-does-not-match-spec # _uses_shell is undocumented
lib/ansible/modules/command.py validate-modules:doc-missing-type
lib/ansible/modules/command.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/command.py validate-modules:undocumented-parameter
lib/ansible/modules/copy.py pylint:disallowed-name
lib/ansible/modules/copy.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/copy.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/copy.py validate-modules:undocumented-parameter
lib/ansible/modules/dnf.py validate-modules:doc-required-mismatch
lib/ansible/modules/dnf.py validate-modules:parameter-invalid
lib/ansible/modules/file.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/file.py validate-modules:undocumented-parameter
lib/ansible/modules/find.py use-argspec-type-path # fix needed
lib/ansible/modules/git.py pylint:disallowed-name
lib/ansible/modules/git.py use-argspec-type-path
lib/ansible/modules/git.py validate-modules:doc-missing-type
lib/ansible/modules/git.py validate-modules:doc-required-mismatch
lib/ansible/modules/hostname.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/iptables.py pylint:disallowed-name
lib/ansible/modules/lineinfile.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/lineinfile.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/lineinfile.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/package_facts.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/pip.py pylint:disallowed-name
lib/ansible/modules/pip.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/replace.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/service.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/service.py validate-modules:use-run-command-not-popen
lib/ansible/modules/stat.py validate-modules:doc-default-does-not-match-spec # get_md5 is undocumented
lib/ansible/modules/stat.py validate-modules:parameter-invalid
lib/ansible/modules/stat.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/stat.py validate-modules:undocumented-parameter
lib/ansible/modules/systemd.py validate-modules:parameter-invalid
lib/ansible/modules/systemd.py validate-modules:return-syntax-error
lib/ansible/modules/sysvinit.py validate-modules:return-syntax-error
lib/ansible/modules/unarchive.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/uri.py pylint:disallowed-name
lib/ansible/modules/uri.py validate-modules:doc-required-mismatch
lib/ansible/modules/user.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/user.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/user.py validate-modules:use-run-command-not-popen
lib/ansible/modules/yum.py pylint:disallowed-name
lib/ansible/modules/yum.py validate-modules:parameter-invalid
lib/ansible/modules/yum_repository.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/yum_repository.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/yum_repository.py validate-modules:undocumented-parameter
lib/ansible/module_utils/compat/_selectors2.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/compat/_selectors2.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/compat/_selectors2.py pylint:disallowed-name
lib/ansible/module_utils/compat/selinux.py import-2.7!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-3.5!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-3.6!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-3.7!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-3.8!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-3.9!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/distro/_distro.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/distro/_distro.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/distro/_distro.py no-assert
lib/ansible/module_utils/distro/_distro.py pep8!skip # bundled code we don't want to modify
lib/ansible/module_utils/distro/__init__.py empty-init # breaks namespacing, bundled, do not override
lib/ansible/module_utils/facts/__init__.py empty-init # breaks namespacing, deprecate and eventually remove
lib/ansible/module_utils/facts/network/linux.py pylint:disallowed-name
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.ArgvParser.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.CommandUtil.psm1 pslint:PSProvideCommentHelp # need to agree on best format for comment location
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.CommandUtil.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.FileUtil.psm1 pslint:PSCustomUseLiteralPath
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.FileUtil.psm1 pslint:PSProvideCommentHelp
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1 pslint:PSCustomUseLiteralPath
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.LinkUtil.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/pycompat24.py no-get-exception
lib/ansible/module_utils/six/__init__.py empty-init # breaks namespacing, bundled, do not override
lib/ansible/module_utils/six/__init__.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/six/__init__.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/six/__init__.py no-basestring
lib/ansible/module_utils/six/__init__.py no-dict-iteritems
lib/ansible/module_utils/six/__init__.py no-dict-iterkeys
lib/ansible/module_utils/six/__init__.py no-dict-itervalues
lib/ansible/module_utils/six/__init__.py pylint:self-assigning-variable
lib/ansible/module_utils/six/__init__.py replace-urlopen
lib/ansible/module_utils/urls.py pylint:disallowed-name
lib/ansible/module_utils/urls.py replace-urlopen
lib/ansible/parsing/vault/__init__.py pylint:disallowed-name
lib/ansible/parsing/yaml/objects.py pylint:arguments-renamed
lib/ansible/playbook/base.py pylint:disallowed-name
lib/ansible/playbook/collectionsearch.py required-and-default-attributes # https://github.com/ansible/ansible/issues/61460
lib/ansible/playbook/helpers.py pylint:disallowed-name
lib/ansible/plugins/action/normal.py action-plugin-docs # default action plugin for modules without a dedicated action plugin
lib/ansible/plugins/cache/base.py ansible-doc!skip # not a plugin, but a stub for backwards compatibility
lib/ansible/plugins/callback/__init__.py pylint:arguments-renamed
lib/ansible/plugins/connection/psrp.py pylint:ansible-deprecated-version
lib/ansible/plugins/inventory/advanced_host_list.py pylint:arguments-renamed
lib/ansible/plugins/inventory/host_list.py pylint:arguments-renamed
lib/ansible/plugins/lookup/random_choice.py pylint:arguments-renamed
lib/ansible/plugins/lookup/sequence.py pylint:disallowed-name
lib/ansible/plugins/shell/cmd.py pylint:arguments-renamed
lib/ansible/plugins/strategy/__init__.py pylint:disallowed-name
lib/ansible/plugins/strategy/linear.py pylint:disallowed-name
lib/ansible/utils/collection_loader/_collection_finder.py pylint:deprecated-class
lib/ansible/utils/collection_loader/_collection_meta.py pylint:deprecated-class
lib/ansible/vars/hostvars.py pylint:disallowed-name
test/integration/targets/ansible-test/ansible_collections/ns/col/plugins/modules/hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/integration/targets/hello/files/bad.py pylint:ansible-bad-function # ignore, required for testing
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/integration/targets/hello/files/bad.py pylint:ansible-bad-import-from # ignore, required for testing
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/integration/targets/hello/files/bad.py pylint:ansible-bad-import # ignore, required for testing
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/unit/plugins/modules/test_hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/unit/plugins/module_utils/test_my_util.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test-docker/ansible_collections/ns/col/plugins/modules/hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test-docker/ansible_collections/ns/col/tests/unit/plugins/modules/test_hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test-docker/ansible_collections/ns/col/tests/unit/plugins/module_utils/test_my_util.py pylint:relative-beyond-top-level
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/modules/my_module.py pylint:relative-beyond-top-level
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/module_utils/my_util2.py pylint:relative-beyond-top-level
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/module_utils/my_util3.py pylint:relative-beyond-top-level
test/integration/targets/gathering_facts/library/bogus_facts shebang
test/integration/targets/gathering_facts/library/facts_one shebang
test/integration/targets/gathering_facts/library/facts_two shebang
test/integration/targets/incidental_win_reboot/templates/post_reboot.ps1 pslint!skip
test/integration/targets/json_cleanup/library/bad_json shebang
test/integration/targets/lookup_csvfile/files/crlf.csv line-endings
test/integration/targets/lookup_ini/lookup-8859-15.ini no-smart-quotes
test/integration/targets/module_precedence/lib_with_extension/a.ini shebang
test/integration/targets/module_precedence/lib_with_extension/ping.ini shebang
test/integration/targets/module_precedence/roles_with_extension/foo/library/a.ini shebang
test/integration/targets/module_precedence/roles_with_extension/foo/library/ping.ini shebang
test/integration/targets/module_utils/library/test.py future-import-boilerplate # allow testing of Python 2.x implicit relative imports
test/integration/targets/module_utils/module_utils/bar0/foo.py pylint:disallowed-name
test/integration/targets/module_utils/module_utils/foo.py pylint:disallowed-name
test/integration/targets/module_utils/module_utils/sub/bar/bar.py pylint:disallowed-name
test/integration/targets/module_utils/module_utils/sub/bar/__init__.py pylint:disallowed-name
test/integration/targets/module_utils/module_utils/yak/zebra/foo.py pylint:disallowed-name
test/integration/targets/old_style_modules_posix/library/helloworld.sh shebang
test/integration/targets/template/files/encoding_1252_utf-8.expected no-smart-quotes
test/integration/targets/template/files/encoding_1252_windows-1252.expected no-smart-quotes
test/integration/targets/template/files/foo.dos.txt line-endings
test/integration/targets/template/templates/encoding_1252.j2 no-smart-quotes
test/integration/targets/unicode/unicode.yml no-smart-quotes
test/integration/targets/windows-minimal/library/win_ping_syntax_error.ps1 pslint!skip
test/integration/targets/win_exec_wrapper/library/test_fail.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_exec_wrapper/tasks/main.yml no-smart-quotes # We are explicitly testing smart quote support for env vars
test/integration/targets/win_fetch/tasks/main.yml no-smart-quotes # We are explicitly testing smart quotes in the file name to fetch
test/integration/targets/win_module_utils/library/legacy_only_new_way_win_line_ending.ps1 line-endings # Explicitly tests that we still work with Windows line endings
test/integration/targets/win_module_utils/library/legacy_only_old_way_win_line_ending.ps1 line-endings # Explicitly tests that we still work with Windows line endings
test/integration/targets/win_script/files/test_script.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/win_script/files/test_script_removes_file.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_script/files/test_script_with_args.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/win_script/files/test_script_with_splatting.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/lib/ansible_test/_data/requirements/sanity.pslint.ps1 pslint:PSCustomUseLiteralPath # Uses wildcards on purpose
test/lib/ansible_test/_util/target/setup/ConfigureRemotingForAnsible.ps1 pslint:PSCustomUseLiteralPath
test/support/integration/plugins/inventory/aws_ec2.py pylint:use-a-generator
test/support/integration/plugins/modules/ec2_group.py pylint:use-a-generator
test/support/integration/plugins/modules/timezone.py pylint:disallowed-name
test/support/integration/plugins/module_utils/aws/core.py pylint:property-with-parameters
test/support/integration/plugins/module_utils/cloud.py future-import-boilerplate
test/support/integration/plugins/module_utils/cloud.py metaclass-boilerplate
test/support/integration/plugins/module_utils/cloud.py pylint:isinstance-second-argument-not-valid-type
test/support/integration/plugins/module_utils/compat/ipaddress.py future-import-boilerplate
test/support/integration/plugins/module_utils/compat/ipaddress.py metaclass-boilerplate
test/support/integration/plugins/module_utils/compat/ipaddress.py no-unicode-literals
test/support/integration/plugins/module_utils/network/common/utils.py future-import-boilerplate
test/support/integration/plugins/module_utils/network/common/utils.py metaclass-boilerplate
test/support/integration/plugins/module_utils/network/common/utils.py pylint:use-a-generator
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/filter/network.py pylint:consider-using-dict-comprehension
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/compat/ipaddress.py no-unicode-literals
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/compat/ipaddress.py pep8:E203
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/facts/facts.py pylint:unnecessary-comprehension
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/utils.py pylint:use-a-generator
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/netconf/default.py pylint:unnecessary-comprehension
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/cliconf/ios.py pylint:arguments-renamed
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/modules/ios_config.py pep8:E501
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/cliconf/vyos.py pylint:arguments-renamed
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_command.py pep8:E231
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_command.py pylint:disallowed-name
test/support/windows-integration/collections/ansible_collections/ansible/windows/plugins/module_utils/WebRequest.psm1 pslint!skip
test/support/windows-integration/collections/ansible_collections/ansible/windows/plugins/modules/win_uri.ps1 pslint!skip
test/support/windows-integration/plugins/modules/async_status.ps1 pslint!skip
test/support/windows-integration/plugins/modules/setup.ps1 pslint!skip
test/support/windows-integration/plugins/modules/slurp.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_acl.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_certificate_store.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_command.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_copy.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_file.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_get_url.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_lineinfile.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_regedit.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_shell.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_stat.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_tempfile.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_user_right.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_user.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_wait_for.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_whoami.ps1 pslint!skip
test/units/executor/test_play_iterator.py pylint:disallowed-name
test/units/modules/test_apt.py pylint:disallowed-name
test/units/module_utils/basic/test_deprecate_warn.py pylint:ansible-deprecated-no-version
test/units/module_utils/basic/test_deprecate_warn.py pylint:ansible-deprecated-version
test/units/module_utils/basic/test_run_command.py pylint:disallowed-name
test/units/module_utils/urls/fixtures/multipart.txt line-endings # Fixture for HTTP tests that use CRLF
test/units/module_utils/urls/test_fetch_url.py replace-urlopen
test/units/module_utils/urls/test_Request.py replace-urlopen
test/units/parsing/vault/test_vault.py pylint:disallowed-name
test/units/playbook/role/test_role.py pylint:disallowed-name
test/units/plugins/test_plugins.py pylint:disallowed-name
test/units/template/test_templar.py pylint:disallowed-name
test/units/utils/collection_loader/fixtures/collections/ansible_collections/testns/testcoll/plugins/action/my_action.py pylint:relative-beyond-top-level
test/units/utils/collection_loader/fixtures/collections/ansible_collections/testns/testcoll/plugins/modules/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/ansible/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/testns/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/testns/testcoll/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/test_collection_loader.py pylint:undefined-variable # magic runtime local var splatting
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,027 |
ansible.builtin.hostname, RedHat strategy assumes Python 2
|
### Summary
The [`RedHatStrategy` class in `lib/ansible/modules/hostname.py`](https://github.com/ansible/ansible/blob/c5d8dc0e119c8a1632afae8b55fb15e2a4a44177/lib/ansible/modules/hostname.py#L283-L318) does not follow the [Unicode Sandwich principles](https://docs.ansible.com/ansible/latest/dev_guide/developing_python_3.html#unicode-sandwich-common-borders-places-to-convert-bytes-to-text-in-controller-code) and so runs into an exception when the module is executed with Python 3:
```python
Traceback (most recent call last):
File "/tmp/ansible_hostname_payload_zjjaaj3t/ansible_hostname_payload.zip/ansible/modules/hostname.py", line 315, in get_permanent_hostname
TypeError: startswith first arg must be bytes or a tuple of bytes, not str
```
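For reference, a minimal sketch of the Unicode Sandwich approach — read bytes, decode them to text at the boundary, and only then compare against `str` literals. The helper name below is hypothetical; `to_text` is the real conversion helper from `ansible.module_utils._text`:
```python
from ansible.module_utils._text import to_text

def read_permanent_hostname(network_file='/etc/sysconfig/network'):
    # Read raw bytes, then decode at the boundary so str.startswith()
    # compares text with text on both Python 2 and Python 3.
    with open(network_file, 'rb') as f:
        for raw_line in f:
            line = to_text(raw_line, errors='surrogate_or_strict').strip()
            if line.startswith('HOSTNAME'):
                dummy, sep, value = line.partition('=')
                return value.strip()
    return None
```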
### Issue Type
Bug Report
### Component Name
ansible.builtin.hostname
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.5]
config file = None
configured module search path = ['/Users/mj/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/4.6.0/libexec/lib/python3.9/site-packages/ansible
ansible collection location = /Users/mj/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.7 (default, Sep 3 2021, 12:37:55) [Clang 12.0.5 (clang-1205.0.22.9)]
jinja version = 3.0.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
CentOS 8 Stream
### Steps to Reproduce
The following commands start a CentOS 8 docker container in the background, then attempt to set the hostname on that container; the last command removes the container again:
```sh
docker run -d --rm --cap-add SYS_ADMIN --name mcve-hostname centos:8 /bin/sh -c 'while true; do sleep 5; done'
ansible -i mcve-hostname, -c docker -m ansible.builtin.hostname -a 'name=mcve-hostname use=redhat' all -vvvv
docker stop mcve-hostname
```
Note: the `--cap-add SYS_ADMIN` option for `docker run` is used to [allow the `hostname` command to be used](https://github.com/moby/moby/issues/8902#issuecomment-218911749).
### Expected Results
I expected the hostname command to run without issue.
### Actual Results
```console
ansible [core 2.11.5]
config file = None
configured module search path = ['/Users/mj/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/4.6.0/libexec/lib/python3.9/site-packages/ansible
ansible collection location = /Users/mj/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.7 (default, Sep 3 2021, 12:37:55) [Clang 12.0.5 (clang-1205.0.22.9)]
jinja version = 3.0.1
libyaml = True
No config file found; using defaults
setting up inventory plugins
Parsed mcve-hostname, inventory source with host_list plugin
Loading callback plugin minimal of type stdout, v2.0 from /usr/local/Cellar/ansible/4.6.0/libexec/lib/python3.9/site-packages/ansible/plugins/callback/minimal.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
META: ran handlers
redirecting (type: connection) ansible.builtin.docker to community.docker.docker
Loading collection community.docker from /Users/mj/.ansible/collections/ansible_collections/community/docker
<mcve-hostname> ESTABLISH DOCKER CONNECTION FOR USER: root
<mcve-hostname> EXEC ['/usr/local/bin/docker', b'exec', b'-i', 'mcve-hostname', '/bin/sh', '-c', "/bin/sh -c 'echo ~ && sleep 0'"]
<mcve-hostname> EXEC ['/usr/local/bin/docker', b'exec', b'-i', 'mcve-hostname', '/bin/sh', '-c', '/bin/sh -c \'( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir "` echo /root/.ansible/tmp/ansible-tmp-1634119089.8989718-55435-22274985961259 `" && echo ansible-tmp-1634119089.8989718-55435-22274985961259="` echo /root/.ansible/tmp/ansible-tmp-1634119089.8989718-55435-22274985961259 `" ) && sleep 0\'']
<mcve-hostname> Attempting python interpreter discovery
<mcve-hostname> EXEC ['/usr/local/bin/docker', b'exec', b'-i', 'mcve-hostname', '/bin/sh', '-c', '/bin/sh -c \'echo PLATFORM; uname; echo FOUND; command -v \'"\'"\'/usr/bin/python\'"\'"\'; command -v \'"\'"\'python3.9\'"\'"\'; command -v \'"\'"\'python3.8\'"\'"\'; command -v \'"\'"\'python3.7\'"\'"\'; command -v \'"\'"\'python3.6\'"\'"\'; command -v \'"\'"\'python3.5\'"\'"\'; command -v \'"\'"\'python2.7\'"\'"\'; command -v \'"\'"\'python2.6\'"\'"\'; command -v \'"\'"\'/usr/libexec/platform-python\'"\'"\'; command -v \'"\'"\'/usr/bin/python3\'"\'"\'; command -v \'"\'"\'python\'"\'"\'; echo ENDFOUND && sleep 0\'']
<mcve-hostname> EXEC ['/usr/local/bin/docker', b'exec', b'-i', 'mcve-hostname', '/bin/sh', '-c', "/bin/sh -c '/usr/libexec/platform-python && sleep 0'"]
Using module file /usr/local/Cellar/ansible/4.6.0/libexec/lib/python3.9/site-packages/ansible/modules/hostname.py
<mcve-hostname> PUT /Users/mj/.ansible/tmp/ansible-local-55433gdx_vwgr/tmp6ggw39on TO /root/.ansible/tmp/ansible-tmp-1634119089.8989718-55435-22274985961259/AnsiballZ_hostname.py
<mcve-hostname> EXEC ['/usr/local/bin/docker', b'exec', b'-i', 'mcve-hostname', '/bin/sh', '-c', "/bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1634119089.8989718-55435-22274985961259/ /root/.ansible/tmp/ansible-tmp-1634119089.8989718-55435-22274985961259/AnsiballZ_hostname.py && sleep 0'"]
<mcve-hostname> EXEC ['/usr/local/bin/docker', b'exec', b'-i', 'mcve-hostname', '/bin/sh', '-c', "/bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1634119089.8989718-55435-22274985961259/AnsiballZ_hostname.py && sleep 0'"]
<mcve-hostname> EXEC ['/usr/local/bin/docker', b'exec', b'-i', 'mcve-hostname', '/bin/sh', '-c', "/bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1634119089.8989718-55435-22274985961259/ > /dev/null 2>&1 && sleep 0'"]
The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_ansible.builtin.hostname_payload_wgq1w34m/ansible_ansible.builtin.hostname_payload.zip/ansible/modules/hostname.py", line 315, in get_permanent_hostname
TypeError: startswith first arg must be bytes or a tuple of bytes, not str
mcve-hostname | FAILED! => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/libexec/platform-python"
},
"changed": false,
"invocation": {
"module_args": {
"name": "mcve-hostname",
"use": "redhat"
}
},
"msg": "failed to read hostname: startswith first arg must be bytes or a tuple of bytes, not str"
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76027
|
https://github.com/ansible/ansible/pull/76032
|
abac141122d3c3beac29dae7f04551fb05ed9c74
|
08af0fbf95220f94e9bdcfdb3bb27f1b4d58f930
| 2021-10-13T09:28:03Z |
python
| 2021-11-15T06:15:20Z |
lib/ansible/modules/hostname.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2013, Hiroaki Nakamura <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: hostname
author:
- Adrian Likins (@alikins)
- Hideki Saito (@saito-hideki)
version_added: "1.4"
short_description: Manage hostname
requirements: [ hostname ]
description:
- Set the system's hostname. Supports most OSs/Distributions, including those using C(systemd).
- Windows, HP-UX, and AIX are not currently supported.
notes:
- This module does B(NOT) modify C(/etc/hosts). You need to modify it yourself using other modules such as M(ansible.builtin.template)
or M(ansible.builtin.replace).
- On macOS, this module uses C(scutil) to set C(HostName), C(ComputerName), and C(LocalHostName). Since C(LocalHostName)
cannot contain spaces or most special characters, this module will replace characters when setting C(LocalHostName).
options:
name:
description:
- Name of the host.
- If the value is a fully qualified domain name that does not resolve from the given host,
this will cause the module to hang for a few seconds while waiting for the name resolution attempt to timeout.
type: str
required: true
use:
description:
- Which strategy to use to update the hostname.
- If not set we try to autodetect, but this can be problematic, particularly with containers as they can present misleading information.
- Note that 'systemd' should be specified for RHEL/EL/CentOS 7+. Older distributions should use 'redhat'.
choices: ['alpine', 'debian', 'freebsd', 'generic', 'macos', 'macosx', 'darwin', 'openbsd', 'openrc', 'redhat', 'sles', 'solaris', 'systemd']
type: str
version_added: '2.9'
extends_documentation_fragment:
- action_common_attributes
- action_common_attributes.facts
attributes:
check_mode:
support: full
diff_mode:
support: full
facts:
support: full
platform:
platforms: posix
'''
EXAMPLES = '''
- name: Set a hostname
ansible.builtin.hostname:
name: web01
- name: Set a hostname specifying strategy
ansible.builtin.hostname:
name: web01
use: systemd
'''
import os
import platform
import socket
import traceback
from ansible.module_utils.basic import (
AnsibleModule,
get_distribution,
get_distribution_version,
)
from ansible.module_utils.common.sys_info import get_platform_subclass
from ansible.module_utils.facts.system.service_mgr import ServiceMgrFactCollector
from ansible.module_utils.facts.utils import get_file_lines
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.six import PY3, text_type
STRATS = {
'alpine': 'Alpine',
'debian': 'Debian',
'freebsd': 'FreeBSD',
'generic': 'Generic',
'macos': 'Darwin',
'macosx': 'Darwin',
'darwin': 'Darwin',
'openbsd': 'OpenBSD',
'openrc': 'OpenRC',
'redhat': 'RedHat',
'sles': 'SLES',
'solaris': 'Solaris',
'systemd': 'Systemd',
}
class UnimplementedStrategy(object):
def __init__(self, module):
self.module = module
def update_current_and_permanent_hostname(self):
self.unimplemented_error()
def update_current_hostname(self):
self.unimplemented_error()
def update_permanent_hostname(self):
self.unimplemented_error()
def get_current_hostname(self):
self.unimplemented_error()
def set_current_hostname(self, name):
self.unimplemented_error()
def get_permanent_hostname(self):
self.unimplemented_error()
def set_permanent_hostname(self, name):
self.unimplemented_error()
def unimplemented_error(self):
system = platform.system()
distribution = get_distribution()
if distribution is not None:
msg_platform = '%s (%s)' % (system, distribution)
else:
msg_platform = system
self.module.fail_json(
msg='hostname module cannot be used on platform %s' % msg_platform)
class Hostname(object):
"""
This is a generic Hostname manipulation class that is subclassed
based on platform.
A subclass may wish to assign a different strategy instance to self.strategy.
All subclasses MUST define platform and distribution (which may be None).
"""
platform = 'Generic'
distribution = None
strategy_class = UnimplementedStrategy
def __new__(cls, *args, **kwargs):
new_cls = get_platform_subclass(Hostname)
return super(cls, new_cls).__new__(new_cls)
def __init__(self, module):
self.module = module
self.name = module.params['name']
self.use = module.params['use']
if self.use is not None:
strat = globals()['%sStrategy' % STRATS[self.use]]
self.strategy = strat(module)
elif self.platform == 'Linux' and ServiceMgrFactCollector.is_systemd_managed(module):
self.strategy = SystemdStrategy(module)
else:
self.strategy = self.strategy_class(module)
def update_current_and_permanent_hostname(self):
return self.strategy.update_current_and_permanent_hostname()
def get_current_hostname(self):
return self.strategy.get_current_hostname()
def set_current_hostname(self, name):
self.strategy.set_current_hostname(name)
def get_permanent_hostname(self):
return self.strategy.get_permanent_hostname()
def set_permanent_hostname(self, name):
self.strategy.set_permanent_hostname(name)
class BaseStrategy(object):
def __init__(self, module):
self.module = module
self.changed = False
def update_current_and_permanent_hostname(self):
self.update_current_hostname()
self.update_permanent_hostname()
return self.changed
def update_current_hostname(self):
name = self.module.params['name']
current_name = self.get_current_hostname()
if current_name != name:
if not self.module.check_mode:
self.set_current_hostname(name)
self.changed = True
def update_permanent_hostname(self):
name = self.module.params['name']
permanent_name = self.get_permanent_hostname()
if permanent_name != name:
if not self.module.check_mode:
self.set_permanent_hostname(name)
self.changed = True
def get_current_hostname(self):
return self.get_permanent_hostname()
def set_current_hostname(self, name):
pass
def get_permanent_hostname(self):
raise NotImplementedError
def set_permanent_hostname(self, name):
raise NotImplementedError
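# Concrete strategies override get_permanent_hostname()/set_permanent_hostname();
# the update_*() wrappers above add check_mode handling and change tracking.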
class CommandStrategy(BaseStrategy):
COMMAND = 'hostname'
def __init__(self, module):
super(CommandStrategy, self).__init__(module)
self.hostname_cmd = self.module.get_bin_path(self.COMMAND, True)
def get_current_hostname(self):
cmd = [self.hostname_cmd]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_current_hostname(self, name):
cmd = [self.hostname_cmd, name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
def get_permanent_hostname(self):
return 'UNKNOWN'
def set_permanent_hostname(self, name):
pass
class FileStrategy(BaseStrategy):
FILE = '/etc/hostname'
def get_permanent_hostname(self):
if not os.path.isfile(self.FILE):
return ''
try:
return get_file_lines(self.FILE)
except Exception as e:
self.module.fail_json(
msg="failed to read hostname: %s" % to_native(e),
exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
with open(self.FILE, 'w+') as f:
f.write("%s\n" % name)
except Exception as e:
self.module.fail_json(
msg="failed to update hostname: %s" % to_native(e),
exception=traceback.format_exc())
class SLESStrategy(FileStrategy):
"""
This is a SLES Hostname strategy class - it edits the
/etc/HOSTNAME file.
"""
FILE = '/etc/HOSTNAME'
class RedHatStrategy(BaseStrategy):
"""
This is a Redhat Hostname strategy class - it edits the
/etc/sysconfig/network file.
"""
NETWORK_FILE = '/etc/sysconfig/network'
def get_permanent_hostname(self):
try:
for line in get_file_lines(self.NETWORK_FILE):
if line.startswith('HOSTNAME'):
k, v = line.split('=')
return v.strip()
except Exception as e:
self.module.fail_json(
msg="failed to read hostname: %s" % to_native(e),
exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
lines = []
found = False
for line in get_file_lines(self.NETWORK_FILE):
if line.startswith('HOSTNAME'):
lines.append("HOSTNAME=%s\n" % name)
found = True
else:
lines.append(line)
if not found:
lines.append("HOSTNAME=%s\n" % name)
with open(self.NETWORK_FILE, 'w+') as f:
f.writelines(lines)
except Exception as e:
self.module.fail_json(
msg="failed to update hostname: %s" % to_native(e),
exception=traceback.format_exc())
class AlpineStrategy(FileStrategy):
"""
This is an Alpine Linux Hostname manipulation strategy class - it edits
the /etc/hostname file, then runs hostname -F /etc/hostname.
"""
FILE = '/etc/hostname'
COMMAND = 'hostname'
def set_current_hostname(self, name):
super(AlpineStrategy, self).set_current_hostname(name)
hostname_cmd = self.module.get_bin_path(self.COMMAND, True)
cmd = [hostname_cmd, '-F', self.FILE]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
class SystemdStrategy(BaseStrategy):
"""
This is a Systemd hostname manipulation strategy class - it uses
the hostnamectl command.
"""
COMMAND = "hostnamectl"
def __init__(self, module):
super(SystemdStrategy, self).__init__(module)
self.hostnamectl_cmd = self.module.get_bin_path(self.COMMAND, True)
def get_current_hostname(self):
cmd = [self.hostnamectl_cmd, '--transient', 'status']
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_current_hostname(self, name):
if len(name) > 64:
self.module.fail_json(msg="name cannot be longer than 64 characters on systemd servers, try a shorter name")
cmd = [self.hostnamectl_cmd, '--transient', 'set-hostname', name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
def get_permanent_hostname(self):
cmd = [self.hostnamectl_cmd, '--static', 'status']
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_permanent_hostname(self, name):
if len(name) > 64:
self.module.fail_json(msg="name cannot be longer than 64 characters on systemd servers, try a shorter name")
cmd = [self.hostnamectl_cmd, '--pretty', 'set-hostname', name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
cmd = [self.hostnamectl_cmd, '--static', 'set-hostname', name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
class OpenRCStrategy(BaseStrategy):
"""
This is a Gentoo (OpenRC) Hostname manipulation strategy class - it edits
the /etc/conf.d/hostname file.
"""
FILE = '/etc/conf.d/hostname'
def get_permanent_hostname(self):
if not os.path.isfile(self.FILE):
return ''
try:
for line in get_file_lines(self.FILE):
line = line.strip()
if line.startswith('hostname='):
return line[10:].strip('"')
except Exception as e:
self.module.fail_json(
msg="failed to read hostname: %s" % to_native(e),
exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
lines = [x.strip() for x in get_file_lines(self.FILE)]
for i, line in enumerate(lines):
if line.startswith('hostname='):
lines[i] = 'hostname="%s"' % name
break
with open(self.FILE, 'w') as f:
f.write('\n'.join(lines) + '\n')
except Exception as e:
self.module.fail_json(
msg="failed to update hostname: %s" % to_native(e),
exception=traceback.format_exc())
class OpenBSDStrategy(FileStrategy):
"""
This is an OpenBSD family Hostname manipulation strategy class - it edits
the /etc/myname file.
"""
FILE = '/etc/myname'
class SolarisStrategy(BaseStrategy):
"""
This is a Solaris 11 or later Hostname manipulation strategy class - it
executes the hostname command.
"""
COMMAND = "hostname"
def __init__(self, module):
super(SolarisStrategy, self).__init__(module)
self.hostname_cmd = self.module.get_bin_path(self.COMMAND, True)
def set_current_hostname(self, name):
cmd_option = '-t'
cmd = [self.hostname_cmd, cmd_option, name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
def get_permanent_hostname(self):
fmri = 'svc:/system/identity:node'
pattern = 'config/nodename'
cmd = '/usr/sbin/svccfg -s %s listprop -o value %s' % (fmri, pattern)
rc, out, err = self.module.run_command(cmd, use_unsafe_shell=True)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_permanent_hostname(self, name):
cmd = [self.hostname_cmd, name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
class FreeBSDStrategy(BaseStrategy):
"""
This is a FreeBSD hostname manipulation strategy class - it edits
the /etc/rc.conf.d/hostname file.
"""
FILE = '/etc/rc.conf.d/hostname'
COMMAND = "hostname"
def __init__(self, module):
super(FreeBSDStrategy, self).__init__(module)
self.hostname_cmd = self.module.get_bin_path(self.COMMAND, True)
def get_current_hostname(self):
cmd = [self.hostname_cmd]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_current_hostname(self, name):
cmd = [self.hostname_cmd, name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
def get_permanent_hostname(self):
if not os.path.isfile(self.FILE):
return ''
try:
for line in get_file_lines(self.FILE):
line = line.strip()
if line.startswith('hostname='):
return line[10:].strip('"')
except Exception as e:
self.module.fail_json(
msg="failed to read hostname: %s" % to_native(e),
exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
if os.path.isfile(self.FILE):
lines = [x.strip() for x in get_file_lines(self.FILE)]
for i, line in enumerate(lines):
if line.startswith('hostname='):
lines[i] = 'hostname="%s"' % name
break
else:
lines = ['hostname="%s"' % name]
with open(self.FILE, 'w') as f:
f.write('\n'.join(lines) + '\n')
except Exception as e:
self.module.fail_json(
msg="failed to update hostname: %s" % to_native(e),
exception=traceback.format_exc())
class DarwinStrategy(BaseStrategy):
"""
This is a macOS hostname manipulation strategy class. It uses
/usr/sbin/scutil to set ComputerName, HostName, and LocalHostName.
HostName corresponds to what most platforms consider to be the hostname.
It controls the name used on the command line and over SSH.
However, macOS also has LocalHostName and ComputerName settings.
LocalHostName controls the Bonjour/ZeroConf name, used by services
like AirDrop. This class implements a method, _scrub_hostname(), that mimics
the transformations macOS makes on hostnames when entered in the Sharing
preference pane. It replaces spaces with dashes and removes all special
characters.
ComputerName is the name used for user-facing GUI services, like the
System Preferences/Sharing pane and when users connect to the Mac over the network.
"""
def __init__(self, module):
super(DarwinStrategy, self).__init__(module)
self.scutil = self.module.get_bin_path('scutil', True)
self.name_types = ('HostName', 'ComputerName', 'LocalHostName')
self.scrubbed_name = self._scrub_hostname(self.module.params['name'])
def _make_translation(self, replace_chars, replacement_chars, delete_chars):
if PY3:
return str.maketrans(replace_chars, replacement_chars, delete_chars)
if not isinstance(replace_chars, text_type) or not isinstance(replacement_chars, text_type):
raise ValueError('replace_chars and replacement_chars must both be strings')
if len(replace_chars) != len(replacement_chars):
raise ValueError('replacement_chars must be the same length as replace_chars')
table = dict(zip((ord(c) for c in replace_chars), replacement_chars))
for char in delete_chars:
table[ord(char)] = None
return table
def _scrub_hostname(self, name):
"""
LocalHostName only accepts valid DNS characters while HostName and ComputerName
accept a much wider range of characters. This function aims to mimic how macOS
translates a friendly name to the LocalHostName.
"""
# Replace all these characters with a single dash
name = to_text(name)
replace_chars = u'\'"~`!@#$%^&*(){}[]/=?+\\|-_ '
delete_chars = u".'"
table = self._make_translation(replace_chars, u'-' * len(replace_chars), delete_chars)
name = name.translate(table)
# Replace multiple dashes with a single dash
while '-' * 2 in name:
name = name.replace('-' * 2, '-')
name = name.rstrip('-')
return name
def get_current_hostname(self):
cmd = [self.scutil, '--get', 'HostName']
rc, out, err = self.module.run_command(cmd)
if rc != 0 and 'HostName: not set' not in err:
self.module.fail_json(msg="Failed to get current hostname rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def get_permanent_hostname(self):
cmd = [self.scutil, '--get', 'ComputerName']
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Failed to get permanent hostname rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_permanent_hostname(self, name):
for hostname_type in self.name_types:
cmd = [self.scutil, '--set', hostname_type]
if hostname_type == 'LocalHostName':
cmd.append(to_native(self.scrubbed_name))
else:
cmd.append(to_native(name))
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Failed to set {3} to '{2}': {0} {1}".format(to_native(out), to_native(err), to_native(name), hostname_type))
def set_current_hostname(self, name):
pass
def update_current_hostname(self):
pass
def update_permanent_hostname(self):
name = self.module.params['name']
# Get all the current host name values in the order of self.name_types
all_names = tuple(self.module.run_command([self.scutil, '--get', name_type])[1].strip() for name_type in self.name_types)
# Get the expected host name values based on the order in self.name_types
expected_names = tuple(self.scrubbed_name if n == 'LocalHostName' else name for n in self.name_types)
# Ensure all three names are updated
if all_names != expected_names:
if not self.module.check_mode:
self.set_permanent_hostname(name)
self.changed = True
class FedoraHostname(Hostname):
platform = 'Linux'
distribution = 'Fedora'
strategy_class = SystemdStrategy
class SLESHostname(Hostname):
platform = 'Linux'
distribution = 'Sles'
try:
distribution_version = get_distribution_version()
# casting to float may raise ValueError on non-SLES; we use float for a little more safety over int
if distribution_version and 10 <= float(distribution_version) <= 12:
strategy_class = SLESStrategy
else:
raise ValueError()
except ValueError:
strategy_class = UnimplementedStrategy
class OpenSUSEHostname(Hostname):
platform = 'Linux'
distribution = 'Opensuse'
strategy_class = SystemdStrategy
class OpenSUSELeapHostname(Hostname):
platform = 'Linux'
distribution = 'Opensuse-leap'
strategy_class = SystemdStrategy
class OpenSUSETumbleweedHostname(Hostname):
platform = 'Linux'
distribution = 'Opensuse-tumbleweed'
strategy_class = SystemdStrategy
class AsteraHostname(Hostname):
platform = 'Linux'
distribution = '"astralinuxce"'
strategy_class = SystemdStrategy
class ArchHostname(Hostname):
platform = 'Linux'
distribution = 'Arch'
strategy_class = SystemdStrategy
class ArchARMHostname(Hostname):
platform = 'Linux'
distribution = 'Archarm'
strategy_class = SystemdStrategy
class AlmaLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Almalinux'
strategy_class = SystemdStrategy
class ManjaroHostname(Hostname):
platform = 'Linux'
distribution = 'Manjaro'
strategy_class = SystemdStrategy
class ManjaroARMHostname(Hostname):
platform = 'Linux'
distribution = 'Manjaro-arm'
strategy_class = SystemdStrategy
class RHELHostname(Hostname):
platform = 'Linux'
distribution = 'Redhat'
strategy_class = RedHatStrategy
class CentOSHostname(Hostname):
platform = 'Linux'
distribution = 'Centos'
strategy_class = RedHatStrategy
class AnolisOSHostname(Hostname):
platform = 'Linux'
distribution = 'Anolis'
strategy_class = RedHatStrategy
class ClearLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Clear-linux-os'
strategy_class = SystemdStrategy
class CloudlinuxserverHostname(Hostname):
platform = 'Linux'
distribution = 'Cloudlinuxserver'
strategy_class = RedHatStrategy
class CloudlinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Cloudlinux'
strategy_class = RedHatStrategy
class AlinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Alinux'
strategy_class = RedHatStrategy
class CoreosHostname(Hostname):
platform = 'Linux'
distribution = 'Coreos'
strategy_class = SystemdStrategy
class FlatcarHostname(Hostname):
platform = 'Linux'
distribution = 'Flatcar'
strategy_class = SystemdStrategy
class ScientificHostname(Hostname):
platform = 'Linux'
distribution = 'Scientific'
strategy_class = RedHatStrategy
class OracleLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Oracle'
strategy_class = RedHatStrategy
class VirtuozzoLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Virtuozzo'
strategy_class = RedHatStrategy
class AmazonLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Amazon'
strategy_class = RedHatStrategy
class DebianHostname(Hostname):
platform = 'Linux'
distribution = 'Debian'
strategy_class = FileStrategy
class KylinHostname(Hostname):
platform = 'Linux'
distribution = 'Kylin'
strategy_class = FileStrategy
class CumulusHostname(Hostname):
platform = 'Linux'
distribution = 'Cumulus-linux'
strategy_class = FileStrategy
class KaliHostname(Hostname):
platform = 'Linux'
distribution = 'Kali'
strategy_class = FileStrategy
class ParrotHostname(Hostname):
platform = 'Linux'
distribution = 'Parrot'
strategy_class = FileStrategy
class UbuntuHostname(Hostname):
platform = 'Linux'
distribution = 'Ubuntu'
strategy_class = FileStrategy
class LinuxmintHostname(Hostname):
platform = 'Linux'
distribution = 'Linuxmint'
strategy_class = FileStrategy
class LinaroHostname(Hostname):
platform = 'Linux'
distribution = 'Linaro'
strategy_class = FileStrategy
class DevuanHostname(Hostname):
platform = 'Linux'
distribution = 'Devuan'
strategy_class = FileStrategy
class RaspbianHostname(Hostname):
platform = 'Linux'
distribution = 'Raspbian'
strategy_class = FileStrategy
class GentooHostname(Hostname):
platform = 'Linux'
distribution = 'Gentoo'
strategy_class = OpenRCStrategy
class ALTLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Altlinux'
strategy_class = RedHatStrategy
class AlpineLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Alpine'
strategy_class = AlpineStrategy
class OpenBSDHostname(Hostname):
platform = 'OpenBSD'
distribution = None
strategy_class = OpenBSDStrategy
class SolarisHostname(Hostname):
platform = 'SunOS'
distribution = None
strategy_class = SolarisStrategy
class FreeBSDHostname(Hostname):
platform = 'FreeBSD'
distribution = None
strategy_class = FreeBSDStrategy
class NetBSDHostname(Hostname):
platform = 'NetBSD'
distribution = None
strategy_class = FreeBSDStrategy
class NeonHostname(Hostname):
platform = 'Linux'
distribution = 'Neon'
strategy_class = FileStrategy
class DarwinHostname(Hostname):
platform = 'Darwin'
distribution = None
strategy_class = DarwinStrategy
class OsmcHostname(Hostname):
platform = 'Linux'
distribution = 'Osmc'
strategy_class = SystemdStrategy
class PardusHostname(Hostname):
platform = 'Linux'
distribution = 'Pardus'
strategy_class = SystemdStrategy
class VoidLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Void'
strategy_class = FileStrategy
class PopHostname(Hostname):
platform = 'Linux'
distribution = 'Pop'
strategy_class = FileStrategy
class RockyHostname(Hostname):
platform = 'Linux'
distribution = 'Rocky'
strategy_class = SystemdStrategy
class RedosHostname(Hostname):
platform = 'Linux'
distribution = 'Redos'
strategy_class = SystemdStrategy
def main():
module = AnsibleModule(
argument_spec=dict(
name=dict(type='str', required=True),
use=dict(type='str', choices=STRATS.keys())
),
supports_check_mode=True,
)
hostname = Hostname(module)
name = module.params['name']
current_hostname = hostname.get_current_hostname()
permanent_hostname = hostname.get_permanent_hostname()
changed = hostname.update_current_and_permanent_hostname()
if name != current_hostname:
name_before = current_hostname
else:
name_before = permanent_hostname
# NOTE: socket.getfqdn() calls gethostbyaddr(socket.gethostname()), which can be
# slow to return if the name does not resolve correctly.
kw = dict(changed=changed, name=name,
ansible_facts=dict(ansible_hostname=name.split('.')[0],
ansible_nodename=name,
ansible_fqdn=socket.getfqdn(),
ansible_domain='.'.join(socket.getfqdn().split('.')[1:])))
if changed:
kw['diff'] = {'after': 'hostname = ' + name + '\n',
'before': 'hostname = ' + name_before + '\n'}
module.exit_json(**kw)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,027 |
ansible.builtin.hostname, RedHat strategy assumes Python 2
|
### Summary
The [`RedHatStrategy` class in `lib/ansible/modules/hostname.py`](https://github.com/ansible/ansible/blob/c5d8dc0e119c8a1632afae8b55fb15e2a4a44177/lib/ansible/modules/hostname.py#L283-L318) does not follow the [Unicode Sandwich principles](https://docs.ansible.com/ansible/latest/dev_guide/developing_python_3.html#unicode-sandwich-common-borders-places-to-convert-bytes-to-text-in-controller-code) and so runs into an exception when the module is executed with Python 3:
```python
Traceback (most recent call last):
File "/tmp/ansible_hostname_payload_zjjaaj3t/ansible_hostname_payload.zip/ansible/modules/hostname.py", line 315, in get_permanent_hostname
TypeError: startswith first arg must be bytes or a tuple of bytes, not str
```
### Issue Type
Bug Report
### Component Name
ansible.builtin.hostname
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.5]
config file = None
configured module search path = ['/Users/mj/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/4.6.0/libexec/lib/python3.9/site-packages/ansible
ansible collection location = /Users/mj/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.7 (default, Sep 3 2021, 12:37:55) [Clang 12.0.5 (clang-1205.0.22.9)]
jinja version = 3.0.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
CentOS 8 Stream
### Steps to Reproduce
The following commands start a CentOS 8 docker container in the background, then attempt to set the hostname on that container; the last command removes the container again:
```sh
docker run -d --rm --cap-add SYS_ADMIN --name mcve-hostname centos:8 /bin/sh -c 'while true; do sleep 5; done'
ansible -i mcve-hostname, -c docker -m ansible.builtin.hostname -a 'name=mcve-hostname use=redhat' all -vvvv
docker stop mcve-hostname
```
Note: the `--cap-add SYS_ADMIN` option for `docker run` is used to [allow the `hostname` command to be used](https://github.com/moby/moby/issues/8902#issuecomment-218911749).
### Expected Results
I expected the hostname command to run without issue.
### Actual Results
```console
ansible [core 2.11.5]
config file = None
configured module search path = ['/Users/mj/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/4.6.0/libexec/lib/python3.9/site-packages/ansible
ansible collection location = /Users/mj/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.7 (default, Sep 3 2021, 12:37:55) [Clang 12.0.5 (clang-1205.0.22.9)]
jinja version = 3.0.1
libyaml = True
No config file found; using defaults
setting up inventory plugins
Parsed mcve-hostname, inventory source with host_list plugin
Loading callback plugin minimal of type stdout, v2.0 from /usr/local/Cellar/ansible/4.6.0/libexec/lib/python3.9/site-packages/ansible/plugins/callback/minimal.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
META: ran handlers
redirecting (type: connection) ansible.builtin.docker to community.docker.docker
Loading collection community.docker from /Users/mj/.ansible/collections/ansible_collections/community/docker
<mcve-hostname> ESTABLISH DOCKER CONNECTION FOR USER: root
<mcve-hostname> EXEC ['/usr/local/bin/docker', b'exec', b'-i', 'mcve-hostname', '/bin/sh', '-c', "/bin/sh -c 'echo ~ && sleep 0'"]
<mcve-hostname> EXEC ['/usr/local/bin/docker', b'exec', b'-i', 'mcve-hostname', '/bin/sh', '-c', '/bin/sh -c \'( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir "` echo /root/.ansible/tmp/ansible-tmp-1634119089.8989718-55435-22274985961259 `" && echo ansible-tmp-1634119089.8989718-55435-22274985961259="` echo /root/.ansible/tmp/ansible-tmp-1634119089.8989718-55435-22274985961259 `" ) && sleep 0\'']
<mcve-hostname> Attempting python interpreter discovery
<mcve-hostname> EXEC ['/usr/local/bin/docker', b'exec', b'-i', 'mcve-hostname', '/bin/sh', '-c', '/bin/sh -c \'echo PLATFORM; uname; echo FOUND; command -v \'"\'"\'/usr/bin/python\'"\'"\'; command -v \'"\'"\'python3.9\'"\'"\'; command -v \'"\'"\'python3.8\'"\'"\'; command -v \'"\'"\'python3.7\'"\'"\'; command -v \'"\'"\'python3.6\'"\'"\'; command -v \'"\'"\'python3.5\'"\'"\'; command -v \'"\'"\'python2.7\'"\'"\'; command -v \'"\'"\'python2.6\'"\'"\'; command -v \'"\'"\'/usr/libexec/platform-python\'"\'"\'; command -v \'"\'"\'/usr/bin/python3\'"\'"\'; command -v \'"\'"\'python\'"\'"\'; echo ENDFOUND && sleep 0\'']
<mcve-hostname> EXEC ['/usr/local/bin/docker', b'exec', b'-i', 'mcve-hostname', '/bin/sh', '-c', "/bin/sh -c '/usr/libexec/platform-python && sleep 0'"]
Using module file /usr/local/Cellar/ansible/4.6.0/libexec/lib/python3.9/site-packages/ansible/modules/hostname.py
<mcve-hostname> PUT /Users/mj/.ansible/tmp/ansible-local-55433gdx_vwgr/tmp6ggw39on TO /root/.ansible/tmp/ansible-tmp-1634119089.8989718-55435-22274985961259/AnsiballZ_hostname.py
<mcve-hostname> EXEC ['/usr/local/bin/docker', b'exec', b'-i', 'mcve-hostname', '/bin/sh', '-c', "/bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1634119089.8989718-55435-22274985961259/ /root/.ansible/tmp/ansible-tmp-1634119089.8989718-55435-22274985961259/AnsiballZ_hostname.py && sleep 0'"]
<mcve-hostname> EXEC ['/usr/local/bin/docker', b'exec', b'-i', 'mcve-hostname', '/bin/sh', '-c', "/bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1634119089.8989718-55435-22274985961259/AnsiballZ_hostname.py && sleep 0'"]
<mcve-hostname> EXEC ['/usr/local/bin/docker', b'exec', b'-i', 'mcve-hostname', '/bin/sh', '-c', "/bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1634119089.8989718-55435-22274985961259/ > /dev/null 2>&1 && sleep 0'"]
The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_ansible.builtin.hostname_payload_wgq1w34m/ansible_ansible.builtin.hostname_payload.zip/ansible/modules/hostname.py", line 315, in get_permanent_hostname
TypeError: startswith first arg must be bytes or a tuple of bytes, not str
mcve-hostname | FAILED! => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/libexec/platform-python"
},
"changed": false,
"invocation": {
"module_args": {
"name": "mcve-hostname",
"use": "redhat"
}
},
"msg": "failed to read hostname: startswith first arg must be bytes or a tuple of bytes, not str"
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76027
|
https://github.com/ansible/ansible/pull/76032
|
abac141122d3c3beac29dae7f04551fb05ed9c74
|
08af0fbf95220f94e9bdcfdb3bb27f1b4d58f930
| 2021-10-13T09:28:03Z |
python
| 2021-11-15T06:15:20Z |
test/units/modules/test_hostname.py
|
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from units.compat.mock import patch, MagicMock, mock_open
from ansible.module_utils import basic
from ansible.module_utils.common._utils import get_all_subclasses
from ansible.modules import hostname
from units.modules.utils import ModuleTestCase, set_module_args
from ansible.module_utils.six import PY2
class TestHostname(ModuleTestCase):
@patch('os.path.isfile')
def test_strategy_get_never_writes_in_check_mode(self, isfile):
isfile.return_value = True
set_module_args({'name': 'fooname', '_ansible_check_mode': True})
subclasses = get_all_subclasses(hostname.BaseStrategy)
module = MagicMock()
for cls in subclasses:
instance = cls(module)
instance.module.run_command = MagicMock()
instance.module.run_command.return_value = (0, '', '')
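# Patch builtins.open with mock_open so any write attempted by the get_*
# methods is captured by the mock and asserted against below.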
m = mock_open()
builtins = 'builtins'
if PY2:
builtins = '__builtin__'
with patch('%s.open' % builtins, m):
instance.get_permanent_hostname()
instance.get_current_hostname()
self.assertFalse(
m.return_value.write.called,
msg='%s called write, should not have' % str(cls))
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,316 |
ansible-galaxy install results in infinite loop when write permissions missing on target dir
|
### Summary
If you use `ansible-galaxy` to install a role to a path specified by both the environment variable `ANSIBLE_ROLES_PATH` and `--roles-path=`, galaxy will fail to extract the role and loop forever trying to extract the archive.
### Issue Type
Bug Report
### Component Name
galaxy
### Ansible Version
```console
$ ansible --version
ansible 2.9.27
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/ncelebic/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/ncelebic/py3venv/lib/python3.8/site-packages/ansible
executable location = /home/ncelebic/py3venv/bin/ansible
python version = 3.8.10 (default, Sep 28 2021, 16:10:42) [GCC 9.3.0]
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
Ubuntu 20.04
Python 3.8 virtualenv
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```
mkdir roles
chmod -w roles
ANSIBLE_ROLES_PATH=./roles/ ansible-galaxy install geerlingguy.repo-epel --roles-path=./roles
```
### Expected Results
If you use just the env var `ANSIBLE_ROLES_PATH` it errors out with:
```
/home/ncelebic/test/galaxy_test/roles/geerlingguy.repo-epel: [Errno 13] Permission denied:
'/home/ncelebic/test/galaxy_test/roles/geerlingguy.repo-epel'
```
That is the expected outcome.
### Actual Results
```console
$ ANSIBLE_ROLES_PATH=./roles/ ansible-galaxy install geerlingguy.repo-epel --roles-path=./roles
- downloading role 'repo-epel', owned by geerlingguy
- downloading role from https://github.com/geerlingguy/ansible-role-repo-epel/archive/3.1.0.tar.gz
- extracting geerlingguy.repo-epel to /home/ncelebic/test/galaxy_test/roles/geerlingguy.repo-epel
- extracting geerlingguy.repo-epel to /home/ncelebic/test/galaxy_test/roles/geerlingguy.repo-epel
- extracting geerlingguy.repo-epel to /home/ncelebic/test/galaxy_test/roles/geerlingguy.repo-epel
- extracting geerlingguy.repo-epel to /home/ncelebic/test/galaxy_test/roles/geerlingguy.repo-epel
- extracting geerlingguy.repo-epel to /home/ncelebic/test/galaxy_test/roles/geerlingguy.repo-epel
- extracting geerlingguy.repo-epel to /home/ncelebic/test/galaxy_test/roles/geerlingguy.repo-epel
- extracting geerlingguy.repo-epel to /home/ncelebic/test/galaxy_test/roles/geerlingguy.repo-epel
- extracting geerlingguy.repo-epel to /home/ncelebic/test/galaxy_test/roles/geerlingguy.repo-epel
- extracting geerlingguy.repo-epel to /home/ncelebic/test/galaxy_test/roles/geerlingguy.repo-epel
- extracting geerlingguy.repo-epel to /home/ncelebic/test/galaxy_test/roles/geerlingguy.repo-epel
- extracting geerlingguy.repo-epel to /home/ncelebic/test/galaxy_test/roles/geerlingguy.repo-epel
- extracting geerlingguy.repo-epel to /home/ncelebic/test/galaxy_test/roles/geerlingguy.repo-epel
- extracting geerlingguy.repo-epel to /home/ncelebic/test/galaxy_test/roles/geerlingguy.repo-epel
- extracting geerlingguy.repo-epel to /home/ncelebic/test/galaxy_test/roles/geerlingguy.repo-epel
- extracting geerlingguy.repo-epel to /home/ncelebic/test/galaxy_test/roles/geerlingguy.repo-epel
- extracting geerlingguy.repo-epel to /home/ncelebic/test/galaxy_test/roles/geerlingguy.repo-epel
- extracting geerlingguy.repo-epel to /home/ncelebic/test/galaxy_test/roles/geerlingguy.repo-epel
- extracting geerlingguy.repo-epel to /home/ncelebic/test/galaxy_test/roles/geerlingguy.repo-epel
- extracting geerlingguy.repo-epel to /home/ncelebic/test/galaxy_test/roles/geerlingguy.repo-epel
... forever
```
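The repeated messages suggest the EACCES fallback cycles through the configured roles paths without ever exhausting them. As a point of comparison, here is a minimal sketch (illustrative names, not ansible-galaxy's actual internals) of a fallback that is guaranteed to terminate:
```python
import errno
import os
import tarfile

def extract_role(archive_path, candidate_paths):
    # Try each configured roles path exactly once; once the list is
    # exhausted, re-raise instead of retrying an unwritable directory forever.
    last_error = None
    for path in candidate_paths:
        try:
            os.makedirs(path, exist_ok=True)
            with tarfile.open(archive_path) as archive:
                archive.extractall(path)
            return path
        except OSError as e:
            if e.errno != errno.EACCES:
                raise
            last_error = e
    if last_error is None:
        raise ValueError('no candidate paths given')
    raise last_error
```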
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76316
|
https://github.com/ansible/ansible/pull/76318
|
0ddaf6edd66dc7f7e24cd8ec66aaf9a5e23fd63f
|
f59d0456e1bd4e2a7d45366acfaed92f6157f0f6
| 2021-11-17T22:42:44Z |
python
| 2021-11-22T12:28:02Z |
changelogs/fragments/76316-galaxy-install-role-inaccessible-dir-fix.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,316 |
ansible-galaxy install results in infinite loop when write permissions missing on target dir
|
### Summary
If you use `ansible-galaxy` to install a role to a path specified by both the environment variable `ANSIBLE_ROLES_PATH` and `--roles-path=`, galaxy will fail to extract the role and loop forever trying to extract the archive.
### Issue Type
Bug Report
### Component Name
galaxy
### Ansible Version
```console
$ ansible --version
ansible 2.9.27
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/ncelebic/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/ncelebic/py3venv/lib/python3.8/site-packages/ansible
executable location = /home/ncelebic/py3venv/bin/ansible
python version = 3.8.10 (default, Sep 28 2021, 16:10:42) [GCC 9.3.0]
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
Ubuntu 20.04
Python 3.8 virtualenv
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```
mkdir roles
chmod -w roles
ANSIBLE_ROLES_PATH=./roles/ ansible-galaxy install geerlingguy.repo-epel --roles-path=./roles
```
### Expected Results
If you use just the env var `ANSIBLE_ROLES_PATH` it errors out with:
```
/home/ncelebic/test/galaxy_test/roles/geerlingguy.repo-epel: [Errno 13] Permission denied:
'/home/ncelebic/test/galaxy_test/roles/geerlingguy.repo-epel'
```
That is the expected outcome.
### Actual Results
```console
$ ANSIBLE_ROLES_PATH=./roles/ ansible-galaxy install geerlingguy.repo-epel --roles-path=./roles
- downloading role 'repo-epel', owned by geerlingguy
- downloading role from https://github.com/geerlingguy/ansible-role-repo-epel/archive/3.1.0.tar.gz
- extracting geerlingguy.repo-epel to /home/ncelebic/test/galaxy_test/roles/geerlingguy.repo-epel
- extracting geerlingguy.repo-epel to /home/ncelebic/test/galaxy_test/roles/geerlingguy.repo-epel
- extracting geerlingguy.repo-epel to /home/ncelebic/test/galaxy_test/roles/geerlingguy.repo-epel
- extracting geerlingguy.repo-epel to /home/ncelebic/test/galaxy_test/roles/geerlingguy.repo-epel
- extracting geerlingguy.repo-epel to /home/ncelebic/test/galaxy_test/roles/geerlingguy.repo-epel
- extracting geerlingguy.repo-epel to /home/ncelebic/test/galaxy_test/roles/geerlingguy.repo-epel
- extracting geerlingguy.repo-epel to /home/ncelebic/test/galaxy_test/roles/geerlingguy.repo-epel
- extracting geerlingguy.repo-epel to /home/ncelebic/test/galaxy_test/roles/geerlingguy.repo-epel
- extracting geerlingguy.repo-epel to /home/ncelebic/test/galaxy_test/roles/geerlingguy.repo-epel
- extracting geerlingguy.repo-epel to /home/ncelebic/test/galaxy_test/roles/geerlingguy.repo-epel
- extracting geerlingguy.repo-epel to /home/ncelebic/test/galaxy_test/roles/geerlingguy.repo-epel
- extracting geerlingguy.repo-epel to /home/ncelebic/test/galaxy_test/roles/geerlingguy.repo-epel
- extracting geerlingguy.repo-epel to /home/ncelebic/test/galaxy_test/roles/geerlingguy.repo-epel
- extracting geerlingguy.repo-epel to /home/ncelebic/test/galaxy_test/roles/geerlingguy.repo-epel
- extracting geerlingguy.repo-epel to /home/ncelebic/test/galaxy_test/roles/geerlingguy.repo-epel
- extracting geerlingguy.repo-epel to /home/ncelebic/test/galaxy_test/roles/geerlingguy.repo-epel
- extracting geerlingguy.repo-epel to /home/ncelebic/test/galaxy_test/roles/geerlingguy.repo-epel
- extracting geerlingguy.repo-epel to /home/ncelebic/test/galaxy_test/roles/geerlingguy.repo-epel
- extracting geerlingguy.repo-epel to /home/ncelebic/test/galaxy_test/roles/geerlingguy.repo-epel
... forever
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76316
|
https://github.com/ansible/ansible/pull/76318
|
0ddaf6edd66dc7f7e24cd8ec66aaf9a5e23fd63f
|
f59d0456e1bd4e2a7d45366acfaed92f6157f0f6
| 2021-11-17T22:42:44Z |
python
| 2021-11-22T12:28:02Z |
lib/ansible/galaxy/role.py
|
########################################################################
#
# (C) 2015, Brian Coca <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
########################################################################
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import errno
import datetime
import os
import tarfile
import tempfile
from ansible.module_utils.compat.version import LooseVersion
from shutil import rmtree
from ansible import context
from ansible.errors import AnsibleError
from ansible.galaxy.user_agent import user_agent
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.common.yaml import yaml_dump, yaml_load
from ansible.module_utils.urls import open_url
from ansible.playbook.role.requirement import RoleRequirement
from ansible.utils.display import Display
display = Display()
class GalaxyRole(object):
SUPPORTED_SCMS = set(['git', 'hg'])
META_MAIN = (os.path.join('meta', 'main.yml'), os.path.join('meta', 'main.yaml'))
META_INSTALL = os.path.join('meta', '.galaxy_install_info')
META_REQUIREMENTS = (os.path.join('meta', 'requirements.yml'), os.path.join('meta', 'requirements.yaml'))
ROLE_DIRS = ('defaults', 'files', 'handlers', 'meta', 'tasks', 'templates', 'vars', 'tests')
def __init__(self, galaxy, api, name, src=None, version=None, scm=None, path=None):
self._metadata = None
self._requirements = None
self._install_info = None
self._validate_certs = not context.CLIARGS['ignore_certs']
display.debug('Validate TLS certificates: %s' % self._validate_certs)
self.galaxy = galaxy
self.api = api
self.name = name
self.version = version
self.src = src or name
self.download_url = None
self.scm = scm
self.paths = [os.path.join(x, self.name) for x in galaxy.roles_paths]
if path is not None:
if not path.endswith(os.path.join(os.path.sep, self.name)):
path = os.path.join(path, self.name)
else:
# Look for a meta/main.ya?ml inside the potential role dir in case
# the role name is the same as parent directory of the role.
#
# Example:
# ./roles/testing/testing/meta/main.yml
for meta_main in self.META_MAIN:
if os.path.exists(os.path.join(path, name, meta_main)):
path = os.path.join(path, self.name)
break
self.path = path
else:
# use the first path by default
self.path = os.path.join(galaxy.roles_paths[0], self.name)
def __repr__(self):
"""
Returns "rolename (version)" if version is not null
Returns "rolename" otherwise
"""
if self.version:
return "%s (%s)" % (self.name, self.version)
else:
return self.name
def __eq__(self, other):
return self.name == other.name
@property
def metadata(self):
"""
Returns role metadata
"""
if self._metadata is None:
for path in self.paths:
for meta_main in self.META_MAIN:
meta_path = os.path.join(path, meta_main)
if os.path.isfile(meta_path):
try:
with open(meta_path, 'r') as f:
self._metadata = yaml_load(f)
except Exception:
display.vvvvv("Unable to load metadata for %s" % self.name)
return False
break
return self._metadata
@property
def install_info(self):
"""
Returns role install info
"""
if self._install_info is None:
info_path = os.path.join(self.path, self.META_INSTALL)
if os.path.isfile(info_path):
try:
f = open(info_path, 'r')
self._install_info = yaml_load(f)
except Exception:
display.vvvvv("Unable to load Galaxy install info for %s" % self.name)
return False
finally:
f.close()
return self._install_info
@property
def _exists(self):
for path in self.paths:
if os.path.isdir(path):
return True
return False
def _write_galaxy_install_info(self):
"""
Writes a YAML-formatted file to the role's meta/ directory
(named .galaxy_install_info) which contains some information
we can use later for commands like 'list' and 'info'.
"""
info = dict(
version=self.version,
install_date=datetime.datetime.utcnow().strftime("%c"),
)
if not os.path.exists(os.path.join(self.path, 'meta')):
os.makedirs(os.path.join(self.path, 'meta'))
info_path = os.path.join(self.path, self.META_INSTALL)
with open(info_path, 'w+') as f:
try:
self._install_info = yaml_dump(info, f)
except Exception:
return False
return True
def remove(self):
"""
Removes the specified role from the roles path.
There is a sanity check to make sure there's a meta/main.yml file at this
path so the user doesn't blow away random directories.
"""
if self.metadata:
try:
rmtree(self.path)
return True
except Exception:
pass
return False
def fetch(self, role_data):
"""
Downloads the archived role to a temp location based on role data
"""
if role_data:
# first grab the file and save it to a temp location
if self.download_url is not None:
archive_url = self.download_url
elif "github_user" in role_data and "github_repo" in role_data:
archive_url = 'https://github.com/%s/%s/archive/%s.tar.gz' % (role_data["github_user"], role_data["github_repo"], self.version)
else:
archive_url = self.src
display.display("- downloading role from %s" % archive_url)
try:
url_file = open_url(archive_url, validate_certs=self._validate_certs, http_agent=user_agent())
temp_file = tempfile.NamedTemporaryFile(delete=False)
data = url_file.read()
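                # NOTE (editor, hedged): open_url() returns a file-like HTTP
                # response; read() with no size argument returns the whole body,
                # so this loop typically finishes in a single pass rather than
                # streaming fixed-size chunks.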
while data:
temp_file.write(data)
data = url_file.read()
temp_file.close()
return temp_file.name
except Exception as e:
display.error(u"failed to download the file: %s" % to_text(e))
return False
def install(self):
if self.scm:
# create tar file from scm url
tmp_file = RoleRequirement.scm_archive_role(keep_scm_meta=context.CLIARGS['keep_scm_meta'], **self.spec)
elif self.src:
if os.path.isfile(self.src):
tmp_file = self.src
elif '://' in self.src:
role_data = self.src
tmp_file = self.fetch(role_data)
else:
role_data = self.api.lookup_role_by_name(self.src)
if not role_data:
raise AnsibleError("- sorry, %s was not found on %s." % (self.src, self.api.api_server))
if role_data.get('role_type') == 'APP':
# Container Role
display.warning("%s is a Container App role, and should only be installed using Ansible "
"Container" % self.name)
role_versions = self.api.fetch_role_related('versions', role_data['id'])
if not self.version:
# convert the version names to LooseVersion objects
# and sort them to get the latest version. If there
# are no versions in the list, we'll grab the head
# of the master branch
if len(role_versions) > 0:
loose_versions = [LooseVersion(a.get('name', None)) for a in role_versions]
try:
loose_versions.sort()
except TypeError:
raise AnsibleError(
'Unable to compare role versions (%s) to determine the most recent version due to incompatible version formats. '
'Please contact the role author to resolve versioning conflicts, or specify an explicit role version to '
'install.' % ', '.join([v.vstring for v in loose_versions])
)
self.version = to_text(loose_versions[-1])
elif role_data.get('github_branch', None):
self.version = role_data['github_branch']
else:
self.version = 'master'
elif self.version != 'master':
if role_versions and to_text(self.version) not in [a.get('name', None) for a in role_versions]:
raise AnsibleError("- the specified version (%s) of %s was not found in the list of available versions (%s)." % (self.version,
self.name,
role_versions))
# check if there's a source link/url for our role_version
for role_version in role_versions:
if role_version['name'] == self.version and 'source' in role_version:
self.src = role_version['source']
if role_version['name'] == self.version and 'download_url' in role_version:
self.download_url = role_version['download_url']
tmp_file = self.fetch(role_data)
else:
raise AnsibleError("No valid role data found")
if tmp_file:
display.debug("installing from %s" % tmp_file)
if not tarfile.is_tarfile(tmp_file):
raise AnsibleError("the downloaded file does not appear to be a valid tar archive.")
else:
role_tar_file = tarfile.open(tmp_file, "r")
# verify the role's meta file
meta_file = None
members = role_tar_file.getmembers()
# next find the metadata file
for member in members:
for meta_main in self.META_MAIN:
if meta_main in member.name:
# Look for parent of meta/main.yml
# Due to possibility of sub roles each containing meta/main.yml
# look for shortest length parent
meta_parent_dir = os.path.dirname(os.path.dirname(member.name))
if not meta_file:
archive_parent_dir = meta_parent_dir
meta_file = member
else:
if len(meta_parent_dir) < len(archive_parent_dir):
archive_parent_dir = meta_parent_dir
meta_file = member
if not meta_file:
raise AnsibleError("this role does not appear to have a meta/main.yml file.")
else:
try:
self._metadata = yaml_load(role_tar_file.extractfile(meta_file))
except Exception:
raise AnsibleError("this role does not appear to have a valid meta/main.yml file.")
# we strip off any higher-level directories for all of the files contained within
# the tar file here. The default is 'github_repo-target'. Gerrit instances, on the other
        # hand, do not have a parent directory at all.
installed = False
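        # NOTE (editor, hedged): this loop only exits when an extraction
        # attempt sets installed=True or an exception escapes. The report
        # linked in this record (#76316) shows the "- extracting ..." line
        # printed forever, i.e. the loop re-entered without making progress.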
while not installed:
display.display("- extracting %s to %s" % (self.name, self.path))
try:
if os.path.exists(self.path):
if not os.path.isdir(self.path):
raise AnsibleError("the specified roles path exists and is not a directory.")
elif not context.CLIARGS.get("force", False):
raise AnsibleError("the specified role %s appears to already exist. Use --force to replace it." % self.name)
else:
# using --force, remove the old path
if not self.remove():
raise AnsibleError("%s doesn't appear to contain a role.\n please remove this directory manually if you really "
"want to put the role here." % self.path)
else:
os.makedirs(self.path)
# now we do the actual extraction to the path
for member in members:
# we only extract files, and remove any relative path
# bits that might be in the file for security purposes
# and drop any containing directory, as mentioned above
if member.isreg() or member.issym():
n_member_name = to_native(member.name)
n_archive_parent_dir = to_native(archive_parent_dir)
n_parts = n_member_name.replace(n_archive_parent_dir, "", 1).split(os.sep)
n_final_parts = []
for n_part in n_parts:
# TODO if the condition triggers it produces a broken installation.
# It will create the parent directory as an empty file and will
# explode if the directory contains valid files.
# Leaving this as is since the whole module needs a rewrite.
if n_part != '..' and not n_part.startswith('~') and '$' not in n_part:
n_final_parts.append(n_part)
member.name = os.path.join(*n_final_parts)
role_tar_file.extract(member, to_native(self.path))
# write out the install info file for later use
self._write_galaxy_install_info()
installed = True
except OSError as e:
error = True
if e.errno == errno.EACCES and len(self.paths) > 1:
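                    # NOTE (editor, hedged): list.index() returns the first
                    # matching entry, so duplicated roles_paths entries can
                    # make this fallback select the same path again and
                    # re-enter the loop above indefinitely.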
current = self.paths.index(self.path)
if len(self.paths) > current:
self.path = self.paths[current + 1]
error = False
if error:
raise AnsibleError("Could not update files in %s: %s" % (self.path, to_native(e)))
# return the parsed yaml metadata
display.display("- %s was installed successfully" % str(self))
if not (self.src and os.path.isfile(self.src)):
try:
os.unlink(tmp_file)
except (OSError, IOError) as e:
display.warning(u"Unable to remove tmp file (%s): %s" % (tmp_file, to_text(e)))
return True
return False
@property
def spec(self):
"""
Returns role spec info
{
'scm': 'git',
'src': 'http://git.example.com/repos/repo.git',
'version': 'v1.0',
'name': 'repo'
}
"""
return dict(scm=self.scm, src=self.src, version=self.version, name=self.name)
@property
def requirements(self):
"""
Returns role requirements
"""
if self._requirements is None:
self._requirements = []
for meta_requirements in self.META_REQUIREMENTS:
meta_path = os.path.join(self.path, meta_requirements)
if os.path.isfile(meta_path):
try:
f = open(meta_path, 'r')
self._requirements = yaml_load(f)
except Exception:
display.vvvvv("Unable to load requirements for %s" % self.name)
finally:
f.close()
break
return self._requirements
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,373 |
Github issue template: outdated hint about editing docs
|
##### SUMMARY
When creating a new issue, this hint is still there:
--- HINT: Did you know the documentation has an "Edit on GitHub" link on every page ? --
As per #70267, that is not the case anymore.
To reduce the confusion, it is suggested to remove this hint.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
github
##### ANSIBLE VERSION
latest
|
https://github.com/ansible/ansible/issues/72373
|
https://github.com/ansible/ansible/pull/76341
|
f59d0456e1bd4e2a7d45366acfaed92f6157f0f6
|
23414869e2d4ab313212528ecf0f2ef8f0a570e1
| 2020-10-28T08:36:39Z |
python
| 2021-11-22T16:28:34Z |
.github/ISSUE_TEMPLATE/documentation_report.yml
|
---
name: 📝 Documentation Report
description: Ask us about docs
body:
- type: markdown
attributes:
value: >
**Thank you for wanting to report a problem with ansible-core
documentation!**
Please fill out your suggestions below. If the problem seems
straightforward, feel free to go ahead and
[submit a pull request] instead!
⚠
Verify first that your issue is not [already reported on
GitHub][issue search].
Also test if the latest release and devel branch are affected too.
**Tip:** If you are seeking community support, please consider
[starting a mailing list thread or chatting in IRC][ML||IRC].
[ML||IRC]:
https://docs.ansible.com/ansible-core/devel/community/communication.html?utm_medium=github&utm_source=issue_form--documentation_report.yml#mailing-list-information
[issue search]: ../search?q=is%3Aissue&type=issues
[submit a pull request]:
https://docs.ansible.com/ansible-core/devel/community/documentation_contributions.html
- type: textarea
attributes:
label: Summary
description: >
Explain the problem briefly below, add suggestions to wording
or structure.
**HINT:** Did you know the documentation has a `View on GitHub`
link on every page? Feel free to use it to start a pull request
right from the GitHub UI!
placeholder: >-
I was reading the ansible-core documentation of version X and I'm having
problems understanding Y. It would be very helpful if that got
rephrased as Z.
validations:
required: true
- type: dropdown
attributes:
label: Issue Type
description: >
Please select the single available option in the drop-down.
<details>
<summary>
<em>Why?</em>
</summary>
        We would do it ourselves, but unfortunately the current
        edition of GitHub Issue Forms Alpha does not support this yet 🤷
_We will make it easier in the future, once GitHub
supports dropdown defaults. Promise!_
</details>
# FIXME: Once GitHub allows defining the default choice, update this
options:
- Documentation Report
validations:
required: true
- type: input
attributes:
label: Component Name
description: >
Write the short name of the rst file, module, plugin, task or
feature below, *use your best guess if unsure*.
**Tip:** Be advised that the source for some parts of the
documentation are hosted outside of this repository. If the page
you are reporting describes modules/plugins/etc that are not
officially supported by the Ansible Core Engineering team, there
is a good chance that it is coming from one of the [Ansible
Collections maintained by the community][collections org]. If this
is the case, please make sure to file an issue under the
appropriate project there instead.
[collections org]: /ansible-collections
placeholder: docs/docsite/rst/dev_guide/debugging.rst
validations:
required: true
- type: textarea
attributes:
label: Ansible Version
description: >-
Paste verbatim output from `ansible --version` below, under
        the prompt line. Please don't wrap it with triple backticks — your
whole input will be turned into a code snippet automatically.
render: console
value: |
$ ansible --version
placeholder: |
$ ansible --version
ansible [core 2.11.0b4.post0] (detached HEAD ref: refs/) last updated 2021/04/02 00:33:35 (GMT +200)
config file = None
configured module search path = ['~/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = ~/src/github/ansible/ansible/lib/ansible
ansible collection location = ~/.ansible/collections:/usr/share/ansible/collections
executable location = bin/ansible
python version = 3.9.0 (default, Oct 26 2020, 13:08:59) [GCC 10.2.0]
jinja version = 2.11.3
libyaml = True
validations:
required: true
- type: textarea
attributes:
label: Configuration
description: >-
Paste verbatim output from `ansible-config dump --only-changed`
below, under the prompt line.
        Please don't wrap it with triple backticks — your
whole input will be turned into a code snippet automatically.
render: console
value: |
$ ansible-config dump --only-changed
placeholder: |
$ ansible-config dump --only-changed
DEFAULT_GATHERING(~/src/github/ansible/ansible/ansible.cfg) = smart
DEFAULT_HOST_LIST(~/src/github/ansible/ansible/ansible.cfg) = ['~/src/github/ansible/ansible/hosts']
DEFAULT_VAULT_PASSWORD_FILE(~/src/github/ansible/ansible/ansible.cfg) = ~/src/github/ansible/ansible/vault/print-password.sh
validations:
required: true
- type: textarea
attributes:
label: OS / Environment
description: >-
Provide all relevant information below, e.g. OS version,
browser, etc.
placeholder: Fedora 33, Firefox etc.
validations:
required: true
- type: textarea
attributes:
label: Additional Information
description: |
Describe how this improves the documentation, e.g. before/after situation or screenshots.
**HINT:** You can paste https://gist.github.com links for larger files.
placeholder: >-
When the improvement is applied, it makes it more straightforward
to understand X.
validations:
required: true
- type: markdown
attributes:
value: >
*One last thing...*
*Please, complete **all** sections as described, this form
is [processed automatically by a robot][ansibot help].*
Thank you for your collaboration!
[ansibot help]:
/ansible/ansibullbot/blob/master/ISSUE_HELP.md#for-issue-submitters
- type: checkboxes
attributes:
label: Code of Conduct
description: |
Read the [Ansible Code of Conduct][CoC] first.
[CoC]: https://docs.ansible.com/ansible/latest/community/code_of_conduct.html?utm_medium=github&utm_source=issue_form--documentation_report.yml
options:
- label: I agree to follow the Ansible Code of Conduct
required: true
...
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,325 |
Add 'allow_change_held_packages' option to apt module.
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
When installing or updating an already held apt package with the `apt` module, the task fails.
This forces the use of the `force: yes` option, which is too broad and already deprecated by apt.
Add an option `allow_change_held_packages` to the `apt` module to use the `--allow-change-held-packages` `apt` option.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
apt
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Install package with specific version and hold
block:
- name: Install {{ package_name }} version {{ version }}"
apt:
name: "{{ package_name }}={{ version }}"
state: present
(force|allow_change_held_packages): yes
- name: Hold {{ package_name }} package
dpkg_selections:
name: "{{ package_name }}"
selection: hold
```
This is not idempotent without `force: yes`; the second run will fail.
The `allow_change_held_packages` will avoid problems with this special situation.
<!--- HINT: You can also paste gist.github.com links for larger files -->
Related to: https://github.com/ansible/ansible/issues/15394
|
https://github.com/ansible/ansible/issues/65325
|
https://github.com/ansible/ansible/pull/73629
|
0afc0b8506cc80897073606157c984c9176f545c
|
cd2d8b77fe699f7cc8529dfa296b0d2894630599
| 2019-11-27T11:45:18Z |
python
| 2021-11-29T15:31:54Z |
changelogs/fragments/73629_allow-change-held-packages.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,325 |
Add 'allow_change_held_packages' option to apt module.
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
When installing or updating an already held apt package with the `apt` module, the task fails.
This forces the use of the `force: yes` option, which is too broad and already deprecated by apt.
Add an option `allow_change_held_packages` to the `apt` module to use the `--allow-change-held-packages` `apt` option.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
apt
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Install package with specific version and hold
block:
- name: Install {{ package_name }} version {{ version }}"
apt:
name: "{{ package_name }}={{ version }}"
state: present
(force|allow_change_held_packages): yes
- name: Hold {{ package_name }} package
dpkg_selections:
name: "{{ package_name }}"
selection: hold
```
This is not idempotent without `force: yes`; the second run will fail.
The `allow_change_held_packages` will avoid problems with this special situation.
<!--- HINT: You can also paste gist.github.com links for larger files -->
Related to: https://github.com/ansible/ansible/issues/15394
|
https://github.com/ansible/ansible/issues/65325
|
https://github.com/ansible/ansible/pull/73629
|
0afc0b8506cc80897073606157c984c9176f545c
|
cd2d8b77fe699f7cc8529dfa296b0d2894630599
| 2019-11-27T11:45:18Z |
python
| 2021-11-29T15:31:54Z |
lib/ansible/modules/apt.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Flowroute LLC
# Written by Matthew Williams <[email protected]>
# Based on yum module written by Seth Vidal <skvidal at fedoraproject.org>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: apt
short_description: Manages apt-packages
description:
- Manages I(apt) packages (such as for Debian/Ubuntu).
version_added: "0.0.2"
options:
name:
description:
- A list of package names, like C(foo), or package specifier with version, like C(foo=1.0).
Name wildcards (fnmatch) like C(apt*) and version wildcards like C(foo=1.0*) are also supported.
aliases: [ package, pkg ]
type: list
elements: str
state:
description:
- Indicates the desired package state. C(latest) ensures that the latest version is installed. C(build-dep) ensures the package build dependencies
        are installed. C(fixed) attempts to correct a system with broken dependencies in place.
type: str
default: present
choices: [ absent, build-dep, latest, present, fixed ]
update_cache:
description:
- Run the equivalent of C(apt-get update) before the operation. Can be run as part of the package installation or as a separate step.
- Default is not to update the cache.
aliases: [ update-cache ]
type: bool
update_cache_retries:
description:
- Amount of retries if the cache update fails. Also see I(update_cache_retry_max_delay).
type: int
default: 5
version_added: '2.10'
update_cache_retry_max_delay:
description:
- Use an exponential backoff delay for each retry (see I(update_cache_retries)) up to this max delay in seconds.
type: int
default: 12
version_added: '2.10'
cache_valid_time:
description:
- Update the apt cache if it is older than the I(cache_valid_time). This option is set in seconds.
- As of Ansible 2.4, if explicitly set, this sets I(update_cache=yes).
type: int
default: 0
purge:
description:
- Will force purging of configuration files if the module state is set to I(absent).
type: bool
default: 'no'
default_release:
description:
- Corresponds to the C(-t) option for I(apt) and sets pin priorities
aliases: [ default-release ]
type: str
install_recommends:
description:
- Corresponds to the C(--no-install-recommends) option for I(apt). C(yes) installs recommended packages. C(no) does not install
recommended packages. By default, Ansible will use the same defaults as the operating system. Suggested packages are never installed.
aliases: [ install-recommends ]
type: bool
force:
description:
- 'Corresponds to the C(--force-yes) to I(apt-get) and implies C(allow_unauthenticated: yes) and C(allow_downgrade: yes)'
- "This option will disable checking both the packages' signatures and the certificates of the
web servers they are downloaded from."
- 'This option *is not* the equivalent of passing the C(-f) flag to I(apt-get) on the command line'
- '**This is a destructive operation with the potential to destroy your system, and it should almost never be used.**
Please also see C(man apt-get) for more information.'
type: bool
default: 'no'
allow_unauthenticated:
description:
- Ignore if packages cannot be authenticated. This is useful for bootstrapping environments that manage their own apt-key setup.
- 'C(allow_unauthenticated) is only supported with state: I(install)/I(present)'
aliases: [ allow-unauthenticated ]
type: bool
default: 'no'
version_added: "2.1"
allow_downgrade:
description:
- Corresponds to the C(--allow-downgrades) option for I(apt).
- This option enables the named package and version to replace an already installed higher version of that package.
- Note that setting I(allow_downgrade=true) can make this module behave in a non-idempotent way.
- (The task could end up with a set of packages that does not match the complete list of specified packages to install).
aliases: [ allow-downgrade, allow_downgrades, allow-downgrades ]
type: bool
default: 'no'
version_added: "2.12"
upgrade:
description:
- If yes or safe, performs an aptitude safe-upgrade.
- If full, performs an aptitude full-upgrade.
- If dist, performs an apt-get dist-upgrade.
- 'Note: This does not upgrade a specific package, use state=latest for that.'
- 'Note: Since 2.4, apt-get is used as a fall-back if aptitude is not present.'
version_added: "1.1"
choices: [ dist, full, 'no', safe, 'yes' ]
default: 'no'
type: str
dpkg_options:
description:
- Add dpkg options to apt command. Defaults to '-o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold"'
- Options should be supplied as comma separated list
default: force-confdef,force-confold
type: str
deb:
description:
- Path to a .deb package on the remote machine.
- If :// in the path, ansible will attempt to download deb before installing. (Version added 2.1)
- Requires the C(xz-utils) package to extract the control file of the deb package to install.
type: path
required: false
version_added: "1.6"
autoremove:
description:
- If C(yes), remove unused dependency packages for all module states except I(build-dep). It can also be used as the only option.
- Previous to version 2.4, autoclean was also an alias for autoremove, now it is its own separate command. See documentation for further information.
type: bool
default: 'no'
version_added: "2.1"
autoclean:
description:
- If C(yes), cleans the local repository of retrieved package files that can no longer be downloaded.
type: bool
default: 'no'
version_added: "2.4"
policy_rc_d:
description:
- Force the exit code of /usr/sbin/policy-rc.d.
- For example, if I(policy_rc_d=101) the installed package will not trigger a service start.
- If /usr/sbin/policy-rc.d already exists, it is backed up and restored after the package installation.
- If C(null), the /usr/sbin/policy-rc.d isn't created/changed.
type: int
default: null
version_added: "2.8"
only_upgrade:
description:
- Only upgrade a package if it is already installed.
type: bool
default: 'no'
version_added: "2.1"
fail_on_autoremove:
description:
- 'Corresponds to the C(--no-remove) option for C(apt).'
- 'If C(yes), it is ensured that no packages will be removed or the task will fail.'
      - 'C(fail_on_autoremove) is only supported with states other than C(absent)'
type: bool
default: 'no'
version_added: "2.11"
force_apt_get:
description:
- Force usage of apt-get instead of aptitude
type: bool
default: 'no'
version_added: "2.4"
lock_timeout:
description:
- How many seconds will this action wait to acquire a lock on the apt db.
- Sometimes there is a transitory lock and this will retry at least until timeout is hit.
type: int
default: 60
version_added: "2.12"
requirements:
- python-apt (python 2)
- python3-apt (python 3)
- aptitude (before 2.4)
author: "Matthew Williams (@mgwilliams)"
extends_documentation_fragment: action_common_attributes
attributes:
check_mode:
support: full
diff_mode:
support: full
platform:
platforms: debian
notes:
- Three of the upgrade modes (C(full), C(safe) and its alias C(yes)) required C(aptitude) up to 2.3, since 2.4 C(apt-get) is used as a fall-back.
- In most cases, packages installed with apt will start newly installed services by default. Most distributions have mechanisms to avoid this.
    For example, when installing Postgresql-9.5 in Debian 9, creating an executable shell script (/usr/sbin/policy-rc.d) that returns
    an exit code of 101 will stop Postgresql 9.5 from starting up after install. Remove the file or remove its execute permission afterwards.
  - The apt-get commandline supports implicit regex matches here but we do not, because it can let typos through more easily
    (if you typo C(foo) as C(fo), apt-get would install packages that have "fo" in their name with a warning and a prompt for the user;
    since we don't have warnings and prompts before installing, we disallow this. Use an explicit fnmatch pattern if you want wildcarding).
- When used with a `loop:` each package will be processed individually, it is much more efficient to pass the list directly to the `name` option.
'''
EXAMPLES = '''
- name: Install apache httpd (state=present is optional)
apt:
name: apache2
state: present
- name: Update repositories cache and install "foo" package
apt:
name: foo
update_cache: yes
- name: Remove "foo" package
apt:
name: foo
state: absent
- name: Install the package "foo"
apt:
name: foo
- name: Install a list of packages
apt:
pkg:
- foo
- foo-tools
- name: Install the version '1.00' of package "foo"
apt:
name: foo=1.00
- name: Update the repository cache and update package "nginx" to latest version using default release squeeze-backport
apt:
name: nginx
state: latest
default_release: squeeze-backports
update_cache: yes
- name: Install the version '1.18.0' of package "nginx" and allow potential downgrades
apt:
name: nginx=1.18.0
state: present
allow_downgrade: yes
- name: Install zfsutils-linux with ensuring conflicted packages (e.g. zfs-fuse) will not be removed.
apt:
name: zfsutils-linux
state: latest
fail_on_autoremove: yes
- name: Install latest version of "openjdk-6-jdk" ignoring "install-recommends"
apt:
name: openjdk-6-jdk
state: latest
install_recommends: no
- name: Update all packages to their latest version
apt:
name: "*"
state: latest
- name: Upgrade the OS (apt-get dist-upgrade)
apt:
upgrade: dist
- name: Run the equivalent of "apt-get update" as a separate step
apt:
update_cache: yes
- name: Only run "update_cache=yes" if the last one is more than 3600 seconds ago
apt:
update_cache: yes
cache_valid_time: 3600
- name: Pass options to dpkg on run
apt:
upgrade: dist
update_cache: yes
dpkg_options: 'force-confold,force-confdef'
- name: Install a .deb package
apt:
deb: /tmp/mypackage.deb
- name: Install the build dependencies for package "foo"
apt:
pkg: foo
state: build-dep
- name: Install a .deb package from the internet
apt:
deb: https://example.com/python-ppq_0.1-1_all.deb
- name: Remove useless packages from the cache
apt:
autoclean: yes
- name: Remove dependencies that are no longer required
apt:
autoremove: yes
'''
RETURN = '''
cache_updated:
description: if the cache was updated or not
returned: success, in some cases
type: bool
sample: True
cache_update_time:
description: time of the last cache update (0 if unknown)
returned: success, in some cases
type: int
sample: 1425828348000
stdout:
description: output from apt
returned: success, when needed
type: str
sample: "Reading package lists...\nBuilding dependency tree...\nReading state information...\nThe following extra packages will be installed:\n apache2-bin ..."
stderr:
description: error output from apt
returned: success, when needed
type: str
sample: "AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to ..."
''' # NOQA
# added to stave off future warnings about apt api
import warnings
warnings.filterwarnings('ignore', "apt API not stable yet", FutureWarning)
import datetime
import fnmatch
import itertools
import os
import random
import re
import shutil
import sys
import tempfile
import time
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.common.respawn import has_respawned, probe_interpreters_for_module, respawn_module
from ansible.module_utils._text import to_bytes, to_native
from ansible.module_utils.six import PY3
from ansible.module_utils.urls import fetch_file
DPKG_OPTIONS = 'force-confdef,force-confold'
APT_GET_ZERO = "\n0 upgraded, 0 newly installed"
APTITUDE_ZERO = "\n0 packages upgraded, 0 newly installed"
APT_LISTS_PATH = "/var/lib/apt/lists"
APT_UPDATE_SUCCESS_STAMP_PATH = "/var/lib/apt/periodic/update-success-stamp"
APT_MARK_INVALID_OP = 'Invalid operation'
APT_MARK_INVALID_OP_DEB6 = 'Usage: apt-mark [options] {markauto|unmarkauto} packages'
CLEAN_OP_CHANGED_STR = dict(
autoremove='The following packages will be REMOVED',
# "Del python3-q 2.4-1 [24 kB]"
autoclean='Del ',
)
apt = apt_pkg = None # keep pylint happy by declaring unconditionally
HAS_PYTHON_APT = False
try:
import apt
import apt.debfile
import apt_pkg
HAS_PYTHON_APT = True
except ImportError:
pass
class PolicyRcD(object):
"""
This class is a context manager for the /usr/sbin/policy-rc.d file.
It allow the user to prevent dpkg to start the corresponding service when installing
a package.
https://people.debian.org/~hmh/invokerc.d-policyrc.d-specification.txt
"""
def __init__(self, module):
# we need the module for later use (eg. fail_json)
self.m = module
# if policy_rc_d is null then we don't need to modify policy-rc.d
if self.m.params['policy_rc_d'] is None:
return
# if the /usr/sbin/policy-rc.d already exists
# we will back it up during package installation
# then restore it
if os.path.exists('/usr/sbin/policy-rc.d'):
self.backup_dir = tempfile.mkdtemp(prefix="ansible")
else:
self.backup_dir = None
def __enter__(self):
"""
This method will be called when we enter the context, before we call `apt-get …`
"""
# if policy_rc_d is null then we don't need to modify policy-rc.d
if self.m.params['policy_rc_d'] is None:
return
# if the /usr/sbin/policy-rc.d already exists we back it up
if self.backup_dir:
try:
shutil.move('/usr/sbin/policy-rc.d', self.backup_dir)
except Exception:
self.m.fail_json(msg="Fail to move /usr/sbin/policy-rc.d to %s" % self.backup_dir)
# we write /usr/sbin/policy-rc.d so it always exits with code policy_rc_d
try:
with open('/usr/sbin/policy-rc.d', 'w') as policy_rc_d:
policy_rc_d.write('#!/bin/sh\nexit %d\n' % self.m.params['policy_rc_d'])
os.chmod('/usr/sbin/policy-rc.d', 0o0755)
except Exception:
self.m.fail_json(msg="Failed to create or chmod /usr/sbin/policy-rc.d")
def __exit__(self, type, value, traceback):
"""
        This method will be called when we exit the context, after `apt-get …` is done
"""
# if policy_rc_d is null then we don't need to modify policy-rc.d
if self.m.params['policy_rc_d'] is None:
return
if self.backup_dir:
# if /usr/sbin/policy-rc.d already exists before the call to __enter__
# we restore it (from the backup done in __enter__)
try:
shutil.move(os.path.join(self.backup_dir, 'policy-rc.d'),
'/usr/sbin/policy-rc.d')
os.rmdir(self.backup_dir)
except Exception:
self.m.fail_json(msg="Fail to move back %s to /usr/sbin/policy-rc.d"
% os.path.join(self.backup_dir, 'policy-rc.d'))
else:
# if there wasn't a /usr/sbin/policy-rc.d file before the call to __enter__
# we just remove the file
try:
os.remove('/usr/sbin/policy-rc.d')
except Exception:
self.m.fail_json(msg="Fail to remove /usr/sbin/policy-rc.d (after package manipulation)")
def package_split(pkgspec):
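    # e.g. 'foo=1.0*' -> ('foo', '1.0*'); 'foo' -> ('foo', None)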
parts = pkgspec.split('=', 1)
version = None
if len(parts) > 1:
version = parts[1]
return parts[0], version
def package_versions(pkgname, pkg, pkg_cache):
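    # Return the set of all version strings apt knows for pkgname, supporting
    # both the modern python-apt API and the pre-0.7.9 fallback below.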
try:
versions = set(p.version for p in pkg.versions)
except AttributeError:
# assume older version of python-apt is installed
# apt.package.Package#versions require python-apt >= 0.7.9.
pkg_cache_list = (p for p in pkg_cache.Packages if p.Name == pkgname)
pkg_versions = (p.VersionList for p in pkg_cache_list)
versions = set(p.VerStr for p in itertools.chain(*pkg_versions))
return versions
def package_version_compare(version, other_version):
try:
return apt_pkg.version_compare(version, other_version)
except AttributeError:
return apt_pkg.VersionCompare(version, other_version)
def package_status(m, pkgname, version, cache, state):
try:
# get the package from the cache, as well as the
# low-level apt_pkg.Package object which contains
# state fields not directly accessible from the
# higher-level apt.package.Package object.
pkg = cache[pkgname]
ll_pkg = cache._cache[pkgname] # the low-level package object
except KeyError:
if state == 'install':
try:
provided_packages = cache.get_providing_packages(pkgname)
if provided_packages:
is_installed = False
upgradable = False
version_ok = False
# when virtual package providing only one package, look up status of target package
if cache.is_virtual_package(pkgname) and len(provided_packages) == 1:
package = provided_packages[0]
installed, version_ok, upgradable, has_files = package_status(m, package.name, version, cache, state='install')
if installed:
is_installed = True
return is_installed, version_ok, upgradable, False
m.fail_json(msg="No package matching '%s' is available" % pkgname)
except AttributeError:
# python-apt version too old to detect virtual packages
# mark as upgradable and let apt-get install deal with it
return False, False, True, False
else:
return False, False, False, False
try:
has_files = len(pkg.installed_files) > 0
except UnicodeDecodeError:
has_files = True
except AttributeError:
has_files = False # older python-apt cannot be used to determine non-purged
try:
package_is_installed = ll_pkg.current_state == apt_pkg.CURSTATE_INSTALLED
except AttributeError: # python-apt 0.7.X has very weak low-level object
try:
# might not be necessary as python-apt post-0.7.X should have current_state property
package_is_installed = pkg.is_installed
except AttributeError:
# assume older version of python-apt is installed
package_is_installed = pkg.isInstalled
version_is_installed = package_is_installed
if version:
versions = package_versions(pkgname, pkg, cache._cache)
avail_upgrades = fnmatch.filter(versions, version)
if package_is_installed:
try:
installed_version = pkg.installed.version
except AttributeError:
installed_version = pkg.installedVersion
# check if the version is matched as well
version_is_installed = fnmatch.fnmatch(installed_version, version)
# Only claim the package is upgradable if a candidate matches the version
package_is_upgradable = False
for candidate in avail_upgrades:
if package_version_compare(candidate, installed_version) > 0:
package_is_upgradable = True
break
else:
package_is_upgradable = bool(avail_upgrades)
else:
try:
package_is_upgradable = pkg.is_upgradable
except AttributeError:
# assume older version of python-apt is installed
package_is_upgradable = pkg.isUpgradable
return package_is_installed, version_is_installed, package_is_upgradable, has_files
def expand_dpkg_options(dpkg_options_compressed):
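    # e.g. 'force-confdef,force-confold' ->
    # '-o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold"'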
options_list = dpkg_options_compressed.split(',')
dpkg_options = ""
for dpkg_option in options_list:
dpkg_options = '%s -o "Dpkg::Options::=--%s"' \
% (dpkg_options, dpkg_option)
return dpkg_options.strip()
def expand_pkgspec_from_fnmatches(m, pkgspec, cache):
# Note: apt-get does implicit regex matching when an exact package name
# match is not found. Something like this:
# matches = [pkg.name for pkg in cache if re.match(pkgspec, pkg.name)]
# (Should also deal with the ':' for multiarch like the fnmatch code below)
#
# We have decided not to do similar implicit regex matching but might take
# a PR to add some sort of explicit regex matching:
# https://github.com/ansible/ansible-modules-core/issues/1258
new_pkgspec = []
if pkgspec:
for pkgspec_pattern in pkgspec:
pkgname_pattern, version = package_split(pkgspec_pattern)
# note that none of these chars is allowed in a (debian) pkgname
if frozenset('*?[]!').intersection(pkgname_pattern):
# handle multiarch pkgnames, the idea is that "apt*" should
# only select native packages. But "apt*:i386" should still work
if ":" not in pkgname_pattern:
# Filter the multiarch packages from the cache only once
try:
pkg_name_cache = _non_multiarch
except NameError:
pkg_name_cache = _non_multiarch = [pkg.name for pkg in cache if ':' not in pkg.name] # noqa: F841
else:
# Create a cache of pkg_names including multiarch only once
try:
pkg_name_cache = _all_pkg_names
except NameError:
pkg_name_cache = _all_pkg_names = [pkg.name for pkg in cache] # noqa: F841
matches = fnmatch.filter(pkg_name_cache, pkgname_pattern)
if not matches:
m.fail_json(msg="No package(s) matching '%s' available" % str(pkgname_pattern))
else:
new_pkgspec.extend(matches)
else:
# No wildcards in name
new_pkgspec.append(pkgspec_pattern)
return new_pkgspec
def parse_diff(output):
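    # Trim apt/aptitude output to the span between the solver's start marker
    # and the "N (packages )?upgraded" summary line, for diff mode reporting.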
diff = to_native(output).splitlines()
try:
# check for start marker from aptitude
diff_start = diff.index('Resolving dependencies...')
except ValueError:
try:
# check for start marker from apt-get
diff_start = diff.index('Reading state information...')
except ValueError:
# show everything
diff_start = -1
try:
# check for end marker line from both apt-get and aptitude
diff_end = next(i for i, item in enumerate(diff) if re.match('[0-9]+ (packages )?upgraded', item))
except StopIteration:
diff_end = len(diff)
diff_start += 1
diff_end += 1
return {'prepared': '\n'.join(diff[diff_start:diff_end])}
def mark_installed_manually(m, packages):
if not packages:
return
apt_mark_cmd_path = m.get_bin_path("apt-mark")
# https://github.com/ansible/ansible/issues/40531
if apt_mark_cmd_path is None:
m.warn("Could not find apt-mark binary, not marking package(s) as manually installed.")
return
cmd = "%s manual %s" % (apt_mark_cmd_path, ' '.join(packages))
rc, out, err = m.run_command(cmd)
if APT_MARK_INVALID_OP in err or APT_MARK_INVALID_OP_DEB6 in err:
cmd = "%s unmarkauto %s" % (apt_mark_cmd_path, ' '.join(packages))
rc, out, err = m.run_command(cmd)
if rc != 0:
m.fail_json(msg="'%s' failed: %s" % (cmd, err), stdout=out, stderr=err, rc=rc)
def install(m, pkgspec, cache, upgrade=False, default_release=None,
install_recommends=None, force=False,
dpkg_options=expand_dpkg_options(DPKG_OPTIONS),
build_dep=False, fixed=False, autoremove=False, fail_on_autoremove=False, only_upgrade=False,
allow_unauthenticated=False, allow_downgrade=False):
pkg_list = []
packages = ""
pkgspec = expand_pkgspec_from_fnmatches(m, pkgspec, cache)
package_names = []
for package in pkgspec:
if build_dep:
# Let apt decide what to install
pkg_list.append("'%s'" % package)
continue
name, version = package_split(package)
package_names.append(name)
installed, installed_version, upgradable, has_files = package_status(m, name, version, cache, state='install')
if (not installed and not only_upgrade) or (installed and not installed_version) or (upgrade and upgradable):
pkg_list.append("'%s'" % package)
if installed_version and upgradable and version:
# This happens when the package is installed, a newer version is
# available, and the version is a wildcard that matches both
#
# We do not apply the upgrade flag because we cannot specify both
# a version and state=latest. (This behaviour mirrors how apt
# treats a version with wildcard in the package)
pkg_list.append("'%s'" % package)
packages = ' '.join(pkg_list)
if packages:
if force:
force_yes = '--force-yes'
else:
force_yes = ''
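        # NOTE (editor, hedged sketch): the feature request in this record
        # (#65325) asks for one more boolean translated into a flag the same
        # way, e.g.:
        #     allow_change_held = '--allow-change-held-packages' if allow_change_held_packages else ''
        # The name and wiring here are illustrative, not the merged implementation.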
if m.check_mode:
check_arg = '--simulate'
else:
check_arg = ''
if autoremove:
autoremove = '--auto-remove'
else:
autoremove = ''
if fail_on_autoremove:
fail_on_autoremove = '--no-remove'
else:
fail_on_autoremove = ''
if only_upgrade:
only_upgrade = '--only-upgrade'
else:
only_upgrade = ''
if fixed:
fixed = '--fix-broken'
else:
fixed = ''
if build_dep:
cmd = "%s -y %s %s %s %s %s %s build-dep %s" % (APT_GET_CMD, dpkg_options, only_upgrade, fixed, force_yes, fail_on_autoremove, check_arg, packages)
else:
cmd = "%s -y %s %s %s %s %s %s %s install %s" % \
(APT_GET_CMD, dpkg_options, only_upgrade, fixed, force_yes, autoremove, fail_on_autoremove, check_arg, packages)
if default_release:
cmd += " -t '%s'" % (default_release,)
if install_recommends is False:
cmd += " -o APT::Install-Recommends=no"
elif install_recommends is True:
cmd += " -o APT::Install-Recommends=yes"
# install_recommends is None uses the OS default
if allow_unauthenticated:
cmd += " --allow-unauthenticated"
if allow_downgrade:
cmd += " --allow-downgrades"
with PolicyRcD(m):
rc, out, err = m.run_command(cmd)
if m._diff:
diff = parse_diff(out)
else:
diff = {}
status = True
changed = True
if build_dep:
changed = APT_GET_ZERO not in out
data = dict(changed=changed, stdout=out, stderr=err, diff=diff)
if rc:
status = False
data = dict(msg="'%s' failed: %s" % (cmd, err), stdout=out, stderr=err, rc=rc)
else:
status = True
data = dict(changed=False)
if not build_dep:
mark_installed_manually(m, package_names)
return (status, data)
def get_field_of_deb(m, deb_file, field="Version"):
cmd_dpkg = m.get_bin_path("dpkg", True)
cmd = cmd_dpkg + " --field %s %s" % (deb_file, field)
rc, stdout, stderr = m.run_command(cmd)
if rc != 0:
m.fail_json(msg="%s failed" % cmd, stdout=stdout, stderr=stderr)
return to_native(stdout).strip('\n')
def install_deb(m, debs, cache, force, fail_on_autoremove, install_recommends, allow_unauthenticated, allow_downgrade, dpkg_options):
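    # Install local or pre-fetched .deb files: resolve any missing
    # dependencies through apt first, then run `dpkg -i` on the packages.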
changed = False
deps_to_install = []
pkgs_to_install = []
for deb_file in debs.split(','):
try:
pkg = apt.debfile.DebPackage(deb_file, cache=apt.Cache())
pkg_name = get_field_of_deb(m, deb_file, "Package")
pkg_version = get_field_of_deb(m, deb_file, "Version")
if hasattr(apt_pkg, 'get_architectures') and len(apt_pkg.get_architectures()) > 1:
pkg_arch = get_field_of_deb(m, deb_file, "Architecture")
pkg_key = "%s:%s" % (pkg_name, pkg_arch)
else:
pkg_key = pkg_name
try:
installed_pkg = apt.Cache()[pkg_key]
installed_version = installed_pkg.installed.version
if package_version_compare(pkg_version, installed_version) == 0:
# Does not need to down-/upgrade, move on to next package
continue
except Exception:
# Must not be installed, continue with installation
pass
# Check if package is installable
if not pkg.check():
if force or ("later version" in pkg._failure_string and allow_downgrade):
pass
else:
m.fail_json(msg=pkg._failure_string)
# add any missing deps to the list of deps we need
# to install so they're all done in one shot
deps_to_install.extend(pkg.missing_deps)
except Exception as e:
m.fail_json(msg="Unable to install package: %s" % to_native(e))
# and add this deb to the list of packages to install
pkgs_to_install.append(deb_file)
# install the deps through apt
retvals = {}
if deps_to_install:
(success, retvals) = install(m=m, pkgspec=deps_to_install, cache=cache,
install_recommends=install_recommends,
fail_on_autoremove=fail_on_autoremove,
allow_unauthenticated=allow_unauthenticated,
allow_downgrade=allow_downgrade,
dpkg_options=expand_dpkg_options(dpkg_options))
if not success:
m.fail_json(**retvals)
changed = retvals.get('changed', False)
if pkgs_to_install:
options = ' '.join(["--%s" % x for x in dpkg_options.split(",")])
if m.check_mode:
options += " --simulate"
if force:
options += " --force-all"
cmd = "dpkg %s -i %s" % (options, " ".join(pkgs_to_install))
with PolicyRcD(m):
rc, out, err = m.run_command(cmd)
if "stdout" in retvals:
stdout = retvals["stdout"] + out
else:
stdout = out
if "diff" in retvals:
diff = retvals["diff"]
if 'prepared' in diff:
diff['prepared'] += '\n\n' + out
else:
diff = parse_diff(out)
if "stderr" in retvals:
stderr = retvals["stderr"] + err
else:
stderr = err
if rc == 0:
m.exit_json(changed=True, stdout=stdout, stderr=stderr, diff=diff)
else:
m.fail_json(msg="%s failed" % cmd, stdout=stdout, stderr=stderr)
else:
m.exit_json(changed=changed, stdout=retvals.get('stdout', ''), stderr=retvals.get('stderr', ''), diff=retvals.get('diff', ''))
def remove(m, pkgspec, cache, purge=False, force=False,
dpkg_options=expand_dpkg_options(DPKG_OPTIONS), autoremove=False):
pkg_list = []
pkgspec = expand_pkgspec_from_fnmatches(m, pkgspec, cache)
for package in pkgspec:
name, version = package_split(package)
installed, installed_version, upgradable, has_files = package_status(m, name, version, cache, state='remove')
if installed_version or (has_files and purge):
pkg_list.append("'%s'" % package)
packages = ' '.join(pkg_list)
if not packages:
m.exit_json(changed=False)
else:
if force:
force_yes = '--force-yes'
else:
force_yes = ''
if purge:
purge = '--purge'
else:
purge = ''
if autoremove:
autoremove = '--auto-remove'
else:
autoremove = ''
if m.check_mode:
check_arg = '--simulate'
else:
check_arg = ''
cmd = "%s -q -y %s %s %s %s %s remove %s" % (APT_GET_CMD, dpkg_options, purge, force_yes, autoremove, check_arg, packages)
with PolicyRcD(m):
rc, out, err = m.run_command(cmd)
if m._diff:
diff = parse_diff(out)
else:
diff = {}
if rc:
m.fail_json(msg="'apt-get remove %s' failed: %s" % (packages, err), stdout=out, stderr=err, rc=rc)
m.exit_json(changed=True, stdout=out, stderr=err, diff=diff)
def cleanup(m, purge=False, force=False, operation=None,
dpkg_options=expand_dpkg_options(DPKG_OPTIONS)):
if operation not in frozenset(['autoremove', 'autoclean']):
raise AssertionError('Expected "autoremove" or "autoclean" cleanup operation, got %s' % operation)
if force:
force_yes = '--force-yes'
else:
force_yes = ''
if purge:
purge = '--purge'
else:
purge = ''
if m.check_mode:
check_arg = '--simulate'
else:
check_arg = ''
cmd = "%s -y %s %s %s %s %s" % (APT_GET_CMD, dpkg_options, purge, force_yes, operation, check_arg)
with PolicyRcD(m):
rc, out, err = m.run_command(cmd)
if m._diff:
diff = parse_diff(out)
else:
diff = {}
if rc:
m.fail_json(msg="'apt-get %s' failed: %s" % (operation, err), stdout=out, stderr=err, rc=rc)
changed = CLEAN_OP_CHANGED_STR[operation] in out
m.exit_json(changed=changed, stdout=out, stderr=err, diff=diff)
def upgrade(m, mode="yes", force=False, default_release=None,
use_apt_get=False,
dpkg_options=expand_dpkg_options(DPKG_OPTIONS), autoremove=False, fail_on_autoremove=False,
allow_unauthenticated=False,
allow_downgrade=False,
):
if autoremove:
autoremove = '--auto-remove'
else:
autoremove = ''
if m.check_mode:
check_arg = '--simulate'
else:
check_arg = ''
apt_cmd = None
prompt_regex = None
if mode == "dist" or (mode == "full" and use_apt_get):
# apt-get dist-upgrade
apt_cmd = APT_GET_CMD
upgrade_command = "dist-upgrade %s" % (autoremove)
elif mode == "full" and not use_apt_get:
# aptitude full-upgrade
apt_cmd = APTITUDE_CMD
upgrade_command = "full-upgrade"
else:
if use_apt_get:
apt_cmd = APT_GET_CMD
upgrade_command = "upgrade --with-new-pkgs %s" % (autoremove)
else:
# aptitude safe-upgrade # mode=yes # default
apt_cmd = APTITUDE_CMD
upgrade_command = "safe-upgrade"
prompt_regex = r"(^Do you want to ignore this warning and proceed anyway\?|^\*\*\*.*\[default=.*\])"
if force:
if apt_cmd == APT_GET_CMD:
force_yes = '--force-yes'
else:
force_yes = '--assume-yes --allow-untrusted'
else:
force_yes = ''
if fail_on_autoremove:
fail_on_autoremove = '--no-remove'
else:
fail_on_autoremove = ''
allow_unauthenticated = '--allow-unauthenticated' if allow_unauthenticated else ''
allow_downgrade = '--allow-downgrades' if allow_downgrade else ''
if apt_cmd is None:
if use_apt_get:
apt_cmd = APT_GET_CMD
else:
m.fail_json(msg="Unable to find APTITUDE in path. Please make sure "
"to have APTITUDE in path or use 'force_apt_get=True'")
apt_cmd_path = m.get_bin_path(apt_cmd, required=True)
cmd = '%s -y %s %s %s %s %s %s %s' % (
apt_cmd_path,
dpkg_options,
force_yes,
fail_on_autoremove,
allow_unauthenticated,
allow_downgrade,
check_arg,
upgrade_command,
)
if default_release:
cmd += " -t '%s'" % (default_release,)
with PolicyRcD(m):
rc, out, err = m.run_command(cmd, prompt_regex=prompt_regex)
if m._diff:
diff = parse_diff(out)
else:
diff = {}
if rc:
m.fail_json(msg="'%s %s' failed: %s" % (apt_cmd, upgrade_command, err), stdout=out, rc=rc)
if (apt_cmd == APT_GET_CMD and APT_GET_ZERO in out) or (apt_cmd == APTITUDE_CMD and APTITUDE_ZERO in out):
m.exit_json(changed=False, msg=out, stdout=out, stderr=err)
m.exit_json(changed=True, msg=out, stdout=out, stderr=err, diff=diff)
def get_cache_mtime():
"""Return mtime of a valid apt cache file.
Stat the apt cache file and if no cache file is found return 0
:returns: ``int``
"""
cache_time = 0
if os.path.exists(APT_UPDATE_SUCCESS_STAMP_PATH):
cache_time = os.stat(APT_UPDATE_SUCCESS_STAMP_PATH).st_mtime
elif os.path.exists(APT_LISTS_PATH):
cache_time = os.stat(APT_LISTS_PATH).st_mtime
return cache_time
def get_updated_cache_time():
"""Return the mtime time stamp and the updated cache time.
Always retrieve the mtime of the apt cache or set the `cache_mtime`
variable to 0
:returns: ``tuple``
"""
cache_mtime = get_cache_mtime()
mtimestamp = datetime.datetime.fromtimestamp(cache_mtime)
updated_cache_time = int(time.mktime(mtimestamp.timetuple()))
return mtimestamp, updated_cache_time
# https://github.com/ansible/ansible-modules-core/issues/2951
def get_cache(module):
'''Attempt to get the cache object and update till it works'''
cache = None
try:
cache = apt.Cache()
except SystemError as e:
if '/var/lib/apt/lists/' in to_native(e).lower():
# update cache until files are fixed or retries exceeded
retries = 0
while retries < 2:
(rc, so, se) = module.run_command(['apt-get', 'update', '-q'])
retries += 1
if rc == 0:
break
if rc != 0:
module.fail_json(msg='Updating the cache to correct corrupt package lists failed:\n%s\n%s' % (to_native(e), so + se), rc=rc)
# try again
cache = apt.Cache()
else:
module.fail_json(msg=to_native(e))
return cache
def main():
module = AnsibleModule(
argument_spec=dict(
state=dict(type='str', default='present', choices=['absent', 'build-dep', 'fixed', 'latest', 'present']),
update_cache=dict(type='bool', aliases=['update-cache']),
update_cache_retries=dict(type='int', default=5),
update_cache_retry_max_delay=dict(type='int', default=12),
cache_valid_time=dict(type='int', default=0),
purge=dict(type='bool', default=False),
package=dict(type='list', elements='str', aliases=['pkg', 'name']),
deb=dict(type='path'),
default_release=dict(type='str', aliases=['default-release']),
install_recommends=dict(type='bool', aliases=['install-recommends']),
force=dict(type='bool', default=False),
upgrade=dict(type='str', choices=['dist', 'full', 'no', 'safe', 'yes'], default='no'),
dpkg_options=dict(type='str', default=DPKG_OPTIONS),
autoremove=dict(type='bool', default=False),
autoclean=dict(type='bool', default=False),
fail_on_autoremove=dict(type='bool', default=False),
policy_rc_d=dict(type='int', default=None),
only_upgrade=dict(type='bool', default=False),
force_apt_get=dict(type='bool', default=False),
allow_unauthenticated=dict(type='bool', default=False, aliases=['allow-unauthenticated']),
allow_downgrade=dict(type='bool', default=False, aliases=['allow-downgrade', 'allow_downgrades', 'allow-downgrades']),
lock_timeout=dict(type='int', default=60),
),
mutually_exclusive=[['deb', 'package', 'upgrade']],
required_one_of=[['autoremove', 'deb', 'package', 'update_cache', 'upgrade']],
supports_check_mode=True,
)
# We screenscrape apt-get and aptitude output for information so we need
# to make sure we use the best parsable locale when running commands
# also set apt specific vars for desired behaviour
locale = get_best_parsable_locale(module)
# APT related constants
APT_ENV_VARS = dict(
DEBIAN_FRONTEND='noninteractive',
DEBIAN_PRIORITY='critical',
LANG=locale,
LC_ALL=locale,
LC_MESSAGES=locale,
LC_CTYPE=locale,
)
module.run_command_environ_update = APT_ENV_VARS
if not HAS_PYTHON_APT:
# This interpreter can't see the apt Python library- we'll do the following to try and fix that:
# 1) look in common locations for system-owned interpreters that can see it; if we find one, respawn under it
# 2) finding none, try to install a matching python-apt package for the current interpreter version;
# we limit to the current interpreter version to try and avoid installing a whole other Python just
# for apt support
# 3) if we installed a support package, try to respawn under what we think is the right interpreter (could be
# the current interpreter again, but we'll let it respawn anyway for simplicity)
# 4) if still not working, return an error and give up (some corner cases not covered, but this shouldn't be
# made any more complex than it already is to try and cover more, eg, custom interpreters taking over
# system locations)
apt_pkg_name = 'python3-apt' if PY3 else 'python-apt'
if has_respawned():
# this shouldn't be possible; short-circuit early if it happens...
module.fail_json(msg="{0} must be installed and visible from {1}.".format(apt_pkg_name, sys.executable))
interpreters = ['/usr/bin/python3', '/usr/bin/python2', '/usr/bin/python']
interpreter = probe_interpreters_for_module(interpreters, 'apt')
if interpreter:
# found the Python bindings; respawn this module under the interpreter where we found them
respawn_module(interpreter)
# this is the end of the line for this process, it will exit here once the respawned module has completed
# don't make changes if we're in check_mode
if module.check_mode:
module.fail_json(msg="%s must be installed to use check mode. "
"If run normally this module can auto-install it." % apt_pkg_name)
# We skip the cache update when auto-installing the dependency if the
# user explicitly declared it with update_cache=no.
if module.params.get('update_cache') is False:
module.warn("Auto-installing missing dependency without updating cache: %s" % apt_pkg_name)
else:
module.warn("Updating cache and auto-installing missing dependency: %s" % apt_pkg_name)
module.run_command(['apt-get', 'update'], check_rc=True)
# try to install the apt Python binding
module.run_command(['apt-get', 'install', '--no-install-recommends', apt_pkg_name, '-y', '-q'], check_rc=True)
# try again to find the bindings in common places
interpreter = probe_interpreters_for_module(interpreters, 'apt')
if interpreter:
# found the Python bindings; respawn this module under the interpreter where we found them
# NB: respawn is somewhat wasteful if it's this interpreter, but simplifies the code
respawn_module(interpreter)
# this is the end of the line for this process, it will exit here once the respawned module has completed
else:
# we've done all we can do; just tell the user it's busted and get out
module.fail_json(msg="{0} must be installed and visible from {1}.".format(apt_pkg_name, sys.executable))
global APTITUDE_CMD
APTITUDE_CMD = module.get_bin_path("aptitude", False)
global APT_GET_CMD
APT_GET_CMD = module.get_bin_path("apt-get")
p = module.params
if p['upgrade'] == 'no':
p['upgrade'] = None
use_apt_get = p['force_apt_get']
if not use_apt_get and not APTITUDE_CMD:
use_apt_get = True
updated_cache = False
updated_cache_time = 0
install_recommends = p['install_recommends']
allow_unauthenticated = p['allow_unauthenticated']
allow_downgrade = p['allow_downgrade']
dpkg_options = expand_dpkg_options(p['dpkg_options'])
autoremove = p['autoremove']
fail_on_autoremove = p['fail_on_autoremove']
autoclean = p['autoclean']
# deadline after which we stop retrying to acquire the apt lock
deadline = time.time() + p['lock_timeout']
# keep running on lock issues unless timeout or resolution is hit.
while True:
# Get the cache object, this has 3 retries built in
cache = get_cache(module)
try:
if p['default_release']:
try:
apt_pkg.config['APT::Default-Release'] = p['default_release']
except AttributeError:
apt_pkg.Config['APT::Default-Release'] = p['default_release']
# reopen cache w/ modified config
cache.open(progress=None)
mtimestamp, updated_cache_time = get_updated_cache_time()
# cache_valid_time defaults to 0, which makes the cache always count as
# stale, so it is refreshed whenever update_cache is set to true
updated_cache = False
if p['update_cache'] or p['cache_valid_time']:
now = datetime.datetime.now()
tdelta = datetime.timedelta(seconds=p['cache_valid_time'])
if not mtimestamp + tdelta >= now:
# Retry to update the cache with exponential backoff
err = ''
update_cache_retries = module.params.get('update_cache_retries')
update_cache_retry_max_delay = module.params.get('update_cache_retry_max_delay')
randomize = random.randint(0, 1000) / 1000.0
for retry in range(update_cache_retries):
try:
cache.update()
break
except apt.cache.FetchFailedException as e:
err = to_native(e)
# Use exponential backoff plus a little bit of randomness
delay = 2 ** retry + randomize
if delay > update_cache_retry_max_delay:
delay = update_cache_retry_max_delay + randomize
time.sleep(delay)
else:
module.fail_json(msg='Failed to update apt cache: %s' % (err if err else 'unknown reason'))
cache.open(progress=None)
mtimestamp, post_cache_update_time = get_updated_cache_time()
if updated_cache_time != post_cache_update_time:
updated_cache = True
updated_cache_time = post_cache_update_time
# If there is nothing else to do, exit. This will report the changed
# state based on whether the cache was updated.
if not p['package'] and not p['upgrade'] and not p['deb']:
module.exit_json(
changed=updated_cache,
cache_updated=updated_cache,
cache_update_time=updated_cache_time
)
force_yes = p['force']
if p['upgrade']:
upgrade(
module,
p['upgrade'],
force_yes,
p['default_release'],
use_apt_get,
dpkg_options,
autoremove,
fail_on_autoremove,
allow_unauthenticated,
allow_downgrade
)
if p['deb']:
if p['state'] != 'present':
module.fail_json(msg="deb only supports state=present")
if '://' in p['deb']:
p['deb'] = fetch_file(module, p['deb'])
install_deb(module, p['deb'], cache,
install_recommends=install_recommends,
allow_unauthenticated=allow_unauthenticated,
allow_downgrade=allow_downgrade,
force=force_yes, fail_on_autoremove=fail_on_autoremove, dpkg_options=p['dpkg_options'])
unfiltered_packages = p['package'] or ()
packages = [package.strip() for package in unfiltered_packages if package != '*']
all_installed = '*' in unfiltered_packages
latest = p['state'] == 'latest'
if latest and all_installed:
if packages:
module.fail_json(msg='unable to install additional packages when upgrading all installed packages')
upgrade(
module,
'yes',
force_yes,
p['default_release'],
use_apt_get,
dpkg_options,
autoremove,
fail_on_autoremove,
allow_unauthenticated,
allow_downgrade
)
if packages:
for package in packages:
if package.count('=') > 1:
module.fail_json(msg="invalid package spec: %s" % package)
if latest and '=' in package:
module.fail_json(msg='version number inconsistent with state=latest: %s' % package)
if not packages:
if autoclean:
cleanup(module, p['purge'], force=force_yes, operation='autoclean', dpkg_options=dpkg_options)
if autoremove:
cleanup(module, p['purge'], force=force_yes, operation='autoremove', dpkg_options=dpkg_options)
if p['state'] in ('latest', 'present', 'build-dep', 'fixed'):
state_upgrade = False
state_builddep = False
state_fixed = False
if p['state'] == 'latest':
state_upgrade = True
if p['state'] == 'build-dep':
state_builddep = True
if p['state'] == 'fixed':
state_fixed = True
success, retvals = install(
module,
packages,
cache,
upgrade=state_upgrade,
default_release=p['default_release'],
install_recommends=install_recommends,
force=force_yes,
dpkg_options=dpkg_options,
build_dep=state_builddep,
fixed=state_fixed,
autoremove=autoremove,
fail_on_autoremove=fail_on_autoremove,
only_upgrade=p['only_upgrade'],
allow_unauthenticated=allow_unauthenticated,
allow_downgrade=allow_downgrade
)
# Store if the cache has been updated
retvals['cache_updated'] = updated_cache
# Store when the update time was last
retvals['cache_update_time'] = updated_cache_time
if success:
module.exit_json(**retvals)
else:
module.fail_json(**retvals)
elif p['state'] == 'absent':
remove(module, packages, cache, p['purge'], force=force_yes, dpkg_options=dpkg_options, autoremove=autoremove)
except apt.cache.LockFailedException as lockFailedException:
if time.time() < deadline:
continue
module.fail_json(msg="Failed to lock apt for exclusive operation: %s" % lockFailedException)
except apt.cache.FetchFailedException as fetchFailedException:
module.fail_json(msg="Could not fetch updated apt files: %s" % fetchFailedException)
# got here without raising an exception or exiting; this should never happen
module.fail_json(msg='Unexpected code path taken, we really should have exited before, this is a bug')
if __name__ == "__main__":
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,325 |
Add 'allow_change_held_packages' option to apt module.
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
When installing or updating an apt package that is already held, the `apt` module fails.
This forces the use of the `force: yes` option, which is too broad and already deprecated by apt.
Add an `allow_change_held_packages` option to the `apt` module that uses apt's `--allow-change-held-packages` option.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
apt
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Install package with specific version and hold
block:
- name: "Install {{ package_name }} version {{ version }}"
apt:
name: "{{ package_name }}={{ version }}"
state: present
(force|allow_change_held_packages): yes
- name: Hold {{ package_name }} package
dpkg_selections:
name: "{{ package_name }}"
selection: hold
```
This is not idempotent without `force: yes`; the second run will fail.
The `allow_change_held_packages` option would avoid problems in this situation.
<!--- HINT: You can also paste gist.github.com links for larger files -->
Related to: https://github.com/ansible/ansible/issues/15394
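For illustration, a hedged sketch of how the proposed option could look in a task once implemented (the option name comes from the summary above; exact semantics depend on the merged PR):
```yaml
- name: Upgrade a held package without dropping the hold (sketch)
  apt:
    name: "{{ package_name }}={{ version }}"
    state: present
    allow_change_held_packages: yes  # proposed option; maps to apt's --allow-change-held-packages
```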
|
https://github.com/ansible/ansible/issues/65325
|
https://github.com/ansible/ansible/pull/73629
|
0afc0b8506cc80897073606157c984c9176f545c
|
cd2d8b77fe699f7cc8529dfa296b0d2894630599
| 2019-11-27T11:45:18Z |
python
| 2021-11-29T15:31:54Z |
test/integration/targets/apt/defaults/main.yml
|
apt_foreign_arch: i386
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,325 |
Add 'allow_change_held_packages' option to apt module.
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
When installing or updating an apt package that is already held, the `apt` module fails.
This forces the use of the `force: yes` option, which is too broad and already deprecated by apt.
Add an `allow_change_held_packages` option to the `apt` module that uses apt's `--allow-change-held-packages` option.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
apt
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Install package with specific version and hold
block:
- name: "Install {{ package_name }} version {{ version }}"
apt:
name: "{{ package_name }}={{ version }}"
state: present
(force|allow_change_held_packages): yes
- name: Hold {{ package_name }} package
dpkg_selections:
name: "{{ package_name }}"
selection: hold
```
This is not idempotent without `force: yes`; the second run will fail.
The `allow_change_held_packages` option would avoid problems in this situation.
<!--- HINT: You can also paste gist.github.com links for larger files -->
Related to: https://github.com/ansible/ansible/issues/15394
|
https://github.com/ansible/ansible/issues/65325
|
https://github.com/ansible/ansible/pull/73629
|
0afc0b8506cc80897073606157c984c9176f545c
|
cd2d8b77fe699f7cc8529dfa296b0d2894630599
| 2019-11-27T11:45:18Z |
python
| 2021-11-29T15:31:54Z |
test/integration/targets/apt/handlers/main.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,325 |
Add 'allow_change_held_packages' option to apt module.
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
When installing or updating an apt package that is already held, the `apt` module fails.
This forces the use of the `force: yes` option, which is too broad and already deprecated by apt.
Add an `allow_change_held_packages` option to the `apt` module that uses apt's `--allow-change-held-packages` option.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
apt
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Install package with specific version and hold
block:
- name: "Install {{ package_name }} version {{ version }}"
apt:
name: "{{ package_name }}={{ version }}"
state: present
(force|allow_change_held_packages): yes
- name: Hold {{ package_name }} package
dpkg_selections:
name: "{{ package_name }}"
selection: hold
```
This is not idempotent without `force: yes`; the second run will fail.
The `allow_change_held_packages` option would avoid problems in this situation.
<!--- HINT: You can also paste gist.github.com links for larger files -->
Related to: https://github.com/ansible/ansible/issues/15394
|
https://github.com/ansible/ansible/issues/65325
|
https://github.com/ansible/ansible/pull/73629
|
0afc0b8506cc80897073606157c984c9176f545c
|
cd2d8b77fe699f7cc8529dfa296b0d2894630599
| 2019-11-27T11:45:18Z |
python
| 2021-11-29T15:31:54Z |
test/integration/targets/apt/tasks/apt.yml
|
- name: use Debian mirror
set_fact:
distro_mirror: http://ftp.debian.org/debian
when: ansible_distribution == 'Debian'
- name: use Ubuntu mirror
set_fact:
distro_mirror: http://archive.ubuntu.com/ubuntu
when: ansible_distribution == 'Ubuntu'
# UNINSTALL 'python-apt'
# The `apt` module has the smarts to auto-install `python-apt(3)`. To test, we
# will first uninstall `python-apt`.
- name: uninstall python-apt with apt
apt:
pkg: [python-apt, python3-apt]
state: absent
purge: yes
register: apt_result
# In check mode, auto-install of `python-apt` must fail
- name: test fail uninstall hello without required apt deps in check mode
apt:
pkg: hello
state: absent
purge: yes
register: apt_result
check_mode: yes
ignore_errors: yes
- name: verify fail uninstall hello without required apt deps in check mode
assert:
that:
- apt_result is failed
- '"If run normally this module can auto-install it." in apt_result.msg'
- name: check with dpkg
shell: dpkg -s python-apt python3-apt
register: dpkg_result
ignore_errors: true
# UNINSTALL 'hello'
# With 'python-apt' uninstalled, the first call to 'apt' should install
# python-apt without updating the cache.
- name: uninstall hello with apt and prevent updating the cache
apt:
pkg: hello
state: absent
purge: yes
update_cache: no
register: apt_result
- name: check hello with dpkg
shell: dpkg-query -l hello
failed_when: False
register: dpkg_result
- name: verify uninstall hello with apt and prevent updating the cache
assert:
that:
- "'changed' in apt_result"
- apt_result is not changed
- "dpkg_result.rc == 1"
- name: Test installing fnmatch package
apt:
name:
- hel?o
- he?lo
register: apt_install_fnmatch
- name: Test uninstalling fnmatch package
apt:
name:
- hel?o
- he?lo
state: absent
register: apt_uninstall_fnmatch
- name: verify fnmatch
assert:
that:
- apt_install_fnmatch is changed
- apt_uninstall_fnmatch is changed
- name: Test update_cache 1
apt:
update_cache: true
cache_valid_time: 10
register: apt_update_cache_1
- name: Test update_cache 2
apt:
update_cache: true
cache_valid_time: 10
register: apt_update_cache_2
- name: verify update_cache
assert:
that:
- apt_update_cache_1 is changed
- apt_update_cache_2 is not changed
- name: uninstall apt bindings with apt again
apt:
pkg: [python-apt, python3-apt]
state: absent
purge: yes
# UNINSTALL 'hello'
# With 'python-apt' uninstalled, the first call to 'apt' should install
# python-apt.
- name: uninstall hello with apt
apt: pkg=hello state=absent purge=yes
register: apt_result
until: apt_result is success
- name: check hello with dpkg
shell: dpkg-query -l hello
failed_when: False
register: dpkg_result
- name: verify uninstallation of hello
assert:
that:
- "'changed' in apt_result"
- apt_result is not changed
- "dpkg_result.rc == 1"
# UNINSTALL AGAIN
- name: uninstall hello with apt
apt: pkg=hello state=absent purge=yes
register: apt_result
- name: verify no change on re-uninstall
assert:
that:
- "not apt_result.changed"
# INSTALL
- name: install hello with apt
apt: name=hello state=present
register: apt_result
- name: check hello with dpkg
shell: dpkg-query -l hello
failed_when: False
register: dpkg_result
- name: verify installation of hello
assert:
that:
- "apt_result.changed"
- "dpkg_result.rc == 0"
- name: verify apt module outputs
assert:
that:
- "'changed' in apt_result"
- "'stderr' in apt_result"
- "'stdout' in apt_result"
- "'stdout_lines' in apt_result"
# INSTALL AGAIN
- name: install hello with apt
apt: name=hello state=present
register: apt_result
- name: verify no change on re-install
assert:
that:
- "not apt_result.changed"
# UNINSTALL AGAIN
- name: uninstall hello with apt
apt: pkg=hello state=absent purge=yes
register: apt_result
# INSTALL WITH VERSION WILDCARD
- name: install hello with apt
apt: name=hello=2.* state=present
register: apt_result
- name: check hello with wildcard with dpkg
shell: dpkg-query -l hello
failed_when: False
register: dpkg_result
- name: verify installation of hello
assert:
that:
- "apt_result.changed"
- "dpkg_result.rc == 0"
- name: check hello version
shell: dpkg -s hello | grep Version | awk '{print $2}'
register: hello_version
- name: check hello architecture
shell: dpkg -s hello | grep Architecture | awk '{print $2}'
register: hello_architecture
- name: uninstall hello with apt
apt: pkg=hello state=absent purge=yes
# INSTALL WITHOUT REMOVALS
- name: Install hello, that conflicts with hello-traditional
apt:
pkg: hello
state: present
update_cache: no
- name: check hello
shell: dpkg-query -l hello
register: dpkg_result
- name: verify installation of hello
assert:
that:
- "apt_result.changed"
- "dpkg_result.rc == 0"
- name: Try installing hello-traditional, that conflicts with hello
apt:
pkg: hello-traditional
state: present
fail_on_autoremove: yes
ignore_errors: yes
register: apt_result
- name: verify failure of installing hello-traditional, because installing it would require removing hello
assert:
that:
- apt_result is failed
- '"Packages need to be removed but remove is disabled." in apt_result.msg'
- name: uninstall hello with apt
apt:
pkg: hello
state: absent
purge: yes
update_cache: no
- name: install deb file
apt: deb="/var/cache/apt/archives/hello_{{ hello_version.stdout }}_{{ hello_architecture.stdout }}.deb"
register: apt_initial
- name: install deb file again
apt: deb="/var/cache/apt/archives/hello_{{ hello_version.stdout }}_{{ hello_architecture.stdout }}.deb"
register: apt_secondary
- name: verify installation of hello
assert:
that:
- "apt_initial.changed"
- "not apt_secondary.changed"
- name: uninstall hello with apt
apt: pkg=hello state=absent purge=yes
- name: install deb file from URL
apt: deb="{{ distro_mirror }}/pool/main/h/hello/hello_{{ hello_version.stdout }}_{{ hello_architecture.stdout }}.deb"
register: apt_url
- name: verify installation of hello
assert:
that:
- "apt_url.changed"
- name: uninstall hello with apt
apt: pkg=hello state=absent purge=yes
- name: force install of deb
apt: deb="/var/cache/apt/archives/hello_{{ hello_version.stdout }}_{{ hello_architecture.stdout }}.deb" force=true
register: dpkg_force
- name: verify installation of hello
assert:
that:
- "dpkg_force.changed"
# NEGATIVE: upgrade all packages while providing additional packages to install
- name: provide additional packages to install while upgrading all installed packages
apt: pkg=*,test state=latest
ignore_errors: True
register: apt_result
- name: verify failure of upgrade packages and install
assert:
that:
- "not apt_result.changed"
- "apt_result.failed"
- name: autoclean during install
apt: pkg=hello state=present autoclean=yes
- name: undo previous install
apt: pkg=hello state=absent
# https://github.com/ansible/ansible/issues/23155
- name: create a repo file
copy:
dest: /etc/apt/sources.list.d/non-existing.list
content: deb http://ppa.launchpad.net/non-existing trusty main
- name: test for sane error message
apt:
update_cache: yes
register: apt_result
ignore_errors: yes
- name: verify sane error message
assert:
that:
- "'Failed to fetch' in apt_result['msg']"
- "'403' in apt_result['msg']"
- name: Clean up
file:
name: /etc/apt/sources.list.d/non-existing.list
state: absent
# https://github.com/ansible/ansible/issues/28907
- name: Install parent package
apt:
name: libcaca-dev
- name: Install child package
apt:
name: libslang2-dev
- shell: apt-mark showmanual | grep libcaca-dev
ignore_errors: yes
register: parent_output
- name: Check that parent package is marked as installed manually
assert:
that:
- "'libcaca-dev' in parent_output.stdout"
- shell: apt-mark showmanual | grep libslang2-dev
ignore_errors: yes
register: child_output
- name: Check that child package is marked as installed manually
assert:
that:
- "'libslang2-dev' in child_output.stdout"
- name: Clean up
apt:
name: "{{ pkgs }}"
state: absent
vars:
pkgs:
- libcaca-dev
- libslang2-dev
# https://github.com/ansible/ansible/issues/38995
- name: build-dep for a package
apt:
name: tree
state: build-dep
register: apt_result
- name: Check the result
assert:
that:
- apt_result is changed
- name: build-dep for a package (idempotency)
apt:
name: tree
state: build-dep
register: apt_result
- name: Check the result
assert:
that:
- apt_result is not changed
# check policy_rc_d parameter
- name: Install unscd but forbid service start
apt:
name: unscd
policy_rc_d: 101
- name: Stop unscd service
service:
name: unscd
state: stopped
register: service_unscd_stop
- name: unscd service shouldn't have been stopped by previous task
assert:
that: service_unscd_stop is not changed
- name: Uninstall unscd
apt:
name: unscd
policy_rc_d: 101
- name: Create incorrect /usr/sbin/policy-rc.d
copy:
dest: /usr/sbin/policy-rc.d
content: apt integration test
mode: 0755
- name: Install unscd but forbid service start
apt:
name: unscd
policy_rc_d: 101
- name: Stop unscd service
service:
name: unscd
state: stopped
register: service_unscd_stop
- name: unscd service shouldn't have been stopped by previous task
assert:
that: service_unscd_stop is not changed
- name: Create incorrect /usr/sbin/policy-rc.d
copy:
dest: /usr/sbin/policy-rc.d
content: apt integration test
mode: 0755
register: policy_rc_d
- name: Check if /usr/sbin/policy-rc.d was correctly backed-up during unscd install
assert:
that: policy_rc_d is not changed
- name: Delete /usr/sbin/policy-rc.d
file:
path: /usr/sbin/policy-rc.d
state: absent
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,334 |
to_json filter doesn't decrypt vaulted variables
|
### Summary
Using docker_container_exec's stdin parameter doesn't decrypt vaulted variables.
I was told in https://github.com/ansible-collections/community.docker/issues/241 that it's most likely an issue with Ansible, not the collection.
### Issue Type
Bug Report
### Component Name
docker_container_exec
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.6]
python version = 3.9.7 (default, Sep 10 2021, 14:59:43) [GCC 11.2.0]
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
ANSIBLE_NOCOWS(/home/user/.ansible.cfg) = True
CACHE_PLUGIN(/home/user/.ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/home/user/.ansible.cfg) = /tmp/ansible-cachedir
CACHE_PLUGIN_TIMEOUT(/home/user/.ansible.cfg) = 5
DEFAULT_GATHERING(/home/user/.ansible.cfg) = smart
DEPRECATION_WARNINGS(/home/user/.ansible.cfg) = False
DIFF_ALWAYS(/home/user/.ansible.cfg) = True
HOST_KEY_CHECKING(/home/user/.ansible.cfg) = False
```
### OS / Environment
- Target: Ubuntu 20.04.3
### Steps to Reproduce
```yaml
- name: Upload key
community.docker.docker_container_exec:
argv:
- tee
- secret.json
container: '{{ docker_id }}'
stdin: '{{ secret_data | to_json }}'
```
Var file (vaulted value is embedded):
```
secret_data:
priv_key:
value: !vault |
$ANSIBLE_VAULT;1.1;AES256
1161...
....
pub_key:
value: some_public_key
```
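For illustration, a minimal reproduction that avoids the docker collection (a sketch; any rendering of `secret_data | to_json` exercises the same filter):
```yaml
- name: Render secret_data as JSON locally (sketch)
  ansible.builtin.copy:
    content: "{{ secret_data | to_json }}"
    dest: /tmp/secret.json
```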
### Expected Results
Vaulted variable is properly decrypted.
I've tested encryption with:
```
ansible localhost -m ansible.builtin.debug -a var=secret_data -e @all/vars.yml
```
and the syntax is valid (the variable is decrypted as expected), but not when using the docker collection.
### Actual Results
```console
The JSON file that was created contains the Ansible Vault data, not the decrypted string, as shown below:
"value": {"__ansible_vault": "$ANSIBLE_VAULT;1.1;AES256;\n1161...
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76334
|
https://github.com/ansible/ansible/pull/76349
|
de11a1ce781482e280c55d99ce571a130ae474bc
|
480d47d071cc6ed08b80209a3fa75b64311ea648
| 2021-11-20T22:48:54Z |
python
| 2021-12-02T16:02:38Z |
changelogs/fragments/json_filter_fix.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,334 |
to_json filter doesn't decrypt vaulted variables
|
### Summary
Using docker_container_exec's stdin parameter doesn't decrypt vaulted variables.
I was told in https://github.com/ansible-collections/community.docker/issues/241 that it's most likely an issue with Ansible, not the collection.
### Issue Type
Bug Report
### Component Name
docker_container_exec
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.6]
python version = 3.9.7 (default, Sep 10 2021, 14:59:43) [GCC 11.2.0]
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
ANSIBLE_NOCOWS(/home/user/.ansible.cfg) = True
CACHE_PLUGIN(/home/user/.ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/home/user/.ansible.cfg) = /tmp/ansible-cachedir
CACHE_PLUGIN_TIMEOUT(/home/user/.ansible.cfg) = 5
DEFAULT_GATHERING(/home/user/.ansible.cfg) = smart
DEPRECATION_WARNINGS(/home/user/.ansible.cfg) = False
DIFF_ALWAYS(/home/user/.ansible.cfg) = True
HOST_KEY_CHECKING(/home/user/.ansible.cfg) = False
```
### OS / Environment
- Target: Ubuntu 20.04.3
### Steps to Reproduce
```yaml
- name: Upload key
community.docker.docker_container_exec:
argv:
- tee
- secret.json
container: '{{ docker_id }}'
stdin: '{{ secret_data | to_json }}'
```
Var file (vaulted value is embedded):
```
secret_data:
priv_key:
value: !vault |
$ANSIBLE_VAULT;1.1;AES256
1161...
....
pub_key:
value: some_public_key
```
### Expected Results
Vaulted variable is properly decrypted.
I've tested encryption with:
```
ansible localhost -m ansible.builtin.debug -a var=secret_data -e @all/vars.yml
```
and the syntax is valid (the variable is decrypted as expected), but not when using the docker collection.
### Actual Results
```console
The JSON file that was created contains the Ansible Vault data, not the decrypted string, as shown below:
"value": {"__ansible_vault": "$ANSIBLE_VAULT;1.1;AES256;\n1161...
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76334
|
https://github.com/ansible/ansible/pull/76349
|
de11a1ce781482e280c55d99ce571a130ae474bc
|
480d47d071cc6ed08b80209a3fa75b64311ea648
| 2021-11-20T22:48:54Z |
python
| 2021-12-02T16:02:38Z |
lib/ansible/plugins/filter/core.py
|
# (c) 2012, Jeroen Hoekx <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import base64
import glob
import hashlib
import json
import ntpath
import os.path
import re
import shlex
import sys
import time
import uuid
import yaml
import datetime
from functools import partial
from random import Random, SystemRandom, shuffle
from jinja2.filters import pass_environment
from ansible.errors import AnsibleError, AnsibleFilterError, AnsibleFilterTypeError
from ansible.module_utils.six import string_types, integer_types, reraise, text_type
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.common.collections import is_sequence
from ansible.module_utils.common._collections_compat import Mapping
from ansible.module_utils.common.yaml import yaml_load, yaml_load_all
from ansible.parsing.ajson import AnsibleJSONEncoder
from ansible.parsing.yaml.dumper import AnsibleDumper
from ansible.template import recursive_check_defined
from ansible.utils.display import Display
from ansible.utils.encrypt import passlib_or_crypt
from ansible.utils.hashing import md5s, checksum_s
from ansible.utils.unicode import unicode_wrap
from ansible.utils.vars import merge_hash
display = Display()
UUID_NAMESPACE_ANSIBLE = uuid.UUID('361E6D51-FAEC-444A-9079-341386DA8E2E')
def to_yaml(a, *args, **kw):
'''Make verbose, human readable yaml'''
default_flow_style = kw.pop('default_flow_style', None)
try:
transformed = yaml.dump(a, Dumper=AnsibleDumper, allow_unicode=True, default_flow_style=default_flow_style, **kw)
except Exception as e:
raise AnsibleFilterError("to_yaml - %s" % to_native(e), orig_exc=e)
return to_text(transformed)
def to_nice_yaml(a, indent=4, *args, **kw):
'''Make verbose, human readable yaml'''
try:
transformed = yaml.dump(a, Dumper=AnsibleDumper, indent=indent, allow_unicode=True, default_flow_style=False, **kw)
except Exception as e:
raise AnsibleFilterError("to_nice_yaml - %s" % to_native(e), orig_exc=e)
return to_text(transformed)
def to_json(a, *args, **kw):
''' Convert the value to JSON '''
return json.dumps(a, cls=AnsibleJSONEncoder, *args, **kw)
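# Hedged sketch (an assumption, not necessarily the committed fix for issue 76334
# below): AnsibleJSONEncoder accepts a ``vault_to_text`` keyword, so defaulting it
# to True here would serialize decrypted vault values instead of the raw
# ``__ansible_vault`` mapping:
#
#     def to_json(a, *args, **kw):
#         kw.setdefault('vault_to_text', True)
#         return json.dumps(a, cls=AnsibleJSONEncoder, *args, **kw)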
def to_nice_json(a, indent=4, sort_keys=True, *args, **kw):
'''Make verbose, human readable JSON'''
return to_json(a, indent=indent, sort_keys=sort_keys, separators=(',', ': '), *args, **kw)
def to_bool(a):
''' return a bool for the arg '''
if a is None or isinstance(a, bool):
return a
if isinstance(a, string_types):
a = a.lower()
if a in ('yes', 'on', '1', 'true', 1):
return True
return False
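# Illustrative note (sketch): only 'yes', 'on', '1', 'true' (case-insensitive) and
# the integer 1 map to True; any other string, e.g. 'y' or 'enabled', quietly maps
# to False, while None and real booleans are passed through unchanged.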
def to_datetime(string, format="%Y-%m-%d %H:%M:%S"):
return datetime.datetime.strptime(string, format)
def strftime(string_format, second=None):
''' return a date string formatted with string_format. See https://docs.python.org/3/library/time.html#time.strftime for format '''
if second is not None:
try:
second = float(second)
except Exception:
raise AnsibleFilterError('Invalid value for epoch value (%s)' % second)
return time.strftime(string_format, time.localtime(second))
def quote(a):
''' return its argument quoted for shell usage '''
if a is None:
a = u''
return shlex.quote(to_text(a))
def fileglob(pathname):
''' return list of matched regular files for glob '''
return [g for g in glob.glob(pathname) if os.path.isfile(g)]
def regex_replace(value='', pattern='', replacement='', ignorecase=False, multiline=False):
''' Perform a `re.sub` returning a string '''
value = to_text(value, errors='surrogate_or_strict', nonstring='simplerepr')
flags = 0
if ignorecase:
flags |= re.I
if multiline:
flags |= re.M
_re = re.compile(pattern, flags=flags)
return _re.sub(replacement, value)
def regex_findall(value, regex, multiline=False, ignorecase=False):
''' Perform re.findall and return the list of matches '''
value = to_text(value, errors='surrogate_or_strict', nonstring='simplerepr')
flags = 0
if ignorecase:
flags |= re.I
if multiline:
flags |= re.M
return re.findall(regex, value, flags)
def regex_search(value, regex, *args, **kwargs):
''' Perform re.search and return the list of matches or a backref '''
value = to_text(value, errors='surrogate_or_strict', nonstring='simplerepr')
groups = list()
for arg in args:
if arg.startswith('\\g'):
match = re.match(r'\\g<(\S+)>', arg).group(1)
groups.append(match)
elif arg.startswith('\\'):
match = int(re.match(r'\\(\d+)', arg).group(1))
groups.append(match)
else:
raise AnsibleFilterError('Unknown argument')
flags = 0
if kwargs.get('ignorecase'):
flags |= re.I
if kwargs.get('multiline'):
flags |= re.M
match = re.search(regex, value, flags)
if match:
if not groups:
return match.group()
else:
items = list()
for item in groups:
items.append(match.group(item))
return items
def ternary(value, true_val, false_val, none_val=None):
''' value ? true_val : false_val '''
if value is None and none_val is not None:
return none_val
elif bool(value):
return true_val
else:
return false_val
def regex_escape(string, re_type='python'):
'''Escape all regular expression special characters from STRING.'''
string = to_text(string, errors='surrogate_or_strict', nonstring='simplerepr')
if re_type == 'python':
return re.escape(string)
elif re_type == 'posix_basic':
# list of BRE special chars:
# https://en.wikibooks.org/wiki/Regular_Expressions/POSIX_Basic_Regular_Expressions
return regex_replace(string, r'([].[^$*\\])', r'\\\1')
# TODO: implement posix_extended
# It's similar to, but different from python regex, which is similar to,
# but different from PCRE. It's possible that re.escape would work here.
# https://remram44.github.io/regex-cheatsheet/regex.html#programs
elif re_type == 'posix_extended':
raise AnsibleFilterError('Regex type (%s) not yet implemented' % re_type)
else:
raise AnsibleFilterError('Invalid regex type (%s)' % re_type)
def from_yaml(data):
if isinstance(data, string_types):
# The ``text_type`` call here strips any custom
# string wrapper class, so that CSafeLoader can
# read the data
return yaml_load(text_type(to_text(data, errors='surrogate_or_strict')))
return data
def from_yaml_all(data):
if isinstance(data, string_types):
# The ``text_type`` call here strips any custom
# string wrapper class, so that CSafeLoader can
# read the data
return yaml_load_all(text_type(to_text(data, errors='surrogate_or_strict')))
return data
@pass_environment
def rand(environment, end, start=None, step=None, seed=None):
if seed is None:
r = SystemRandom()
else:
r = Random(seed)
if isinstance(end, integer_types):
if not start:
start = 0
if not step:
step = 1
return r.randrange(start, end, step)
elif hasattr(end, '__iter__'):
if start or step:
raise AnsibleFilterError('start and step can only be used with integer values')
return r.choice(end)
else:
raise AnsibleFilterError('random can only be used on sequences and integers')
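# Illustrative usage (sketch): ``10 | random`` yields an int in [0, 10);
# ``['a', 'b'] | random`` picks one element; passing seed=... makes the
# result repeatable across runs.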
def randomize_list(mylist, seed=None):
try:
mylist = list(mylist)
if seed:
r = Random(seed)
r.shuffle(mylist)
else:
shuffle(mylist)
except Exception:
pass
return mylist
def get_hash(data, hashtype='sha1'):
try:
h = hashlib.new(hashtype)
except Exception as e:
# hash is not supported?
raise AnsibleFilterError(e)
h.update(to_bytes(data, errors='surrogate_or_strict'))
return h.hexdigest()
def get_encrypted_password(password, hashtype='sha512', salt=None, salt_size=None, rounds=None, ident=None):
passlib_mapping = {
'md5': 'md5_crypt',
'blowfish': 'bcrypt',
'sha256': 'sha256_crypt',
'sha512': 'sha512_crypt',
}
hashtype = passlib_mapping.get(hashtype, hashtype)
try:
return passlib_or_crypt(password, hashtype, salt=salt, salt_size=salt_size, rounds=rounds, ident=ident)
except AnsibleError as e:
reraise(AnsibleFilterError, AnsibleFilterError(to_native(e), orig_exc=e), sys.exc_info()[2])
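# Illustrative usage (sketch): ``'secret' | password_hash('sha512')`` returns a
# crypt(3)-style string such as ``$6$<salt>$<digest>``, suitable for the
# ``password`` field of the user module.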
def to_uuid(string, namespace=UUID_NAMESPACE_ANSIBLE):
uuid_namespace = namespace
if not isinstance(uuid_namespace, uuid.UUID):
try:
uuid_namespace = uuid.UUID(namespace)
except (AttributeError, ValueError) as e:
raise AnsibleFilterError("Invalid value '%s' for 'namespace': %s" % (to_native(namespace), to_native(e)))
# uuid.uuid5() requires bytes on Python 2 and bytes or text on Python 3
return to_text(uuid.uuid5(uuid_namespace, to_native(string, errors='surrogate_or_strict')))
def mandatory(a, msg=None):
''' Make a variable mandatory '''
from jinja2.runtime import Undefined
if isinstance(a, Undefined):
if a._undefined_name is not None:
name = "'%s' " % to_text(a._undefined_name)
else:
name = ''
if msg is not None:
raise AnsibleFilterError(to_native(msg))
else:
raise AnsibleFilterError("Mandatory variable %s not defined." % name)
return a
def combine(*terms, **kwargs):
recursive = kwargs.pop('recursive', False)
list_merge = kwargs.pop('list_merge', 'replace')
if kwargs:
raise AnsibleFilterError("'recursive' and 'list_merge' are the only valid keyword arguments")
# allow the user to do `[dict1, dict2, ...] | combine`
dictionaries = flatten(terms, levels=1)
# recursively check that every elements are defined (for jinja2)
recursive_check_defined(dictionaries)
if not dictionaries:
return {}
if len(dictionaries) == 1:
return dictionaries[0]
# merge all the dicts so that the dict at the end of the array have precedence
# over the dict at the beginning.
# we merge the dicts from the highest to the lowest priority because there is
# a huge probability that the lowest priority dict will be the biggest in size
# (as the low prio dict will hold the "default" values and the others will be "patches")
# and merge_hash creates a copy of its first argument.
# so high/right -> low/left is more efficient than low/left -> high/right
high_to_low_prio_dict_iterator = reversed(dictionaries)
result = next(high_to_low_prio_dict_iterator)
for dictionary in high_to_low_prio_dict_iterator:
result = merge_hash(dictionary, result, recursive, list_merge)
return result
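# Illustrative usage (sketch): ``{'a': 1} | combine({'a': 2}, {'b': 3})``
# yields ``{'a': 2, 'b': 3}``; the right-most dictionary wins on key clashes.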
def comment(text, style='plain', **kw):
# Predefined comment types
comment_styles = {
'plain': {
'decoration': '# '
},
'erlang': {
'decoration': '% '
},
'c': {
'decoration': '// '
},
'cblock': {
'beginning': '/*',
'decoration': ' * ',
'end': ' */'
},
'xml': {
'beginning': '<!--',
'decoration': ' - ',
'end': '-->'
}
}
# Pointer to the right comment type
style_params = comment_styles[style]
if 'decoration' in kw:
prepostfix = kw['decoration']
else:
prepostfix = style_params['decoration']
# Default params
p = {
'newline': '\n',
'beginning': '',
'prefix': (prepostfix).rstrip(),
'prefix_count': 1,
'decoration': '',
'postfix': (prepostfix).rstrip(),
'postfix_count': 1,
'end': ''
}
# Update default params
p.update(style_params)
p.update(kw)
# Compose substrings for the final string
str_beginning = ''
if p['beginning']:
str_beginning = "%s%s" % (p['beginning'], p['newline'])
str_prefix = ''
if p['prefix']:
if p['prefix'] != p['newline']:
str_prefix = str(
"%s%s" % (p['prefix'], p['newline'])) * int(p['prefix_count'])
else:
str_prefix = str(
"%s" % (p['newline'])) * int(p['prefix_count'])
str_text = ("%s%s" % (
p['decoration'],
# Prepend each line of the text with the decorator
text.replace(
p['newline'], "%s%s" % (p['newline'], p['decoration'])))).replace(
# Remove trailing spaces when only decorator is on the line
"%s%s" % (p['decoration'], p['newline']),
"%s%s" % (p['decoration'].rstrip(), p['newline']))
str_postfix = p['newline'].join(
[''] + [p['postfix'] for x in range(p['postfix_count'])])
str_end = ''
if p['end']:
str_end = "%s%s" % (p['newline'], p['end'])
# Return the final string
return "%s%s%s%s%s" % (
str_beginning,
str_prefix,
str_text,
str_postfix,
str_end)
@pass_environment
def extract(environment, item, container, morekeys=None):
if morekeys is None:
keys = [item]
elif isinstance(morekeys, list):
keys = [item] + morekeys
else:
keys = [item, morekeys]
value = container
for key in keys:
value = environment.getitem(value, key)
return value
def b64encode(string, encoding='utf-8'):
return to_text(base64.b64encode(to_bytes(string, encoding=encoding, errors='surrogate_or_strict')))
def b64decode(string, encoding='utf-8'):
return to_text(base64.b64decode(to_bytes(string, errors='surrogate_or_strict')), encoding=encoding)
def flatten(mylist, levels=None, skip_nulls=True):
ret = []
for element in mylist:
if skip_nulls and element in (None, 'None', 'null'):
# ignore null items
continue
elif is_sequence(element):
if levels is None:
ret.extend(flatten(element, skip_nulls=skip_nulls))
elif levels >= 1:
# decrement as we go down the stack
ret.extend(flatten(element, levels=(int(levels) - 1), skip_nulls=skip_nulls))
else:
ret.append(element)
else:
ret.append(element)
return ret
def subelements(obj, subelements, skip_missing=False):
'''Accepts a dict or list of dicts, and a dotted accessor and produces a product
of the element and the results of the dotted accessor
>>> obj = [{"name": "alice", "groups": ["wheel"], "authorized": ["/tmp/alice/onekey.pub"]}]
>>> subelements(obj, 'groups')
[({'name': 'alice', 'groups': ['wheel'], 'authorized': ['/tmp/alice/onekey.pub']}, 'wheel')]
'''
if isinstance(obj, dict):
element_list = list(obj.values())
elif isinstance(obj, list):
element_list = obj[:]
else:
raise AnsibleFilterError('obj must be a list of dicts or a nested dict')
if isinstance(subelements, list):
subelement_list = subelements[:]
elif isinstance(subelements, string_types):
subelement_list = subelements.split('.')
else:
raise AnsibleFilterTypeError('subelements must be a list or a string')
results = []
for element in element_list:
values = element
for subelement in subelement_list:
try:
values = values[subelement]
except KeyError:
if skip_missing:
values = []
break
raise AnsibleFilterError("could not find %r key in iterated item %r" % (subelement, values))
except TypeError:
raise AnsibleFilterTypeError("the key %s should point to a dictionary, got '%s'" % (subelement, values))
if not isinstance(values, list):
raise AnsibleFilterTypeError("the key %r should point to a list, got %r" % (subelement, values))
for value in values:
results.append((element, value))
return results
def dict_to_list_of_dict_key_value_elements(mydict, key_name='key', value_name='value'):
''' takes a dictionary and transforms it into a list of dictionaries,
with each having 'key' and 'value' entries that correspond to the keys and values of the original '''
if not isinstance(mydict, Mapping):
raise AnsibleFilterTypeError("dict2items requires a dictionary, got %s instead." % type(mydict))
ret = []
for key in mydict:
ret.append({key_name: key, value_name: mydict[key]})
return ret
def list_of_dict_key_value_elements_to_dict(mylist, key_name='key', value_name='value'):
''' takes a list of dicts with each having a 'key' and 'value' keys, and transforms the list into a dictionary,
effectively as the reverse of dict2items '''
if not is_sequence(mylist):
raise AnsibleFilterTypeError("items2dict requires a list, got %s instead." % type(mylist))
return dict((item[key_name], item[value_name]) for item in mylist)
def path_join(paths):
''' takes a sequence or a string, and returns a concatenation
of the different members '''
if isinstance(paths, string_types):
return os.path.join(paths)
elif is_sequence(paths):
return os.path.join(*paths)
else:
raise AnsibleFilterTypeError("|path_join expects string or sequence, got %s instead." % type(paths))
class FilterModule(object):
''' Ansible core jinja2 filters '''
def filters(self):
return {
# base 64
'b64decode': b64decode,
'b64encode': b64encode,
# uuid
'to_uuid': to_uuid,
# json
'to_json': to_json,
'to_nice_json': to_nice_json,
'from_json': json.loads,
# yaml
'to_yaml': to_yaml,
'to_nice_yaml': to_nice_yaml,
'from_yaml': from_yaml,
'from_yaml_all': from_yaml_all,
# path
'basename': partial(unicode_wrap, os.path.basename),
'dirname': partial(unicode_wrap, os.path.dirname),
'expanduser': partial(unicode_wrap, os.path.expanduser),
'expandvars': partial(unicode_wrap, os.path.expandvars),
'path_join': path_join,
'realpath': partial(unicode_wrap, os.path.realpath),
'relpath': partial(unicode_wrap, os.path.relpath),
'splitext': partial(unicode_wrap, os.path.splitext),
'win_basename': partial(unicode_wrap, ntpath.basename),
'win_dirname': partial(unicode_wrap, ntpath.dirname),
'win_splitdrive': partial(unicode_wrap, ntpath.splitdrive),
# file glob
'fileglob': fileglob,
# types
'bool': to_bool,
'to_datetime': to_datetime,
# date formatting
'strftime': strftime,
# quote string for shell usage
'quote': quote,
# hash filters
# md5 hex digest of string
'md5': md5s,
# sha1 hex digest of string
'sha1': checksum_s,
# checksum of string as used by ansible for checksumming files
'checksum': checksum_s,
# generic hashing
'password_hash': get_encrypted_password,
'hash': get_hash,
# regex
'regex_replace': regex_replace,
'regex_escape': regex_escape,
'regex_search': regex_search,
'regex_findall': regex_findall,
# ? : ;
'ternary': ternary,
# random stuff
'random': rand,
'shuffle': randomize_list,
# undefined
'mandatory': mandatory,
# comment-style decoration
'comment': comment,
# debug
'type_debug': lambda o: o.__class__.__name__,
# Data structures
'combine': combine,
'extract': extract,
'flatten': flatten,
'dict2items': dict_to_list_of_dict_key_value_elements,
'items2dict': list_of_dict_key_value_elements_to_dict,
'subelements': subelements,
'split': partial(unicode_wrap, text_type.split),
}
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,357 |
sudo module doesn't handle --non-interactive flag
|
### Summary
Setting ansible_become_flags to a value that includes "--non-interactive" results in "-on-interactive" being passed to sudo, because the sudo become plugin removes "-n" from the flags but doesn't account for the long-argument equivalent:
`flags = flags.replace('-n', '')`
### Issue Type
Bug Report
### Component Name
lib/ansible/plugins/become/sudo.py
### Ansible Version
```console
$ ansible --version
ansible 2.9.25
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/<redacted>/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /bin/ansible
python version = 2.7.5 (default, Nov 16 2020, 22:23:17) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]
```
### Configuration
```console
$ ansible-config dump --only-changed
DEFAULT_KEEP_REMOTE_FILES(env: ANSIBLE_KEEP_REMOTE_FILES) = False
```
### OS / Environment
CentOS Linux release 7.9.2009 (Core)
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
ansible_become_flags: '--set-home --non-interactive --stdin'
# Run a play that requires escalation via sudo
ansible-playbook <play> --ask-become-pass
```
### Expected Results
The run will have a fatal failure, whose output message will include this:
`"module_stdout": "sudo: invalid option -- 'o'`
If the ansible-playbook command is run with the -vvv option then the relevant ssh connection will include this in the command that it runs:
`'/bin/sh -c '"'"'sudo --set-home -on-interactive --stdin -p "[sudo via ansible`
### Actual Results
```console
<target-server-1> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="myuser"' -o ConnectTimeout=10 -o ControlPath=/home/myuser/.ansible/cp/e98147b6a6 -tt target-server-1 '/bin/sh -c '"'"'sudo -on-interactive --set-home --stdin -p "[sudo via ansible, key=uldzwcgvhxenmnykzovhhyqccxmihgqv] password:" -u escalateuser /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-uldzwcgvhxenmnykzovhhyqccxmihgqv ; /usr/bin/python /var/tmp/ansible-tmp-1637749767.73-18596-268317325457036/AnsiballZ_command.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
<target-server-1> (1, "sudo: invalid option -- 'o'\r\nusage: sudo -h | -K | -k | -V\r\nusage: sudo -v [-AknS] [-g group] [-h host] [-p prompt] [-u user]\r\nusage: sudo -l [-AknS] [-g group] [-h host] [-p prompt] [-U user] [-u user]\r\n [command]\r\nusage: sudo [-AbEHknPS] [-r role] [-t type] [-C num] [-g group] [-h host] [-p\r\n prompt] [-T timeout] [-u user] [VAR=value] [-i|-s] [<command>]\r\nusage: sudo -e [-AknS] [-r role] [-t type] [-C num] [-g group] [-h host] [-p\r\n prompt] [-T timeout] [-u user] file ...\r\n", 'Shared connection to target-server-1 closed.\r\n')
<target-server-1> Failed to connect to the host via ssh: Shared connection to target-server-1 closed.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
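For illustration, a hedged guess at the shape of the changelog fragment such a fix would add (the wording here is hypothetical):
```yaml
bugfixes:
  - sudo become plugin - strip ``--non-interactive`` as a whole flag when a become password is supplied, instead of mangling it into ``-on-interactive``.
```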
|
https://github.com/ansible/ansible/issues/76357
|
https://github.com/ansible/ansible/pull/76389
|
480d47d071cc6ed08b80209a3fa75b64311ea648
|
b02168d644dfaff1f76f52931d7b41485c2753a3
| 2021-11-24T11:41:21Z |
python
| 2021-12-02T16:58:01Z |
changelogs/fragments/fix_sudo_flag_mangling.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,357 |
sudo module doesn't handle --non-interactive flag
|
### Summary
Setting ansible_become_flags to a value that includes "--non-interactive" results in "-on-interactive" being passed to sudo, because the sudo become plugin removes "-n" from the flags but doesn't account for the long-argument equivalent:
`flags = flags.replace('-n', '')`
### Issue Type
Bug Report
### Component Name
lib/ansible/plugins/become/sudo.py
### Ansible Version
```console
$ ansible --version
ansible 2.9.25
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/<redacted>/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /bin/ansible
python version = 2.7.5 (default, Nov 16 2020, 22:23:17) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]
```
### Configuration
```console
$ ansible-config dump --only-changed
DEFAULT_KEEP_REMOTE_FILES(env: ANSIBLE_KEEP_REMOTE_FILES) = False
```
### OS / Environment
CentOS Linux release 7.9.2009 (Core)
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
ansible_become_flags: '--set-home --non-interactive --stdin'
# Run a play that requires escalation via sudo
ansible-playbook <play> --ask-become-pass
```
### Expected Results
The run will have a fatal failure, whose output message will include this:
`"module_stdout": "sudo: invalid option -- 'o'`
If the ansible-playbook command is run with the -vvv option then the relevant ssh connection will include this in the command that it runs:
`'/bin/sh -c '"'"'sudo --set-home -on-interactive --stdin -p "[sudo via ansible`
### Actual Results
```console
<target-server-1> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="myuser"' -o ConnectTimeout=10 -o ControlPath=/home/myuser/.ansible/cp/e98147b6a6 -tt target-server-1 '/bin/sh -c '"'"'sudo -on-interactive --set-home --stdin -p "[sudo via ansible, key=uldzwcgvhxenmnykzovhhyqccxmihgqv] password:" -u escalateuser /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-uldzwcgvhxenmnykzovhhyqccxmihgqv ; /usr/bin/python /var/tmp/ansible-tmp-1637749767.73-18596-268317325457036/AnsiballZ_command.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
<target-server-1> (1, "sudo: invalid option -- 'o'\r\nusage: sudo -h | -K | -k | -V\r\nusage: sudo -v [-AknS] [-g group] [-h host] [-p prompt] [-u user]\r\nusage: sudo -l [-AknS] [-g group] [-h host] [-p prompt] [-U user] [-u user]\r\n [command]\r\nusage: sudo [-AbEHknPS] [-r role] [-t type] [-C num] [-g group] [-h host] [-p\r\n prompt] [-T timeout] [-u user] [VAR=value] [-i|-s] [<command>]\r\nusage: sudo -e [-AknS] [-r role] [-t type] [-C num] [-g group] [-h host] [-p\r\n prompt] [-T timeout] [-u user] file ...\r\n", 'Shared connection to target-server-1 closed.\r\n')
<target-server-1> Failed to connect to the host via ssh: Shared connection to target-server-1 closed.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76357
|
https://github.com/ansible/ansible/pull/76389
|
480d47d071cc6ed08b80209a3fa75b64311ea648
|
b02168d644dfaff1f76f52931d7b41485c2753a3
| 2021-11-24T11:41:21Z |
python
| 2021-12-02T16:58:01Z |
lib/ansible/plugins/become/sudo.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = """
name: sudo
short_description: Substitute User DO
description:
- This become plugin allows your remote/login user to execute commands as another user via the sudo utility.
author: ansible (@core)
version_added: "2.8"
options:
become_user:
description: User you 'become' to execute the task
default: root
ini:
- section: privilege_escalation
key: become_user
- section: sudo_become_plugin
key: user
vars:
- name: ansible_become_user
- name: ansible_sudo_user
env:
- name: ANSIBLE_BECOME_USER
- name: ANSIBLE_SUDO_USER
become_exe:
description: Sudo executable
default: sudo
ini:
- section: privilege_escalation
key: become_exe
- section: sudo_become_plugin
key: executable
vars:
- name: ansible_become_exe
- name: ansible_sudo_exe
env:
- name: ANSIBLE_BECOME_EXE
- name: ANSIBLE_SUDO_EXE
become_flags:
description: Options to pass to sudo
default: -H -S -n
ini:
- section: privilege_escalation
key: become_flags
- section: sudo_become_plugin
key: flags
vars:
- name: ansible_become_flags
- name: ansible_sudo_flags
env:
- name: ANSIBLE_BECOME_FLAGS
- name: ANSIBLE_SUDO_FLAGS
become_pass:
description: Password to pass to sudo
required: False
vars:
- name: ansible_become_password
- name: ansible_become_pass
- name: ansible_sudo_pass
env:
- name: ANSIBLE_BECOME_PASS
- name: ANSIBLE_SUDO_PASS
ini:
- section: sudo_become_plugin
key: password
"""
from ansible.plugins.become import BecomeBase
class BecomeModule(BecomeBase):
name = 'sudo'
# messages for detecting prompted password issues
fail = ('Sorry, try again.',)
missing = ('Sorry, a password is required to run sudo', 'sudo: a password is required')
def build_become_command(self, cmd, shell):
super(BecomeModule, self).build_become_command(cmd, shell)
if not cmd:
return cmd
becomecmd = self.get_option('become_exe') or self.name
flags = self.get_option('become_flags') or ''
prompt = ''
if self.get_option('become_pass'):
self.prompt = '[sudo via ansible, key=%s] password:' % self._id
if flags: # this could be simplified, but kept as is for now for backwards string matching
flags = flags.replace('-n', '')
prompt = '-p "%s"' % (self.prompt)
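# NOTE: the plain substring replace above also mangles long options that merely
# contain '-n' ('--non-interactive' becomes '-on-interactive'); see issue 76357
# below. A hedged sketch of a stricter removal (an assumption, not necessarily
# the merged fix):
#
#     import shlex
#     flags = ' '.join(f for f in shlex.split(flags)
#                      if f not in ('-n', '--non-interactive'))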
user = self.get_option('become_user') or ''
if user:
user = '-u %s' % (user)
return ' '.join([becomecmd, flags, prompt, user, self._build_success_command(cmd, shell)])
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,357 |
sudo module doesn't handle --non-interactive flag
|
### Summary
Setting ansible_become_flags to a value that includes "--non-interactive" results in "-on-interactive" being passed to sudo, because the sudo become plugin removes "-n" from the flags but doesn't account for the long-argument equivalent:
`flags = flags.replace('-n', '')`
### Issue Type
Bug Report
### Component Name
lib/ansible/plugins/become/sudo.py
### Ansible Version
```console
$ ansible --version
ansible 2.9.25
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/<redacted>/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /bin/ansible
python version = 2.7.5 (default, Nov 16 2020, 22:23:17) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]
```
### Configuration
```console
$ ansible-config dump --only-changed
DEFAULT_KEEP_REMOTE_FILES(env: ANSIBLE_KEEP_REMOTE_FILES) = False
```
### OS / Environment
CentOS Linux release 7.9.2009 (Core)
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
ansible_become_flags: '--set-home --non-interactive --stdin'
# Run a play that requires escalation via sudo
ansible-playbook <play> --ask-become-pass
```
### Expected Results
The run will have a fatal failure, whose output message will include this:
`"module_stdout": "sudo: invalid option -- 'o'`
If the ansible-playbook command is run with the -vvv option then the relevant ssh connection will include this in the command that it runs:
`'/bin/sh -c '"'"'sudo --set-home -on-interactive --stdin -p "[sudo via ansible`
### Actual Results
```console
<target-server-1> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="myuser"' -o ConnectTimeout=10 -o ControlPath=/home/myuser/.ansible/cp/e98147b6a6 -tt target-server-1 '/bin/sh -c '"'"'sudo -on-interactive --set-home --stdin -p "[sudo via ansible, key=uldzwcgvhxenmnykzovhhyqccxmihgqv] password:" -u escalateuser /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-uldzwcgvhxenmnykzovhhyqccxmihgqv ; /usr/bin/python /var/tmp/ansible-tmp-1637749767.73-18596-268317325457036/AnsiballZ_command.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
<target-server-1> (1, "sudo: invalid option -- 'o'\r\nusage: sudo -h | -K | -k | -V\r\nusage: sudo -v [-AknS] [-g group] [-h host] [-p prompt] [-u user]\r\nusage: sudo -l [-AknS] [-g group] [-h host] [-p prompt] [-U user] [-u user]\r\n [command]\r\nusage: sudo [-AbEHknPS] [-r role] [-t type] [-C num] [-g group] [-h host] [-p\r\n prompt] [-T timeout] [-u user] [VAR=value] [-i|-s] [<command>]\r\nusage: sudo -e [-AknS] [-r role] [-t type] [-C num] [-g group] [-h host] [-p\r\n prompt] [-T timeout] [-u user] file ...\r\n", 'Shared connection to target-server-1 closed.\r\n')
<target-server-1> Failed to connect to the host via ssh: Shared connection to target-server-1 closed.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76357
|
https://github.com/ansible/ansible/pull/76389
|
480d47d071cc6ed08b80209a3fa75b64311ea648
|
b02168d644dfaff1f76f52931d7b41485c2753a3
| 2021-11-24T11:41:21Z |
python
| 2021-12-02T16:58:01Z |
test/units/plugins/become/test_sudo.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
# (c) 2020 Ansible Project
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import re

from ansible import context
from ansible.plugins.loader import become_loader, shell_loader


def test_sudo(mocker, parser, reset_cli_args):
    options = parser.parse_args([])
    context._init_global_context(options)

    sudo = become_loader.get('sudo')
    sh = shell_loader.get('sh')
    sh.executable = "/bin/bash"

    # Without a password, the flags are passed through untouched.
    sudo.set_options(direct={
        'become_user': 'foo',
        'become_flags': '-n -s -H',
    })
    cmd = sudo.build_become_command('/bin/foo', sh)
    assert re.match(r"""sudo\s+-n -s -H\s+-u foo /bin/bash -c 'echo BECOME-SUCCESS-.+? ; /bin/foo'""", cmd), cmd

    # With a password, '-n' must be stripped so sudo can show its prompt.
    sudo.set_options(direct={
        'become_user': 'foo',
        'become_flags': '-n -s -H',
        'become_pass': 'testpass',
    })
    cmd = sudo.build_become_command('/bin/foo', sh)
    assert re.match(r"""sudo\s+-s\s-H\s+-p "\[sudo via ansible, key=.+?\] password:" -u foo /bin/bash -c 'echo BECOME-SUCCESS-.+? ; /bin/foo'""", cmd), cmd
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,386 |
URI module does not respect or return a status code when auth fails with 401
|
### Summary
The `ansible.builtin.uri` module does not return the status code when the authentication fails. It does not fail gracefully.
Setting `status_code: [200, 401]` does not help.
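The failure mode can be reproduced without Ansible: after exhausting its digest-auth retries, `urllib` raises an `HTTPError` constructed with `fp=None`, so the exception carries a status code but no readable body. A minimal sketch (the URL below is a placeholder, not a live endpoint):

```python
import urllib.error

# urllib raises HTTPError(url, 401, "digest auth failed", None, None) after
# too many digest retries -- note fp=None, so there is no body to read.
e = urllib.error.HTTPError('http://localhost:8005/api/users', 401,
                           'digest auth failed', None, None)
print(e.code)   # 401 -- the status itself is still available on the exception
try:
    e.read()    # no underlying file object was ever attached
except (AttributeError, KeyError) as exc:
    # CPython 3.8 surfaces this as KeyError('file'), matching the traceback below.
    print('cannot read body:', exc)
```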
### Issue Type
Bug Report
### Component Name
uri
### Ansible Version
```console
ansible [core 2.11.6]
config file = /home/mrmeszaros/Workspace/ansible.cfg
configured module search path = ['/home/mrmeszaros/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible
ansible collection location = /home/mrmeszaros/Workspace/.collections
executable location = /usr/local/bin/ansible
python version = 3.8.10 (default, Sep 28 2021, 16:10:42) [GCC 9.3.0]
jinja version = 3.0.2
libyaml = True
```
### Configuration
```console
ANSIBLE_NOCOWS(/home/mrmeszaros/Workspace/ansible.cfg) = True
COLLECTIONS_PATHS(/home/mrmeszaros/Workspace/ansible.cfg) = ['/home/mrmeszaros/Workspace/.collections']
DEFAULT_FILTER_PLUGIN_PATH(/home/mrmeszaros/Workspace/ansible.cfg) = ['/home/mrmeszaros/Workspace/filter_plugins']
DEFAULT_HOST_LIST(/home/mrmeszaros/Workspace/ansible.cfg) = ['/home/mrmeszaros/Workspace/inventories/vcenter']
DEFAULT_LOG_PATH(/home/mrmeszaros/Workspace/ansible.cfg) = /home/mrmeszaros/Workspace/.ansible.log
DEFAULT_ROLES_PATH(/home/mrmeszaros/Workspace/ansible.cfg) = ['/home/mrmeszaros/Workspace/.galaxy', '/home/mrmeszaros/Workspace/roles']
DEFAULT_STDOUT_CALLBACK(/home/mrmeszaros/Workspace/ansible.cfg) = yaml
HOST_KEY_CHECKING(/home/mrmeszaros/Workspace/ansible.cfg) = False
RETRY_FILES_ENABLED(/home/mrmeszaros/Workspace/ansible.cfg) = False
```
### OS / Environment
lsb_release -a:
```
Distributor ID: Ubuntu
Description: Ubuntu 20.04.3 LTS
Release: 20.04
Codename: focal
```
### Steps to Reproduce
Prerequisites: Having moloch/arkime (or really any other web app with a REST API with authentication) installed on a machine.
```yaml
- name: Check users
hosts: moloch
gather_facts: no
vars:
moloch_admin_pass: notit
tasks:
- name: List users
uri:
method: POST
url: http://localhost:8005/api/users
url_username: admin
url_password: "{{ moloch_admin_pass }}"
status_code:
- 200
- 401
changed_when: no
register: list_users_cmd
- debug: var=list_users_cmd
```
### Expected Results
I expected to see the HTTP response with its headers and `status: 401`, but instead the task failed without even setting the response headers or status.
Something like what curl does:
```
curl -X POST --digest --user admin:notit 172.19.90.1:8005/api/users -v
* Trying 172.19.90.1:8005...
* TCP_NODELAY set
* Connected to 172.19.90.1 (172.19.90.1) port 8005 (#0)
* Server auth using Digest with user 'admin'
> POST /api/users HTTP/1.1
> Host: 172.19.90.1:8005
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 401 Unauthorized
< Vary: X-HTTP-Method-Override
< X-Frame-Options: DENY
< X-XSS-Protection: 1; mode=block
< WWW-Authenticate: Digest realm="Moloch", nonce="VItPV2v3xTxcE3DOdbmMnF6t60Kxx3VM", qop="auth"
< Date: Mon, 29 Nov 2021 15:16:18 GMT
< Connection: keep-alive
< Keep-Alive: timeout=5
< Transfer-Encoding: chunked
<
* Ignoring the response-body
* Connection #0 to host 172.19.90.1 left intact
* Issue another request to this URL: 'http://172.19.90.1:8005/api/users'
* Found bundle for host 172.19.90.1: 0x563d9c519b50 [serially]
* Can not multiplex, even if we wanted to!
* Re-using existing connection! (#0) with host 172.19.90.1
* Connected to 172.19.90.1 (172.19.90.1) port 8005 (#0)
* Server auth using Digest with user 'admin'
> POST /api/users HTTP/1.1
> Host: 172.19.90.1:8005
> Authorization: Digest username="admin", realm="Moloch", nonce="VItPV2v3xTxcE3DOdbmMnF6t60Kxx3VM", uri="/api/users", cnonce="ZTllZjJmNDIyMzQ0Zjk3Yjc3ZjAyN2MzNGVkMGUzODU=", nc=00000001, qop=auth, response="03c569ff5b108921de0da31a768d56f5"
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 401 Unauthorized
< Vary: X-HTTP-Method-Override
< X-Frame-Options: DENY
< X-XSS-Protection: 1; mode=block
* Authentication problem. Ignoring this.
< WWW-Authenticate: Digest realm="Moloch", nonce="xNcghsGC2sX3w21sTDCFLK0cK9YStZ9M", qop="auth"
< Date: Mon, 29 Nov 2021 15:16:18 GMT
< Connection: keep-alive
< Keep-Alive: timeout=5
< Transfer-Encoding: chunked
<
* Connection #0 to host 172.19.90.1 left intact
Unauthorized⏎
```
### Actual Results
```console
TASK [List users] ***********************************************************************************************************************************************************************************************************
task path: /home/mrmeszaros/Workspace/playbooks/moloch.singleton.yml:316
<172.19.90.1> ESTABLISH SSH CONNECTION FOR USER: sysadm
<172.19.90.1> SSH: EXEC sshpass -d11 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="roles/linux-base/files/id_rsa"' -o 'User="sysadm"' -o ConnectTimeout=10 -o ControlPath=/home/mrmeszaros/.ansible/cp/f5d1d80b39 172.19.90.1 '/bin/sh -c '"'"'echo ~sysadm && sleep 0'"'"''
<172.19.90.1> (0, b'/home/sysadm\n', b'OpenSSH_8.2p1 Ubuntu-4ubuntu0.3, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/mrmeszaros/.ssh/config\r\ndebug1: /home/mrmeszaros/.ssh/config line 1: Applying options for 172.19.*\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 172.19.90.1 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 1186335\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<172.19.90.1> ESTABLISH SSH CONNECTION FOR USER: sysadm
<172.19.90.1> SSH: EXEC sshpass -d11 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="roles/linux-base/files/id_rsa"' -o 'User="sysadm"' -o ConnectTimeout=10 -o ControlPath=/home/mrmeszaros/.ansible/cp/f5d1d80b39 172.19.90.1 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /home/sysadm/.ansible/tmp `"&& mkdir "` echo /home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703 `" && echo ansible-tmp-1638198795.0287964-1186445-275398547896703="` echo /home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703 `" ) && sleep 0'"'"''
<172.19.90.1> (0, b'ansible-tmp-1638198795.0287964-1186445-275398547896703=/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703\n', b'OpenSSH_8.2p1 Ubuntu-4ubuntu0.3, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/mrmeszaros/.ssh/config\r\ndebug1: /home/mrmeszaros/.ssh/config line 1: Applying options for 172.19.*\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 172.19.90.1 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 1186335\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
Using module file /usr/local/lib/python3.8/dist-packages/ansible/modules/uri.py
<172.19.90.1> PUT /home/mrmeszaros/.ansible/tmp/ansible-local-1186405uxq7nnzk/tmpv42z4a9b TO /home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py
<172.19.90.1> SSH: EXEC sshpass -d11 sftp -o BatchMode=no -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="roles/linux-base/files/id_rsa"' -o 'User="sysadm"' -o ConnectTimeout=10 -o ControlPath=/home/mrmeszaros/.ansible/cp/f5d1d80b39 '[172.19.90.1]'
<172.19.90.1> (0, b'sftp> put /home/mrmeszaros/.ansible/tmp/ansible-local-1186405uxq7nnzk/tmpv42z4a9b /home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py\n', b'OpenSSH_8.2p1 Ubuntu-4ubuntu0.3, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/mrmeszaros/.ssh/config\r\ndebug1: /home/mrmeszaros/.ssh/config line 1: Applying options for 172.19.*\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 172.19.90.1 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 1186335\r\ndebug3: mux_client_request_session: session request sent\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug3: Sent message fd 3 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . -> /home/sysadm size 0\r\ndebug3: Looking up /home/mrmeszaros/.ansible/tmp/ansible-local-1186405uxq7nnzk/tmpv42z4a9b\r\ndebug3: Sent message fd 3 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 32768 bytes at 0\r\ndebug3: Sent message SSH2_FXP_WRITE I:5 O:32768 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:6 O:65536 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:7 O:98304 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:8 O:131072 S:30321\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 5 32768 bytes at 32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 6 32768 bytes at 65536\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 7 32768 bytes at 98304\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 8 30321 bytes at 131072\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<172.19.90.1> ESTABLISH SSH CONNECTION FOR USER: sysadm
<172.19.90.1> SSH: EXEC sshpass -d11 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="roles/linux-base/files/id_rsa"' -o 'User="sysadm"' -o ConnectTimeout=10 -o ControlPath=/home/mrmeszaros/.ansible/cp/f5d1d80b39 172.19.90.1 '/bin/sh -c '"'"'chmod u+x /home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/ /home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py && sleep 0'"'"''
<172.19.90.1> (0, b'', b'OpenSSH_8.2p1 Ubuntu-4ubuntu0.3, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/mrmeszaros/.ssh/config\r\ndebug1: /home/mrmeszaros/.ssh/config line 1: Applying options for 172.19.*\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 172.19.90.1 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 1186335\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<172.19.90.1> ESTABLISH SSH CONNECTION FOR USER: sysadm
<172.19.90.1> SSH: EXEC sshpass -d11 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="roles/linux-base/files/id_rsa"' -o 'User="sysadm"' -o ConnectTimeout=10 -o ControlPath=/home/mrmeszaros/.ansible/cp/f5d1d80b39 -tt 172.19.90.1 '/bin/sh -c '"'"'/usr/bin/python3 /home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py && sleep 0'"'"''
<172.19.90.1> (1, b'Traceback (most recent call last):\r\n File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1755, in fetch_url\r\n File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1535, in open_url\r\n File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1446, in open\r\n File "/usr/lib/python3.8/urllib/request.py", line 222, in urlopen\r\n return opener.open(url, data, timeout)\r\n File "/usr/lib/python3.8/urllib/request.py", line 531, in open\r\n response = meth(req, response)\r\n File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response\r\n response = self.parent.error(\r\n File "/usr/lib/python3.8/urllib/request.py", line 563, in error\r\n result = self._call_chain(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain\r\n result = func(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401\r\n retry = self.http_error_auth_reqed(\'www-authenticate\',\r\n File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed\r\n return self.retry_http_digest_auth(req, authreq)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth\r\n resp = self.parent.open(req, timeout=req.timeout)\r\n File "/usr/lib/python3.8/urllib/request.py", line 531, in open\r\n response = meth(req, response)\r\n File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response\r\n response = self.parent.error(\r\n File "/usr/lib/python3.8/urllib/request.py", line 563, in error\r\n result = self._call_chain(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain\r\n result = func(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401\r\n retry = self.http_error_auth_reqed(\'www-authenticate\',\r\n File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed\r\n return self.retry_http_digest_auth(req, authreq)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth\r\n resp = self.parent.open(req, timeout=req.timeout)\r\n File "/usr/lib/python3.8/urllib/request.py", line 531, in open\r\n response = meth(req, response)\r\n File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response\r\n response = self.parent.error(\r\n File "/usr/lib/python3.8/urllib/request.py", line 563, in error\r\n result = self._call_chain(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain\r\n result = func(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401\r\n retry = self.http_error_auth_reqed(\'www-authenticate\',\r\n File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed\r\n return self.retry_http_digest_auth(req, authreq)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth\r\n resp = self.parent.open(req, timeout=req.timeout)\r\n File "/usr/lib/python3.8/urllib/request.py", line 531, in open\r\n response = meth(req, response)\r\n File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response\r\n response = self.parent.error(\r\n File "/usr/lib/python3.8/urllib/request.py", line 563, in error\r\n result = self._call_chain(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain\r\n result = func(*args)\r\n File 
"/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401\r\n retry = self.http_error_auth_reqed(\'www-authenticate\',\r\n File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed\r\n return self.retry_http_digest_auth(req, authreq)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth\r\n resp = self.parent.open(req, timeout=req.timeout)\r\n File "/usr/lib/python3.8/urllib/request.py", line 531, in open\r\n response = meth(req, response)\r\n File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response\r\n response = self.parent.error(\r\n File "/usr/lib/python3.8/urllib/request.py", line 563, in error\r\n result = self._call_chain(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain\r\n result = func(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401\r\n retry = self.http_error_auth_reqed(\'www-authenticate\',\r\n File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed\r\n return self.retry_http_digest_auth(req, authreq)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth\r\n resp = self.parent.open(req, timeout=req.timeout)\r\n File "/usr/lib/python3.8/urllib/request.py", line 531, in open\r\n response = meth(req, response)\r\n File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response\r\n response = self.parent.error(\r\n File "/usr/lib/python3.8/urllib/request.py", line 563, in error\r\n result = self._call_chain(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain\r\n result = func(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401\r\n retry = self.http_error_auth_reqed(\'www-authenticate\',\r\n File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed\r\n return self.retry_http_digest_auth(req, authreq)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth\r\n resp = self.parent.open(req, timeout=req.timeout)\r\n File "/usr/lib/python3.8/urllib/request.py", line 531, in open\r\n response = meth(req, response)\r\n File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response\r\n response = self.parent.error(\r\n File "/usr/lib/python3.8/urllib/request.py", line 563, in error\r\n result = self._call_chain(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain\r\n result = func(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401\r\n retry = self.http_error_auth_reqed(\'www-authenticate\',\r\n File "/usr/lib/python3.8/urllib/request.py", line 1117, in http_error_auth_reqed\r\n raise HTTPError(req.full_url, 401, "digest auth failed",\r\nurllib.error.HTTPError: HTTP Error 401: digest auth failed\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File "/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py", line 100, in <module>\r\n _ansiballz_main()\r\n File "/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py", line 92, in _ansiballz_main\r\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n File "/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py", line 40, in invoke_module\r\n runpy.run_module(mod_name=\'ansible.modules.uri\', init_globals=dict(_module_fqn=\'ansible.modules.uri\', 
_modlib_path=modlib_path),\r\n File "/usr/lib/python3.8/runpy.py", line 207, in run_module\r\n return _run_module_code(code, init_globals, run_name, mod_spec)\r\n File "/usr/lib/python3.8/runpy.py", line 97, in _run_module_code\r\n _run_code(code, mod_globals, init_globals,\r\n File "/usr/lib/python3.8/runpy.py", line 87, in _run_code\r\n exec(code, run_globals)\r\n File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/modules/uri.py", line 793, in <module>\r\n File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/modules/uri.py", line 710, in main\r\n File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/modules/uri.py", line 599, in uri\r\n File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1804, in fetch_url\r\n File "/usr/lib/python3.8/tempfile.py", line 606, in __getattr__\r\n file = self.__dict__[\'file\']\r\nKeyError: \'file\'\r\n', b'OpenSSH_8.2p1 Ubuntu-4ubuntu0.3, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/mrmeszaros/.ssh/config\r\ndebug1: /home/mrmeszaros/.ssh/config line 1: Applying options for 172.19.*\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 172.19.90.1 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 1186335\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 1\r\nShared connection to 172.19.90.1 closed.\r\n')
<172.19.90.1> Failed to connect to the host via ssh: OpenSSH_8.2p1 Ubuntu-4ubuntu0.3, OpenSSL 1.1.1f 31 Mar 2020
debug1: Reading configuration data /home/mrmeszaros/.ssh/config
debug1: /home/mrmeszaros/.ssh/config line 1: Applying options for 172.19.*
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files
debug1: /etc/ssh/ssh_config line 21: Applying options for *
debug2: resolve_canonicalize: hostname 172.19.90.1 is address
debug1: auto-mux: Trying existing master
debug2: fd 3 setting O_NONBLOCK
debug2: mux_client_hello_exchange: master version 4
debug3: mux_client_forwards: request forwardings: 0 local, 0 remote
debug3: mux_client_request_session: entering
debug3: mux_client_request_alive: entering
debug3: mux_client_request_alive: done pid = 1186335
debug3: mux_client_request_session: session request sent
debug3: mux_client_read_packet: read header failed: Broken pipe
debug2: Received exit status from master 1
Shared connection to 172.19.90.1 closed.
<172.19.90.1> ESTABLISH SSH CONNECTION FOR USER: sysadm
<172.19.90.1> SSH: EXEC sshpass -d11 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="roles/linux-base/files/id_rsa"' -o 'User="sysadm"' -o ConnectTimeout=10 -o ControlPath=/home/mrmeszaros/.ansible/cp/f5d1d80b39 172.19.90.1 '/bin/sh -c '"'"'rm -f -r /home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/ > /dev/null 2>&1 && sleep 0'"'"''
<172.19.90.1> (0, b'', b'OpenSSH_8.2p1 Ubuntu-4ubuntu0.3, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/mrmeszaros/.ssh/config\r\ndebug1: /home/mrmeszaros/.ssh/config line 1: Applying options for 172.19.*\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 172.19.90.1 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 1186335\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1755, in fetch_url
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1535, in open_url
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1446, in open
File "/usr/lib/python3.8/urllib/request.py", line 222, in urlopen
return opener.open(url, data, timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1117, in http_error_auth_reqed
raise HTTPError(req.full_url, 401, "digest auth failed",
urllib.error.HTTPError: HTTP Error 401: digest auth failed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py", line 100, in <module>
_ansiballz_main()
File "/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py", line 92, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py", line 40, in invoke_module
runpy.run_module(mod_name='ansible.modules.uri', init_globals=dict(_module_fqn='ansible.modules.uri', _modlib_path=modlib_path),
File "/usr/lib/python3.8/runpy.py", line 207, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "/usr/lib/python3.8/runpy.py", line 97, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/modules/uri.py", line 793, in <module>
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/modules/uri.py", line 710, in main
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/modules/uri.py", line 599, in uri
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1804, in fetch_url
File "/usr/lib/python3.8/tempfile.py", line 606, in __getattr__
file = self.__dict__['file']
KeyError: 'file'
fatal: [moloch]: FAILED! => changed=false
module_stderr: |-
OpenSSH_8.2p1 Ubuntu-4ubuntu0.3, OpenSSL 1.1.1f 31 Mar 2020
debug1: Reading configuration data /home/mrmeszaros/.ssh/config
debug1: /home/mrmeszaros/.ssh/config line 1: Applying options for 172.19.*
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files
debug1: /etc/ssh/ssh_config line 21: Applying options for *
debug2: resolve_canonicalize: hostname 172.19.90.1 is address
debug1: auto-mux: Trying existing master
debug2: fd 3 setting O_NONBLOCK
debug2: mux_client_hello_exchange: master version 4
debug3: mux_client_forwards: request forwardings: 0 local, 0 remote
debug3: mux_client_request_session: entering
debug3: mux_client_request_alive: entering
debug3: mux_client_request_alive: done pid = 1186335
debug3: mux_client_request_session: session request sent
debug3: mux_client_read_packet: read header failed: Broken pipe
debug2: Received exit status from master 1
Shared connection to 172.19.90.1 closed.
module_stdout: |-
Traceback (most recent call last):
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1755, in fetch_url
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1535, in open_url
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1446, in open
File "/usr/lib/python3.8/urllib/request.py", line 222, in urlopen
return opener.open(url, data, timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1117, in http_error_auth_reqed
raise HTTPError(req.full_url, 401, "digest auth failed",
urllib.error.HTTPError: HTTP Error 401: digest auth failed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py", line 100, in <module>
_ansiballz_main()
File "/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py", line 92, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py", line 40, in invoke_module
runpy.run_module(mod_name='ansible.modules.uri', init_globals=dict(_module_fqn='ansible.modules.uri', _modlib_path=modlib_path),
File "/usr/lib/python3.8/runpy.py", line 207, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "/usr/lib/python3.8/runpy.py", line 97, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/modules/uri.py", line 793, in <module>
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/modules/uri.py", line 710, in main
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/modules/uri.py", line 599, in uri
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1804, in fetch_url
File "/usr/lib/python3.8/tempfile.py", line 606, in __getattr__
file = self.__dict__['file']
KeyError: 'file'
msg: |-
MODULE FAILURE
See stdout/stderr for the exact error
rc: 1
PLAY RECAP ****************************************************************************************************************************************************************************************************************************
moloch : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
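The traceback shows `fetch_url` blowing up while handling the `HTTPError` itself, when it tries to read a body that was never attached. A defensive caller-side sketch (illustrative only, not necessarily the shape of the eventual fix in the linked PR):

```python
def read_error_body(e):
    """Return the body of an HTTPError, or b'' when it carries no fp."""
    try:
        return e.read()
    except (AttributeError, KeyError):
        # An HTTPError built with fp=None has no file object behind it,
        # so reading raises instead of returning an empty body.
        return b''
```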
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76386
|
https://github.com/ansible/ansible/pull/76421
|
82ff7efa9401ba0cc79b59e9d239492258e1791b
|
eca97a19a3d40d3ae1bc3e5e676edea159197157
| 2021-11-29T15:21:14Z |
python
| 2021-12-02T19:42:29Z |
changelogs/fragments/76386-httperror-no-fp.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,386 |
URI module does not respect or return a status code when auth fails with 401
|
### Summary
The `ansible.builtin.uri` module does not return the status code when the authentication fails. It does not fail gracefully.
Setting `status_code: [200, 401]` does not help.
### Issue Type
Bug Report
### Component Name
uri
### Ansible Version
```console
ansible [core 2.11.6]
config file = /home/mrmeszaros/Workspace/ansible.cfg
configured module search path = ['/home/mrmeszaros/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible
ansible collection location = /home/mrmeszaros/Workspace/.collections
executable location = /usr/local/bin/ansible
python version = 3.8.10 (default, Sep 28 2021, 16:10:42) [GCC 9.3.0]
jinja version = 3.0.2
libyaml = True
```
### Configuration
```console
ANSIBLE_NOCOWS(/home/mrmeszaros/Workspace/ansible.cfg) = True
COLLECTIONS_PATHS(/home/mrmeszaros/Workspace/ansible.cfg) = ['/home/mrmeszaros/Workspace/.collections']
DEFAULT_FILTER_PLUGIN_PATH(/home/mrmeszaros/Workspace/ansible.cfg) = ['/home/mrmeszaros/Workspace/filter_plugins']
DEFAULT_HOST_LIST(/home/mrmeszaros/Workspace/ansible.cfg) = ['/home/mrmeszaros/Workspace/inventories/vcenter']
DEFAULT_LOG_PATH(/home/mrmeszaros/Workspace/ansible.cfg) = /home/mrmeszaros/Workspace/.ansible.log
DEFAULT_ROLES_PATH(/home/mrmeszaros/Workspace/ansible.cfg) = ['/home/mrmeszaros/Workspace/.galaxy', '/home/mrmeszaros/Workspace/roles']
DEFAULT_STDOUT_CALLBACK(/home/mrmeszaros/Workspace/ansible.cfg) = yaml
HOST_KEY_CHECKING(/home/mrmeszaros/Workspace/ansible.cfg) = False
RETRY_FILES_ENABLED(/home/mrmeszaros/Workspace/ansible.cfg) = False
```
### OS / Environment
lsb_release -a:
```
Distributor ID: Ubuntu
Description: Ubuntu 20.04.3 LTS
Release: 20.04
Codename: focal
```
### Steps to Reproduce
Prerequisites: Having moloch/arkime (or really any other web app with a REST API with authentication) installed on a machine.
```yaml
- name: Check users
hosts: moloch
gather_facts: no
vars:
moloch_admin_pass: notit
tasks:
- name: List users
uri:
method: POST
url: http://localhost:8005/api/users
url_username: admin
url_password: "{{ moloch_admin_pass }}"
status_code:
- 200
- 401
changed_when: no
register: list_users_cmd
- debug: var=list_users_cmd
```
### Expected Results
I expected to see the HTTP response with its headers and `status: 401`, but instead the task failed without even setting the response headers or status.
Something like what curl does:
```
curl -X POST --digest --user admin:notit 172.19.90.1:8005/api/users -v
* Trying 172.19.90.1:8005...
* TCP_NODELAY set
* Connected to 172.19.90.1 (172.19.90.1) port 8005 (#0)
* Server auth using Digest with user 'admin'
> POST /api/users HTTP/1.1
> Host: 172.19.90.1:8005
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 401 Unauthorized
< Vary: X-HTTP-Method-Override
< X-Frame-Options: DENY
< X-XSS-Protection: 1; mode=block
< WWW-Authenticate: Digest realm="Moloch", nonce="VItPV2v3xTxcE3DOdbmMnF6t60Kxx3VM", qop="auth"
< Date: Mon, 29 Nov 2021 15:16:18 GMT
< Connection: keep-alive
< Keep-Alive: timeout=5
< Transfer-Encoding: chunked
<
* Ignoring the response-body
* Connection #0 to host 172.19.90.1 left intact
* Issue another request to this URL: 'http://172.19.90.1:8005/api/users'
* Found bundle for host 172.19.90.1: 0x563d9c519b50 [serially]
* Can not multiplex, even if we wanted to!
* Re-using existing connection! (#0) with host 172.19.90.1
* Connected to 172.19.90.1 (172.19.90.1) port 8005 (#0)
* Server auth using Digest with user 'admin'
> POST /api/users HTTP/1.1
> Host: 172.19.90.1:8005
> Authorization: Digest username="admin", realm="Moloch", nonce="VItPV2v3xTxcE3DOdbmMnF6t60Kxx3VM", uri="/api/users", cnonce="ZTllZjJmNDIyMzQ0Zjk3Yjc3ZjAyN2MzNGVkMGUzODU=", nc=00000001, qop=auth, response="03c569ff5b108921de0da31a768d56f5"
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 401 Unauthorized
< Vary: X-HTTP-Method-Override
< X-Frame-Options: DENY
< X-XSS-Protection: 1; mode=block
* Authentication problem. Ignoring this.
< WWW-Authenticate: Digest realm="Moloch", nonce="xNcghsGC2sX3w21sTDCFLK0cK9YStZ9M", qop="auth"
< Date: Mon, 29 Nov 2021 15:16:18 GMT
< Connection: keep-alive
< Keep-Alive: timeout=5
< Transfer-Encoding: chunked
<
* Connection #0 to host 172.19.90.1 left intact
Unauthorized⏎
```
### Actual Results
```console
TASK [List users] ***********************************************************************************************************************************************************************************************************
task path: /home/mrmeszaros/Workspace/playbooks/moloch.singleton.yml:316
<172.19.90.1> ESTABLISH SSH CONNECTION FOR USER: sysadm
<172.19.90.1> SSH: EXEC sshpass -d11 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="roles/linux-base/files/id_rsa"' -o 'User="sysadm"' -o ConnectTimeout=10 -o ControlPath=/home/mrmeszaros/.ansible/cp/f5d1d80b39 172.19.90.1 '/bin/sh -c '"'"'echo ~sysadm && sleep 0'"'"''
<172.19.90.1> (0, b'/home/sysadm\n', b'OpenSSH_8.2p1 Ubuntu-4ubuntu0.3, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/mrmeszaros/.ssh/config\r\ndebug1: /home/mrmeszaros/.ssh/config line 1: Applying options for 172.19.*\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 172.19.90.1 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 1186335\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<172.19.90.1> ESTABLISH SSH CONNECTION FOR USER: sysadm
<172.19.90.1> SSH: EXEC sshpass -d11 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="roles/linux-base/files/id_rsa"' -o 'User="sysadm"' -o ConnectTimeout=10 -o ControlPath=/home/mrmeszaros/.ansible/cp/f5d1d80b39 172.19.90.1 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /home/sysadm/.ansible/tmp `"&& mkdir "` echo /home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703 `" && echo ansible-tmp-1638198795.0287964-1186445-275398547896703="` echo /home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703 `" ) && sleep 0'"'"''
<172.19.90.1> (0, b'ansible-tmp-1638198795.0287964-1186445-275398547896703=/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703\n', b'OpenSSH_8.2p1 Ubuntu-4ubuntu0.3, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/mrmeszaros/.ssh/config\r\ndebug1: /home/mrmeszaros/.ssh/config line 1: Applying options for 172.19.*\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 172.19.90.1 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 1186335\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
Using module file /usr/local/lib/python3.8/dist-packages/ansible/modules/uri.py
<172.19.90.1> PUT /home/mrmeszaros/.ansible/tmp/ansible-local-1186405uxq7nnzk/tmpv42z4a9b TO /home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py
<172.19.90.1> SSH: EXEC sshpass -d11 sftp -o BatchMode=no -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="roles/linux-base/files/id_rsa"' -o 'User="sysadm"' -o ConnectTimeout=10 -o ControlPath=/home/mrmeszaros/.ansible/cp/f5d1d80b39 '[172.19.90.1]'
<172.19.90.1> (0, b'sftp> put /home/mrmeszaros/.ansible/tmp/ansible-local-1186405uxq7nnzk/tmpv42z4a9b /home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py\n', b'OpenSSH_8.2p1 Ubuntu-4ubuntu0.3, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/mrmeszaros/.ssh/config\r\ndebug1: /home/mrmeszaros/.ssh/config line 1: Applying options for 172.19.*\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 172.19.90.1 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 1186335\r\ndebug3: mux_client_request_session: session request sent\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug3: Sent message fd 3 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . -> /home/sysadm size 0\r\ndebug3: Looking up /home/mrmeszaros/.ansible/tmp/ansible-local-1186405uxq7nnzk/tmpv42z4a9b\r\ndebug3: Sent message fd 3 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 32768 bytes at 0\r\ndebug3: Sent message SSH2_FXP_WRITE I:5 O:32768 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:6 O:65536 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:7 O:98304 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:8 O:131072 S:30321\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 5 32768 bytes at 32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 6 32768 bytes at 65536\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 7 32768 bytes at 98304\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 8 30321 bytes at 131072\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<172.19.90.1> ESTABLISH SSH CONNECTION FOR USER: sysadm
<172.19.90.1> SSH: EXEC sshpass -d11 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="roles/linux-base/files/id_rsa"' -o 'User="sysadm"' -o ConnectTimeout=10 -o ControlPath=/home/mrmeszaros/.ansible/cp/f5d1d80b39 172.19.90.1 '/bin/sh -c '"'"'chmod u+x /home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/ /home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py && sleep 0'"'"''
<172.19.90.1> (0, b'', b'OpenSSH_8.2p1 Ubuntu-4ubuntu0.3, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/mrmeszaros/.ssh/config\r\ndebug1: /home/mrmeszaros/.ssh/config line 1: Applying options for 172.19.*\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 172.19.90.1 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 1186335\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<172.19.90.1> ESTABLISH SSH CONNECTION FOR USER: sysadm
<172.19.90.1> SSH: EXEC sshpass -d11 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="roles/linux-base/files/id_rsa"' -o 'User="sysadm"' -o ConnectTimeout=10 -o ControlPath=/home/mrmeszaros/.ansible/cp/f5d1d80b39 -tt 172.19.90.1 '/bin/sh -c '"'"'/usr/bin/python3 /home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py && sleep 0'"'"''
<172.19.90.1> (1, b'Traceback (most recent call last):\r\n File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1755, in fetch_url\r\n File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1535, in open_url\r\n File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1446, in open\r\n File "/usr/lib/python3.8/urllib/request.py", line 222, in urlopen\r\n return opener.open(url, data, timeout)\r\n File "/usr/lib/python3.8/urllib/request.py", line 531, in open\r\n response = meth(req, response)\r\n File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response\r\n response = self.parent.error(\r\n File "/usr/lib/python3.8/urllib/request.py", line 563, in error\r\n result = self._call_chain(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain\r\n result = func(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401\r\n retry = self.http_error_auth_reqed(\'www-authenticate\',\r\n File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed\r\n return self.retry_http_digest_auth(req, authreq)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth\r\n resp = self.parent.open(req, timeout=req.timeout)\r\n File "/usr/lib/python3.8/urllib/request.py", line 531, in open\r\n response = meth(req, response)\r\n File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response\r\n response = self.parent.error(\r\n File "/usr/lib/python3.8/urllib/request.py", line 563, in error\r\n result = self._call_chain(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain\r\n result = func(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401\r\n retry = self.http_error_auth_reqed(\'www-authenticate\',\r\n File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed\r\n return self.retry_http_digest_auth(req, authreq)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth\r\n resp = self.parent.open(req, timeout=req.timeout)\r\n File "/usr/lib/python3.8/urllib/request.py", line 531, in open\r\n response = meth(req, response)\r\n File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response\r\n response = self.parent.error(\r\n File "/usr/lib/python3.8/urllib/request.py", line 563, in error\r\n result = self._call_chain(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain\r\n result = func(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401\r\n retry = self.http_error_auth_reqed(\'www-authenticate\',\r\n File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed\r\n return self.retry_http_digest_auth(req, authreq)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth\r\n resp = self.parent.open(req, timeout=req.timeout)\r\n File "/usr/lib/python3.8/urllib/request.py", line 531, in open\r\n response = meth(req, response)\r\n File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response\r\n response = self.parent.error(\r\n File "/usr/lib/python3.8/urllib/request.py", line 563, in error\r\n result = self._call_chain(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain\r\n result = func(*args)\r\n File 
"/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401\r\n retry = self.http_error_auth_reqed(\'www-authenticate\',\r\n File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed\r\n return self.retry_http_digest_auth(req, authreq)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth\r\n resp = self.parent.open(req, timeout=req.timeout)\r\n File "/usr/lib/python3.8/urllib/request.py", line 531, in open\r\n response = meth(req, response)\r\n File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response\r\n response = self.parent.error(\r\n File "/usr/lib/python3.8/urllib/request.py", line 563, in error\r\n result = self._call_chain(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain\r\n result = func(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401\r\n retry = self.http_error_auth_reqed(\'www-authenticate\',\r\n File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed\r\n return self.retry_http_digest_auth(req, authreq)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth\r\n resp = self.parent.open(req, timeout=req.timeout)\r\n File "/usr/lib/python3.8/urllib/request.py", line 531, in open\r\n response = meth(req, response)\r\n File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response\r\n response = self.parent.error(\r\n File "/usr/lib/python3.8/urllib/request.py", line 563, in error\r\n result = self._call_chain(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain\r\n result = func(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401\r\n retry = self.http_error_auth_reqed(\'www-authenticate\',\r\n File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed\r\n return self.retry_http_digest_auth(req, authreq)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth\r\n resp = self.parent.open(req, timeout=req.timeout)\r\n File "/usr/lib/python3.8/urllib/request.py", line 531, in open\r\n response = meth(req, response)\r\n File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response\r\n response = self.parent.error(\r\n File "/usr/lib/python3.8/urllib/request.py", line 563, in error\r\n result = self._call_chain(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain\r\n result = func(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401\r\n retry = self.http_error_auth_reqed(\'www-authenticate\',\r\n File "/usr/lib/python3.8/urllib/request.py", line 1117, in http_error_auth_reqed\r\n raise HTTPError(req.full_url, 401, "digest auth failed",\r\nurllib.error.HTTPError: HTTP Error 401: digest auth failed\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File "/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py", line 100, in <module>\r\n _ansiballz_main()\r\n File "/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py", line 92, in _ansiballz_main\r\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n File "/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py", line 40, in invoke_module\r\n runpy.run_module(mod_name=\'ansible.modules.uri\', init_globals=dict(_module_fqn=\'ansible.modules.uri\', 
_modlib_path=modlib_path),\r\n File "/usr/lib/python3.8/runpy.py", line 207, in run_module\r\n return _run_module_code(code, init_globals, run_name, mod_spec)\r\n File "/usr/lib/python3.8/runpy.py", line 97, in _run_module_code\r\n _run_code(code, mod_globals, init_globals,\r\n File "/usr/lib/python3.8/runpy.py", line 87, in _run_code\r\n exec(code, run_globals)\r\n File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/modules/uri.py", line 793, in <module>\r\n File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/modules/uri.py", line 710, in main\r\n File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/modules/uri.py", line 599, in uri\r\n File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1804, in fetch_url\r\n File "/usr/lib/python3.8/tempfile.py", line 606, in __getattr__\r\n file = self.__dict__[\'file\']\r\nKeyError: \'file\'\r\n', b'OpenSSH_8.2p1 Ubuntu-4ubuntu0.3, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/mrmeszaros/.ssh/config\r\ndebug1: /home/mrmeszaros/.ssh/config line 1: Applying options for 172.19.*\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 172.19.90.1 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 1186335\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 1\r\nShared connection to 172.19.90.1 closed.\r\n')
<172.19.90.1> Failed to connect to the host via ssh: OpenSSH_8.2p1 Ubuntu-4ubuntu0.3, OpenSSL 1.1.1f 31 Mar 2020
debug1: Reading configuration data /home/mrmeszaros/.ssh/config
debug1: /home/mrmeszaros/.ssh/config line 1: Applying options for 172.19.*
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files
debug1: /etc/ssh/ssh_config line 21: Applying options for *
debug2: resolve_canonicalize: hostname 172.19.90.1 is address
debug1: auto-mux: Trying existing master
debug2: fd 3 setting O_NONBLOCK
debug2: mux_client_hello_exchange: master version 4
debug3: mux_client_forwards: request forwardings: 0 local, 0 remote
debug3: mux_client_request_session: entering
debug3: mux_client_request_alive: entering
debug3: mux_client_request_alive: done pid = 1186335
debug3: mux_client_request_session: session request sent
debug3: mux_client_read_packet: read header failed: Broken pipe
debug2: Received exit status from master 1
Shared connection to 172.19.90.1 closed.
<172.19.90.1> ESTABLISH SSH CONNECTION FOR USER: sysadm
<172.19.90.1> SSH: EXEC sshpass -d11 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="roles/linux-base/files/id_rsa"' -o 'User="sysadm"' -o ConnectTimeout=10 -o ControlPath=/home/mrmeszaros/.ansible/cp/f5d1d80b39 172.19.90.1 '/bin/sh -c '"'"'rm -f -r /home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/ > /dev/null 2>&1 && sleep 0'"'"''
<172.19.90.1> (0, b'', b'OpenSSH_8.2p1 Ubuntu-4ubuntu0.3, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/mrmeszaros/.ssh/config\r\ndebug1: /home/mrmeszaros/.ssh/config line 1: Applying options for 172.19.*\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 172.19.90.1 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 1186335\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1755, in fetch_url
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1535, in open_url
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1446, in open
File "/usr/lib/python3.8/urllib/request.py", line 222, in urlopen
return opener.open(url, data, timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1117, in http_error_auth_reqed
raise HTTPError(req.full_url, 401, "digest auth failed",
urllib.error.HTTPError: HTTP Error 401: digest auth failed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py", line 100, in <module>
_ansiballz_main()
File "/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py", line 92, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py", line 40, in invoke_module
runpy.run_module(mod_name='ansible.modules.uri', init_globals=dict(_module_fqn='ansible.modules.uri', _modlib_path=modlib_path),
File "/usr/lib/python3.8/runpy.py", line 207, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "/usr/lib/python3.8/runpy.py", line 97, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/modules/uri.py", line 793, in <module>
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/modules/uri.py", line 710, in main
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/modules/uri.py", line 599, in uri
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1804, in fetch_url
File "/usr/lib/python3.8/tempfile.py", line 606, in __getattr__
file = self.__dict__['file']
KeyError: 'file'
fatal: [moloch]: FAILED! => changed=false
module_stderr: |-
OpenSSH_8.2p1 Ubuntu-4ubuntu0.3, OpenSSL 1.1.1f 31 Mar 2020
debug1: Reading configuration data /home/mrmeszaros/.ssh/config
debug1: /home/mrmeszaros/.ssh/config line 1: Applying options for 172.19.*
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files
debug1: /etc/ssh/ssh_config line 21: Applying options for *
debug2: resolve_canonicalize: hostname 172.19.90.1 is address
debug1: auto-mux: Trying existing master
debug2: fd 3 setting O_NONBLOCK
debug2: mux_client_hello_exchange: master version 4
debug3: mux_client_forwards: request forwardings: 0 local, 0 remote
debug3: mux_client_request_session: entering
debug3: mux_client_request_alive: entering
debug3: mux_client_request_alive: done pid = 1186335
debug3: mux_client_request_session: session request sent
debug3: mux_client_read_packet: read header failed: Broken pipe
debug2: Received exit status from master 1
Shared connection to 172.19.90.1 closed.
module_stdout: |-
Traceback (most recent call last):
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1755, in fetch_url
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1535, in open_url
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1446, in open
File "/usr/lib/python3.8/urllib/request.py", line 222, in urlopen
return opener.open(url, data, timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1117, in http_error_auth_reqed
raise HTTPError(req.full_url, 401, "digest auth failed",
urllib.error.HTTPError: HTTP Error 401: digest auth failed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py", line 100, in <module>
_ansiballz_main()
File "/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py", line 92, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py", line 40, in invoke_module
runpy.run_module(mod_name='ansible.modules.uri', init_globals=dict(_module_fqn='ansible.modules.uri', _modlib_path=modlib_path),
File "/usr/lib/python3.8/runpy.py", line 207, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "/usr/lib/python3.8/runpy.py", line 97, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/modules/uri.py", line 793, in <module>
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/modules/uri.py", line 710, in main
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/modules/uri.py", line 599, in uri
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1804, in fetch_url
File "/usr/lib/python3.8/tempfile.py", line 606, in __getattr__
file = self.__dict__['file']
KeyError: 'file'
msg: |-
MODULE FAILURE
See stdout/stderr for the exact error
rc: 1
PLAY RECAP ****************************************************************************************************************************************************************************************************************************
moloch : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
issue_url: https://github.com/ansible/ansible/issues/76386
pull_url: https://github.com/ansible/ansible/pull/76421
before_fix_sha: 82ff7efa9401ba0cc79b59e9d239492258e1791b
after_fix_sha: eca97a19a3d40d3ae1bc3e5e676edea159197157
report_datetime: 2021-11-29T15:21:14Z
language: python
commit_datetime: 2021-12-02T19:42:29Z
updated_file: lib/ansible/module_utils/urls.py
# This code is part of Ansible, but is an independent component.
# This particular file snippet, and this file snippet only, is BSD licensed.
# Modules you write using this snippet, which is embedded dynamically by Ansible
# still belong to the author of the module, and may assign their own license
# to the complete work.
#
# Copyright (c), Michael DeHaan <[email protected]>, 2012-2013
# Copyright (c), Toshio Kuratomi <[email protected]>, 2015
#
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
#
# The match_hostname function and supporting code is under the terms and
# conditions of the Python Software Foundation License. They were taken from
# the Python3 standard library and adapted for use in Python2. See comments in the
# source for which code precisely is under this License.
#
# PSF License (see licenses/PSF-license.txt or https://opensource.org/licenses/Python-2.0)
'''
The **urls** utils module offers a replacement for the urllib2 python library.
urllib2 is the python stdlib way to retrieve files from the Internet but it
lacks some security features (around verifying SSL certificates) that users
should care about in most situations. Using the functions in this module corrects
deficiencies in the urllib2 module wherever possible.
There are also third-party libraries (for instance, requests) which can be used
to replace urllib2 with a more secure library. However, all third party libraries
require that the library be installed on the managed machine. That is an extra step
for users making use of a module. If possible, avoid third party libraries by using
this code instead.
'''
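# A minimal usage sketch (illustrative only; the URL is a placeholder):
#
#     from ansible.module_utils.urls import open_url
#
#     # Unlike bare urllib2, open_url() validates SSL certificates by default.
#     response = open_url('https://example.com/api', method='GET', timeout=10)
#     data = response.read()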
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import atexit
import base64
import email.mime.multipart
import email.mime.nonmultipart
import email.mime.application
import email.parser
import email.utils
import functools
import mimetypes
import netrc
import os
import platform
import re
import socket
import sys
import tempfile
import traceback
from contextlib import contextmanager
try:
import email.policy
except ImportError:
# Py2
import email.generator
try:
import httplib
except ImportError:
# Python 3
import http.client as httplib
import ansible.module_utils.six.moves.http_cookiejar as cookiejar
import ansible.module_utils.six.moves.urllib.request as urllib_request
import ansible.module_utils.six.moves.urllib.error as urllib_error
from ansible.module_utils.common.collections import Mapping
from ansible.module_utils.six import PY2, PY3, string_types
from ansible.module_utils.six.moves import cStringIO
from ansible.module_utils.basic import get_distribution, missing_required_lib
from ansible.module_utils._text import to_bytes, to_native, to_text
try:
# python3
import urllib.request as urllib_request
from urllib.request import AbstractHTTPHandler, BaseHandler
except ImportError:
# python2
import urllib2 as urllib_request
from urllib2 import AbstractHTTPHandler, BaseHandler
urllib_request.HTTPRedirectHandler.http_error_308 = urllib_request.HTTPRedirectHandler.http_error_307
try:
from ansible.module_utils.six.moves.urllib.parse import urlparse, urlunparse, unquote
HAS_URLPARSE = True
except Exception:
HAS_URLPARSE = False
try:
import ssl
HAS_SSL = True
except Exception:
HAS_SSL = False
try:
# SNI Handling needs python2.7.9's SSLContext
from ssl import create_default_context, SSLContext
HAS_SSLCONTEXT = True
except ImportError:
HAS_SSLCONTEXT = False
# SNI Handling for python < 2.7.9 with urllib3 support
try:
# urllib3>=1.15
HAS_URLLIB3_SSL_WRAP_SOCKET = False
try:
from urllib3.contrib.pyopenssl import PyOpenSSLContext
except ImportError:
from requests.packages.urllib3.contrib.pyopenssl import PyOpenSSLContext
HAS_URLLIB3_PYOPENSSLCONTEXT = True
except ImportError:
# urllib3<1.15,>=1.6
HAS_URLLIB3_PYOPENSSLCONTEXT = False
try:
try:
from urllib3.contrib.pyopenssl import ssl_wrap_socket
except ImportError:
from requests.packages.urllib3.contrib.pyopenssl import ssl_wrap_socket
HAS_URLLIB3_SSL_WRAP_SOCKET = True
except ImportError:
pass
# Select a protocol that includes all secure tls protocols
# Exclude insecure ssl protocols if possible
if HAS_SSL:
# If we can't find extra tls methods, ssl.PROTOCOL_TLSv1 is sufficient
PROTOCOL = ssl.PROTOCOL_TLSv1
if not HAS_SSLCONTEXT and HAS_SSL:
try:
import ctypes
import ctypes.util
except ImportError:
# python 2.4 (likely rhel5 which doesn't have tls1.1 support in its openssl)
pass
else:
libssl_name = ctypes.util.find_library('ssl')
libssl = ctypes.CDLL(libssl_name)
for method in ('TLSv1_1_method', 'TLSv1_2_method'):
try:
libssl[method]
# Found something - we'll let openssl autonegotiate and hope
# the server has disabled sslv2 and 3. best we can do.
PROTOCOL = ssl.PROTOCOL_SSLv23
break
except AttributeError:
pass
del libssl
# The following makes it easier for us to script updates of the bundled backports.ssl_match_hostname
# The bundled backports.ssl_match_hostname should really be moved into its own file for processing
_BUNDLED_METADATA = {"pypi_name": "backports.ssl_match_hostname", "version": "3.7.0.1"}
LOADED_VERIFY_LOCATIONS = set()
HAS_MATCH_HOSTNAME = True
try:
from ssl import match_hostname, CertificateError
except ImportError:
try:
from backports.ssl_match_hostname import match_hostname, CertificateError
except ImportError:
HAS_MATCH_HOSTNAME = False
HAS_CRYPTOGRAPHY = True
try:
from cryptography import x509
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import UnsupportedAlgorithm
except ImportError:
HAS_CRYPTOGRAPHY = False
# Old import for GSSAPI authentication, this is not used in urls.py but kept for backwards compatibility.
try:
import urllib_gssapi
HAS_GSSAPI = True
except ImportError:
HAS_GSSAPI = False
GSSAPI_IMP_ERR = None
try:
import gssapi
class HTTPGSSAPIAuthHandler(BaseHandler):
""" Handles Negotiate/Kerberos support through the gssapi library. """
AUTH_HEADER_PATTERN = re.compile(r'(?:.*)\s*(Negotiate|Kerberos)\s*([^,]*),?', re.I)
handler_order = 480 # Handle before Digest authentication
def __init__(self, username=None, password=None):
self.username = username
self.password = password
self._context = None
def get_auth_value(self, headers):
auth_match = self.AUTH_HEADER_PATTERN.search(headers.get('www-authenticate', ''))
if auth_match:
return auth_match.group(1), base64.b64decode(auth_match.group(2))
def http_error_401(self, req, fp, code, msg, headers):
# If we've already attempted the auth and we've reached this again then there was a failure.
if self._context:
return
parsed = generic_urlparse(urlparse(req.get_full_url()))
auth_header = self.get_auth_value(headers)
if not auth_header:
return
auth_protocol, in_token = auth_header
username = None
if self.username:
username = gssapi.Name(self.username, name_type=gssapi.NameType.user)
if username and self.password:
if not hasattr(gssapi.raw, 'acquire_cred_with_password'):
raise NotImplementedError("Platform GSSAPI library does not support "
"gss_acquire_cred_with_password, cannot acquire GSSAPI credential with "
"explicit username and password.")
b_password = to_bytes(self.password, errors='surrogate_or_strict')
cred = gssapi.raw.acquire_cred_with_password(username, b_password, usage='initiate').creds
else:
cred = gssapi.Credentials(name=username, usage='initiate')
# Get the peer certificate for the channel binding token if possible (HTTPS). A bug on macOS causes the
# authentication to fail when the CBT is present. Just skip that platform.
cbt = None
cert = getpeercert(fp, True)
if cert and platform.system() != 'Darwin':
cert_hash = get_channel_binding_cert_hash(cert)
if cert_hash:
cbt = gssapi.raw.ChannelBindings(application_data=b"tls-server-end-point:" + cert_hash)
# TODO: We could add another option that is set to include the port in the SPN if desired in the future.
target = gssapi.Name("HTTP@%s" % parsed['hostname'], gssapi.NameType.hostbased_service)
self._context = gssapi.SecurityContext(usage="initiate", name=target, creds=cred, channel_bindings=cbt)
resp = None
while not self._context.complete:
out_token = self._context.step(in_token)
if not out_token:
break
auth_header = '%s %s' % (auth_protocol, to_native(base64.b64encode(out_token)))
req.add_unredirected_header('Authorization', auth_header)
resp = self.parent.open(req)
# The response could contain a token that the client uses to validate the server
auth_header = self.get_auth_value(resp.headers)
if not auth_header:
break
in_token = auth_header[1]
return resp
except ImportError:
GSSAPI_IMP_ERR = traceback.format_exc()
HTTPGSSAPIAuthHandler = None
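# Illustrative sketch (hypothetical endpoint): when the gssapi library is
# available, the handler above is installed like any other urllib handler:
#
#     opener = urllib_request.build_opener(HTTPGSSAPIAuthHandler())
#     resp = opener.open('http://sso.internal.example.com/')
#
# On a 401 carrying a Negotiate/Kerberos challenge, http_error_401() steps the
# GSSAPI security context and retries the request with an Authorization header.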
if not HAS_MATCH_HOSTNAME:
# The following block of code is under the terms and conditions of the
# Python Software Foundation License
"""The match_hostname() function from Python 3.4, essential when using SSL."""
try:
# Divergence: Python-3.7+'s _ssl has this exception type but older Pythons do not
from _ssl import SSLCertVerificationError
CertificateError = SSLCertVerificationError
except ImportError:
class CertificateError(ValueError):
pass
def _dnsname_match(dn, hostname):
"""Matching according to RFC 6125, section 6.4.3
- Hostnames are compared lower case.
- For IDNA, both dn and hostname must be encoded as IDN A-label (ACE).
- Partial wildcards like 'www*.example.org', multiple wildcards, sole
          wildcard or wildcards in labels other than the left-most label are not
supported and a CertificateError is raised.
- A wildcard must match at least one character.
"""
if not dn:
return False
wildcards = dn.count('*')
# speed up common case w/o wildcards
if not wildcards:
return dn.lower() == hostname.lower()
if wildcards > 1:
# Divergence .format() to percent formatting for Python < 2.6
raise CertificateError(
"too many wildcards in certificate DNS name: %s" % repr(dn))
dn_leftmost, sep, dn_remainder = dn.partition('.')
if '*' in dn_remainder:
# Only match wildcard in leftmost segment.
# Divergence .format() to percent formatting for Python < 2.6
raise CertificateError(
"wildcard can only be present in the leftmost label: "
"%s." % repr(dn))
if not sep:
# no right side
# Divergence .format() to percent formatting for Python < 2.6
raise CertificateError(
"sole wildcard without additional labels are not support: "
"%s." % repr(dn))
if dn_leftmost != '*':
# no partial wildcard matching
# Divergence .format() to percent formatting for Python < 2.6
raise CertificateError(
"partial wildcards in leftmost label are not supported: "
"%s." % repr(dn))
hostname_leftmost, sep, hostname_remainder = hostname.partition('.')
if not hostname_leftmost or not sep:
# wildcard must match at least one char
return False
return dn_remainder.lower() == hostname_remainder.lower()
def _inet_paton(ipname):
"""Try to convert an IP address to packed binary form
Supports IPv4 addresses on all platforms and IPv6 on platforms with IPv6
support.
"""
# inet_aton() also accepts strings like '1'
# Divergence: We make sure we have native string type for all python versions
try:
b_ipname = to_bytes(ipname, errors='strict')
except UnicodeError:
raise ValueError("%s must be an all-ascii string." % repr(ipname))
# Set ipname in native string format
if sys.version_info < (3,):
n_ipname = b_ipname
else:
n_ipname = ipname
if n_ipname.count('.') == 3:
try:
return socket.inet_aton(n_ipname)
# Divergence: OSError on late python3. socket.error earlier.
# Null bytes generate ValueError on python3(we want to raise
# ValueError anyway), TypeError # earlier
except (OSError, socket.error, TypeError):
pass
try:
return socket.inet_pton(socket.AF_INET6, n_ipname)
# Divergence: OSError on late python3. socket.error earlier.
# Null bytes generate ValueError on python3(we want to raise
# ValueError anyway), TypeError # earlier
except (OSError, socket.error, TypeError):
# Divergence .format() to percent formatting for Python < 2.6
raise ValueError("%s is neither an IPv4 nor an IP6 "
"address." % repr(ipname))
except AttributeError:
# AF_INET6 not available
pass
# Divergence .format() to percent formatting for Python < 2.6
raise ValueError("%s is not an IPv4 address." % repr(ipname))
def _ipaddress_match(ipname, host_ip):
"""Exact matching of IP addresses.
RFC 6125 explicitly doesn't define an algorithm for this
(section 1.7.2 - "Out of Scope").
"""
# OpenSSL may add a trailing newline to a subjectAltName's IP address
ip = _inet_paton(ipname.rstrip())
return ip == host_ip
def match_hostname(cert, hostname):
"""Verify that *cert* (in decoded format as returned by
SSLSocket.getpeercert()) matches the *hostname*. RFC 2818 and RFC 6125
rules are followed.
The function matches IP addresses rather than dNSNames if hostname is a
valid ipaddress string. IPv4 addresses are supported on all platforms.
IPv6 addresses are supported on platforms with IPv6 support (AF_INET6
and inet_pton).
CertificateError is raised on failure. On success, the function
returns nothing.
"""
if not cert:
raise ValueError("empty or no certificate, match_hostname needs a "
"SSL socket or SSL context with either "
"CERT_OPTIONAL or CERT_REQUIRED")
try:
# Divergence: Deal with hostname as bytes
host_ip = _inet_paton(to_text(hostname, errors='strict'))
except UnicodeError:
# Divergence: Deal with hostname as byte strings.
# IP addresses should be all ascii, so we consider it not
# an IP address if this fails
host_ip = None
except ValueError:
# Not an IP address (common case)
host_ip = None
dnsnames = []
san = cert.get('subjectAltName', ())
for key, value in san:
if key == 'DNS':
if host_ip is None and _dnsname_match(value, hostname):
return
dnsnames.append(value)
elif key == 'IP Address':
if host_ip is not None and _ipaddress_match(value, host_ip):
return
dnsnames.append(value)
if not dnsnames:
# The subject is only checked when there is no dNSName entry
# in subjectAltName
for sub in cert.get('subject', ()):
for key, value in sub:
# XXX according to RFC 2818, the most specific Common Name
# must be used.
if key == 'commonName':
if _dnsname_match(value, hostname):
return
dnsnames.append(value)
if len(dnsnames) > 1:
raise CertificateError("hostname %r doesn't match either of %s" % (hostname, ', '.join(map(repr, dnsnames))))
elif len(dnsnames) == 1:
raise CertificateError("hostname %r doesn't match %r" % (hostname, dnsnames[0]))
else:
raise CertificateError("no appropriate commonName or subjectAltName fields were found")
# End of Python Software Foundation Licensed code
HAS_MATCH_HOSTNAME = True
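# Illustrative sketch of the backported function above: match_hostname()
# returns None on success and raises CertificateError on mismatch, e.g.
#
#     cert = {'subjectAltName': (('DNS', 'example.com'),)}
#     match_hostname(cert, 'example.com')  # passes (returns None)
#     match_hostname(cert, 'evil.test')    # raises CertificateError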
# This is a dummy cacert provided for macOS since you need at least 1
# ca cert, regardless of validity, for Python on macOS to use the
# keychain functionality in OpenSSL for validating SSL certificates.
# See: http://mercurial.selenic.com/wiki/CACertificates#Mac_OS_X_10.6_and_higher
b_DUMMY_CA_CERT = b"""-----BEGIN CERTIFICATE-----
MIICvDCCAiWgAwIBAgIJAO8E12S7/qEpMA0GCSqGSIb3DQEBBQUAMEkxCzAJBgNV
BAYTAlVTMRcwFQYDVQQIEw5Ob3J0aCBDYXJvbGluYTEPMA0GA1UEBxMGRHVyaGFt
MRAwDgYDVQQKEwdBbnNpYmxlMB4XDTE0MDMxODIyMDAyMloXDTI0MDMxNTIyMDAy
MlowSTELMAkGA1UEBhMCVVMxFzAVBgNVBAgTDk5vcnRoIENhcm9saW5hMQ8wDQYD
VQQHEwZEdXJoYW0xEDAOBgNVBAoTB0Fuc2libGUwgZ8wDQYJKoZIhvcNAQEBBQAD
gY0AMIGJAoGBANtvpPq3IlNlRbCHhZAcP6WCzhc5RbsDqyh1zrkmLi0GwcQ3z/r9
gaWfQBYhHpobK2Tiq11TfraHeNB3/VfNImjZcGpN8Fl3MWwu7LfVkJy3gNNnxkA1
4Go0/LmIvRFHhbzgfuo9NFgjPmmab9eqXJceqZIlz2C8xA7EeG7ku0+vAgMBAAGj
gaswgagwHQYDVR0OBBYEFPnN1nPRqNDXGlCqCvdZchRNi/FaMHkGA1UdIwRyMHCA
FPnN1nPRqNDXGlCqCvdZchRNi/FaoU2kSzBJMQswCQYDVQQGEwJVUzEXMBUGA1UE
CBMOTm9ydGggQ2Fyb2xpbmExDzANBgNVBAcTBkR1cmhhbTEQMA4GA1UEChMHQW5z
aWJsZYIJAO8E12S7/qEpMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEFBQADgYEA
MUB80IR6knq9K/tY+hvPsZer6eFMzO3JGkRFBh2kn6JdMDnhYGX7AXVHGflrwNQH
qFy+aenWXsC0ZvrikFxbQnX8GVtDADtVznxOi7XzFw7JOxdsVrpXgSN0eh0aMzvV
zKPZsZ2miVGclicJHzm5q080b1p/sZtuKIEZk6vZqEg=
-----END CERTIFICATE-----
"""
b_PEM_CERT_RE = re.compile(
br'^-----BEGIN CERTIFICATE-----\n.+?-----END CERTIFICATE-----$',
flags=re.M | re.S
)
#
# Exceptions
#
class ConnectionError(Exception):
"""Failed to connect to the server"""
pass
class ProxyError(ConnectionError):
"""Failure to connect because of a proxy"""
pass
class SSLValidationError(ConnectionError):
"""Failure to connect due to SSL validation failing"""
pass
class NoSSLError(SSLValidationError):
"""Needed to connect to an HTTPS url but no ssl library available to verify the certificate"""
pass
class MissingModuleError(Exception):
"""Failed to import 3rd party module required by the caller"""
def __init__(self, message, import_traceback):
super(MissingModuleError, self).__init__(message)
self.import_traceback = import_traceback
# Some environments (Google Compute Engine's CoreOS deploys) do not compile
# against openssl and thus do not have any HTTPS support.
CustomHTTPSConnection = None
CustomHTTPSHandler = None
HTTPSClientAuthHandler = None
UnixHTTPSConnection = None
if hasattr(httplib, 'HTTPSConnection') and hasattr(urllib_request, 'HTTPSHandler'):
class CustomHTTPSConnection(httplib.HTTPSConnection):
def __init__(self, *args, **kwargs):
httplib.HTTPSConnection.__init__(self, *args, **kwargs)
self.context = None
if HAS_SSLCONTEXT:
self.context = self._context
elif HAS_URLLIB3_PYOPENSSLCONTEXT:
self.context = self._context = PyOpenSSLContext(PROTOCOL)
if self.context and self.cert_file:
self.context.load_cert_chain(self.cert_file, self.key_file)
def connect(self):
"Connect to a host on a given (SSL) port."
if hasattr(self, 'source_address'):
sock = socket.create_connection((self.host, self.port), self.timeout, self.source_address)
else:
sock = socket.create_connection((self.host, self.port), self.timeout)
server_hostname = self.host
# Note: self._tunnel_host is not available on py < 2.6 but this code
# isn't used on py < 2.6 (lack of create_connection)
if self._tunnel_host:
self.sock = sock
self._tunnel()
server_hostname = self._tunnel_host
if HAS_SSLCONTEXT or HAS_URLLIB3_PYOPENSSLCONTEXT:
self.sock = self.context.wrap_socket(sock, server_hostname=server_hostname)
elif HAS_URLLIB3_SSL_WRAP_SOCKET:
self.sock = ssl_wrap_socket(sock, keyfile=self.key_file, cert_reqs=ssl.CERT_NONE, certfile=self.cert_file, ssl_version=PROTOCOL,
server_hostname=server_hostname)
else:
self.sock = ssl.wrap_socket(sock, keyfile=self.key_file, certfile=self.cert_file, ssl_version=PROTOCOL)
class CustomHTTPSHandler(urllib_request.HTTPSHandler):
def https_open(self, req):
kwargs = {}
if HAS_SSLCONTEXT:
kwargs['context'] = self._context
return self.do_open(
functools.partial(
CustomHTTPSConnection,
**kwargs
),
req
)
https_request = AbstractHTTPHandler.do_request_
class HTTPSClientAuthHandler(urllib_request.HTTPSHandler):
'''Handles client authentication via cert/key
This is a fairly lightweight extension on HTTPSHandler, and can be used
in place of HTTPSHandler
'''
def __init__(self, client_cert=None, client_key=None, unix_socket=None, **kwargs):
urllib_request.HTTPSHandler.__init__(self, **kwargs)
self.client_cert = client_cert
self.client_key = client_key
self._unix_socket = unix_socket
def https_open(self, req):
return self.do_open(self._build_https_connection, req)
def _build_https_connection(self, host, **kwargs):
kwargs.update({
'cert_file': self.client_cert,
'key_file': self.client_key,
})
try:
kwargs['context'] = self._context
except AttributeError:
pass
if self._unix_socket:
return UnixHTTPSConnection(self._unix_socket)(host, **kwargs)
return httplib.HTTPSConnection(host, **kwargs)
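    # Illustrative sketch (paths are placeholders): client-certificate
    # authentication via the handler above:
    #
    #     handler = HTTPSClientAuthHandler(client_cert='/etc/pki/client.pem',
    #                                      client_key='/etc/pki/client.key')
    #     opener = urllib_request.build_opener(handler)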
@contextmanager
def unix_socket_patch_httpconnection_connect():
'''Monkey patch ``httplib.HTTPConnection.connect`` to be ``UnixHTTPConnection.connect``
so that when calling ``super(UnixHTTPSConnection, self).connect()`` we get the
correct behavior of creating self.sock for the unix socket
'''
        _connect = httplib.HTTPConnection.connect
        httplib.HTTPConnection.connect = UnixHTTPConnection.connect
        try:
            yield
        finally:
            # Restore the original connect even if the wrapped block raises,
            # so the monkey patch cannot leak past the context manager.
            httplib.HTTPConnection.connect = _connect
class UnixHTTPSConnection(httplib.HTTPSConnection):
def __init__(self, unix_socket):
self._unix_socket = unix_socket
def connect(self):
# This method exists simply to ensure we monkeypatch
# httplib.HTTPConnection.connect to call UnixHTTPConnection.connect
with unix_socket_patch_httpconnection_connect():
# Disable pylint check for the super() call. It complains about UnixHTTPSConnection
# being a NoneType because of the initial definition above, but it won't actually
# be a NoneType when this code runs
# pylint: disable=bad-super-call
super(UnixHTTPSConnection, self).connect()
def __call__(self, *args, **kwargs):
httplib.HTTPSConnection.__init__(self, *args, **kwargs)
return self
class UnixHTTPConnection(httplib.HTTPConnection):
'''Handles http requests to a unix socket file'''
def __init__(self, unix_socket):
self._unix_socket = unix_socket
def connect(self):
self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
self.sock.connect(self._unix_socket)
except OSError as e:
raise OSError('Invalid Socket File (%s): %s' % (self._unix_socket, e))
if self.timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
self.sock.settimeout(self.timeout)
def __call__(self, *args, **kwargs):
httplib.HTTPConnection.__init__(self, *args, **kwargs)
return self
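# Illustrative sketch of the two-phase construction used above (the socket
# path is hypothetical): do_open() treats the instance as a connection
# *class*, so __call__ performs the real httplib initialization:
#
#     factory = UnixHTTPConnection('/var/run/docker.sock')
#     conn = factory('localhost')  # now a fully initialized HTTPConnection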
class UnixHTTPHandler(urllib_request.HTTPHandler):
'''Handler for Unix urls'''
def __init__(self, unix_socket, **kwargs):
urllib_request.HTTPHandler.__init__(self, **kwargs)
self._unix_socket = unix_socket
def http_open(self, req):
return self.do_open(UnixHTTPConnection(self._unix_socket), req)
class ParseResultDottedDict(dict):
'''
A dict that acts similarly to the ParseResult named tuple from urllib
'''
def __init__(self, *args, **kwargs):
super(ParseResultDottedDict, self).__init__(*args, **kwargs)
self.__dict__ = self
def as_list(self):
'''
Generate a list from this dict, that looks like the ParseResult named tuple
'''
return [self.get(k, None) for k in ('scheme', 'netloc', 'path', 'params', 'query', 'fragment')]
def generic_urlparse(parts):
'''
Returns a dictionary of url parts as parsed by urlparse,
but accounts for the fact that older versions of that
library do not support named attributes (ie. .netloc)
'''
generic_parts = ParseResultDottedDict()
if hasattr(parts, 'netloc'):
# urlparse is newer, just read the fields straight
# from the parts object
generic_parts['scheme'] = parts.scheme
generic_parts['netloc'] = parts.netloc
generic_parts['path'] = parts.path
generic_parts['params'] = parts.params
generic_parts['query'] = parts.query
generic_parts['fragment'] = parts.fragment
generic_parts['username'] = parts.username
generic_parts['password'] = parts.password
hostname = parts.hostname
if hostname and hostname[0] == '[' and '[' in parts.netloc and ']' in parts.netloc:
# Py2.6 doesn't parse IPv6 addresses correctly
hostname = parts.netloc.split(']')[0][1:].lower()
generic_parts['hostname'] = hostname
try:
port = parts.port
except ValueError:
# Py2.6 doesn't parse IPv6 addresses correctly
netloc = parts.netloc.split('@')[-1].split(']')[-1]
if ':' in netloc:
port = netloc.split(':')[1]
if port:
port = int(port)
else:
port = None
generic_parts['port'] = port
else:
# we have to use indexes, and then parse out
# the other parts not supported by indexing
generic_parts['scheme'] = parts[0]
generic_parts['netloc'] = parts[1]
generic_parts['path'] = parts[2]
generic_parts['params'] = parts[3]
generic_parts['query'] = parts[4]
generic_parts['fragment'] = parts[5]
# get the username, password, etc.
try:
netloc_re = re.compile(r'^((?:\w)+(?::(?:\w)+)?@)?([A-Za-z0-9.-]+)(:\d+)?$')
match = netloc_re.match(parts[1])
auth = match.group(1)
hostname = match.group(2)
port = match.group(3)
if port:
# the capture group for the port will include the ':',
# so remove it and convert the port to an integer
port = int(port[1:])
if auth:
# the capture group above includes the @, so remove it
# and then split it up based on the first ':' found
auth = auth[:-1]
username, password = auth.split(':', 1)
else:
username = password = None
generic_parts['username'] = username
generic_parts['password'] = password
generic_parts['hostname'] = hostname
generic_parts['port'] = port
except Exception:
generic_parts['username'] = None
generic_parts['password'] = None
generic_parts['hostname'] = parts[1]
generic_parts['port'] = None
return generic_parts
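# Illustrative sketch (example URL only):
#
#     parts = generic_urlparse(urlparse('https://user:[email protected]:8443/x?y=1'))
#     parts.hostname   # 'host.example.com'
#     parts.port       # 8443
#     parts.as_list()  # ['https', 'user:[email protected]:8443', '/x', '', 'y=1', '']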
def extract_pem_certs(b_data):
for match in b_PEM_CERT_RE.finditer(b_data):
yield match.group(0)
def get_response_filename(response):
url = response.geturl()
path = urlparse(url)[2]
filename = os.path.basename(path.rstrip('/')) or None
if filename:
filename = unquote(filename)
return response.headers.get_param('filename', header='content-disposition') or filename
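# Illustrative sketch (example URL only): for a response downloaded from
# 'https://example.com/dl/pkg-1.0.tar.gz' with no Content-Disposition header,
# get_response_filename() returns 'pkg-1.0.tar.gz'.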
def parse_content_type(response):
if PY2:
get_type = response.headers.gettype
get_param = response.headers.getparam
else:
get_type = response.headers.get_content_type
get_param = response.headers.get_param
content_type = (get_type() or 'application/octet-stream').split(',')[0]
main_type, sub_type = content_type.split('/')
charset = (get_param('charset') or 'utf-8').split(',')[0]
return content_type, main_type, sub_type, charset
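# Illustrative sketch: for a response carrying the header
# "Content-Type: text/html; charset=ISO-8859-1", parse_content_type() returns
# ('text/html', 'text', 'html', 'ISO-8859-1'); absent headers fall back to
# 'application/octet-stream' and 'utf-8'.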
class RequestWithMethod(urllib_request.Request):
'''
Workaround for using DELETE/PUT/etc with urllib2
Originally contained in library/net_infrastructure/dnsmadeeasy
'''
def __init__(self, url, method, data=None, headers=None, origin_req_host=None, unverifiable=True):
if headers is None:
headers = {}
self._method = method.upper()
urllib_request.Request.__init__(self, url, data, headers, origin_req_host, unverifiable)
def get_method(self):
if self._method:
return self._method
else:
return urllib_request.Request.get_method(self)
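# Illustrative sketch (example URL only):
#
#     req = RequestWithMethod('https://example.com/item/1', method='delete')
#     req.get_method()  # 'DELETE' (upper-cased in __init__)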
def RedirectHandlerFactory(follow_redirects=None, validate_certs=True, ca_path=None):
"""This is a class factory that closes over the value of
``follow_redirects`` so that the RedirectHandler class has access to
that value without having to use globals, and potentially cause problems
where ``open_url`` or ``fetch_url`` are used multiple times in a module.
"""
class RedirectHandler(urllib_request.HTTPRedirectHandler):
"""This is an implementation of a RedirectHandler to match the
functionality provided by httplib2. It will utilize the value of
``follow_redirects`` that is passed into ``RedirectHandlerFactory``
to determine how redirects should be handled in urllib2.
"""
def redirect_request(self, req, fp, code, msg, hdrs, newurl):
if not HAS_SSLCONTEXT:
handler = maybe_add_ssl_handler(newurl, validate_certs, ca_path=ca_path)
if handler:
urllib_request._opener.add_handler(handler)
# Preserve urllib2 compatibility
if follow_redirects == 'urllib2':
return urllib_request.HTTPRedirectHandler.redirect_request(self, req, fp, code, msg, hdrs, newurl)
# Handle disabled redirects
elif follow_redirects in ['no', 'none', False]:
raise urllib_error.HTTPError(newurl, code, msg, hdrs, fp)
method = req.get_method()
# Handle non-redirect HTTP status or invalid follow_redirects
if follow_redirects in ['all', 'yes', True]:
if code < 300 or code >= 400:
raise urllib_error.HTTPError(req.get_full_url(), code, msg, hdrs, fp)
elif follow_redirects == 'safe':
if code < 300 or code >= 400 or method not in ('GET', 'HEAD'):
raise urllib_error.HTTPError(req.get_full_url(), code, msg, hdrs, fp)
else:
raise urllib_error.HTTPError(req.get_full_url(), code, msg, hdrs, fp)
try:
# Python 2-3.3
data = req.get_data()
origin_req_host = req.get_origin_req_host()
except AttributeError:
# Python 3.4+
data = req.data
origin_req_host = req.origin_req_host
            # Be lenient with URIs containing a space
            newurl = newurl.replace(' ', '%20')
            # Support redirects with payload and original headers
if code in (307, 308):
# Preserve payload and headers
headers = req.headers
else:
# Do not preserve payload and filter headers
data = None
headers = dict((k, v) for k, v in req.headers.items()
if k.lower() not in ("content-length", "content-type", "transfer-encoding"))
# http://tools.ietf.org/html/rfc7231#section-6.4.4
if code == 303 and method != 'HEAD':
method = 'GET'
# Do what the browsers do, despite standards...
# First, turn 302s into GETs.
if code == 302 and method != 'HEAD':
method = 'GET'
# Second, if a POST is responded to with a 301, turn it into a GET.
if code == 301 and method == 'POST':
method = 'GET'
return RequestWithMethod(newurl,
method=method,
headers=headers,
data=data,
origin_req_host=origin_req_host,
unverifiable=True,
)
return RedirectHandler
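For illustration, a minimal sketch of how the factory is wired into an opener, mirroring what ``Request.open`` does further below (the URL is an assumption):

```python
import urllib.request

# The returned class closes over follow_redirects/validate_certs, so no
# globals are needed even when several openers coexist in one module run.
handler_cls = RedirectHandlerFactory(follow_redirects='safe', validate_certs=True)
opener = urllib.request.build_opener(handler_cls)
# resp = opener.open('https://example.com/')  # follows only GET/HEAD redirects
```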
def build_ssl_validation_error(hostname, port, paths, exc=None):
'''Intelligently build out the SSLValidationError based on what support
you have installed
'''
msg = [
('Failed to validate the SSL certificate for %s:%s.'
' Make sure your managed systems have a valid CA'
' certificate installed.')
]
if not HAS_SSLCONTEXT:
msg.append('If the website serving the url uses SNI you need'
' python >= 2.7.9 on your managed machine')
msg.append(' (the python executable used (%s) is version: %s)' %
(sys.executable, ''.join(sys.version.splitlines())))
if not HAS_URLLIB3_PYOPENSSLCONTEXT and not HAS_URLLIB3_SSL_WRAP_SOCKET:
msg.append('or you can install the `urllib3`, `pyOpenSSL`,'
' `ndg-httpsclient`, and `pyasn1` python modules')
msg.append('to perform SNI verification in python >= 2.6.')
msg.append('You can use validate_certs=False if you do'
' not need to confirm the identity of the server but this is'
' unsafe and not recommended.'
' Paths checked for this platform: %s.')
if exc:
msg.append('The exception msg was: %s.' % to_native(exc))
raise SSLValidationError(' '.join(msg) % (hostname, port, ", ".join(paths)))
def atexit_remove_file(filename):
if os.path.exists(filename):
try:
os.unlink(filename)
except Exception:
# just ignore if we cannot delete, things should be ok
pass
class SSLValidationHandler(urllib_request.BaseHandler):
'''
A custom handler class for SSL validation.
Based on:
http://stackoverflow.com/questions/1087227/validate-ssl-certificates-with-python
http://techknack.net/python-urllib2-handlers/
'''
CONNECT_COMMAND = "CONNECT %s:%s HTTP/1.0\r\n"
def __init__(self, hostname, port, ca_path=None):
self.hostname = hostname
self.port = port
self.ca_path = ca_path
def get_ca_certs(self):
# tries to find a valid CA cert in one of the
# standard locations for the current distribution
ca_certs = []
cadata = bytearray()
paths_checked = []
if self.ca_path:
paths_checked = [self.ca_path]
with open(to_bytes(self.ca_path, errors='surrogate_or_strict'), 'rb') as f:
if HAS_SSLCONTEXT:
for b_pem in extract_pem_certs(f.read()):
cadata.extend(
ssl.PEM_cert_to_DER_cert(
to_native(b_pem, errors='surrogate_or_strict')
)
)
return self.ca_path, cadata, paths_checked
if not HAS_SSLCONTEXT:
paths_checked.append('/etc/ssl/certs')
system = to_text(platform.system(), errors='surrogate_or_strict')
# build a list of paths to check for .crt/.pem files
# based on the platform type
if system == u'Linux':
paths_checked.append('/etc/pki/ca-trust/extracted/pem')
paths_checked.append('/etc/pki/tls/certs')
paths_checked.append('/usr/share/ca-certificates/cacert.org')
elif system == u'FreeBSD':
paths_checked.append('/usr/local/share/certs')
elif system == u'OpenBSD':
paths_checked.append('/etc/ssl')
elif system == u'NetBSD':
paths_checked.append('/etc/openssl/certs')
elif system == u'SunOS':
paths_checked.append('/opt/local/etc/openssl/certs')
# fall back to a user-deployed cert in a standard
# location if the OS platform one is not available
paths_checked.append('/etc/ansible')
tmp_path = None
if not HAS_SSLCONTEXT:
tmp_fd, tmp_path = tempfile.mkstemp()
atexit.register(atexit_remove_file, tmp_path)
# Write the dummy ca cert if we are running on macOS
if system == u'Darwin':
if HAS_SSLCONTEXT:
cadata.extend(
ssl.PEM_cert_to_DER_cert(
to_native(b_DUMMY_CA_CERT, errors='surrogate_or_strict')
)
)
else:
os.write(tmp_fd, b_DUMMY_CA_CERT)
# Default Homebrew path for OpenSSL certs
paths_checked.append('/usr/local/etc/openssl')
# for all of the paths, find any .crt or .pem files
# and compile them into single temp file for use
# in the ssl check to speed up the test
for path in paths_checked:
if os.path.exists(path) and os.path.isdir(path):
dir_contents = os.listdir(path)
for f in dir_contents:
full_path = os.path.join(path, f)
if os.path.isfile(full_path) and os.path.splitext(f)[1] in ('.crt', '.pem'):
try:
if full_path not in LOADED_VERIFY_LOCATIONS:
with open(full_path, 'rb') as cert_file:
b_cert = cert_file.read()
if HAS_SSLCONTEXT:
try:
for b_pem in extract_pem_certs(b_cert):
cadata.extend(
ssl.PEM_cert_to_DER_cert(
to_native(b_pem, errors='surrogate_or_strict')
)
)
except Exception:
continue
else:
os.write(tmp_fd, b_cert)
os.write(tmp_fd, b'\n')
except (OSError, IOError):
pass
if HAS_SSLCONTEXT:
default_verify_paths = ssl.get_default_verify_paths()
paths_checked[:0] = [default_verify_paths.capath]
else:
os.close(tmp_fd)
return (tmp_path, cadata, paths_checked)
def validate_proxy_response(self, response, valid_codes=None):
'''
make sure we get back a valid code from the proxy
'''
valid_codes = [200] if valid_codes is None else valid_codes
try:
(http_version, resp_code, msg) = re.match(br'(HTTP/\d\.\d) (\d\d\d) (.*)', response).groups()
if int(resp_code) not in valid_codes:
raise Exception
except Exception:
raise ProxyError('Connection to proxy failed')
def detect_no_proxy(self, url):
'''
Detect if the 'no_proxy' environment variable is set and honor those locations.
'''
env_no_proxy = os.environ.get('no_proxy')
if env_no_proxy:
env_no_proxy = env_no_proxy.split(',')
netloc = urlparse(url).netloc
for host in env_no_proxy:
if netloc.endswith(host) or netloc.split(':')[0].endswith(host):
# Our requested URL matches something in no_proxy, so don't
# use the proxy for this
return False
return True
def make_context(self, cafile, cadata):
cafile = self.ca_path or cafile
if self.ca_path:
cadata = None
else:
cadata = cadata or None
if HAS_SSLCONTEXT:
context = create_default_context(cafile=cafile)
elif HAS_URLLIB3_PYOPENSSLCONTEXT:
context = PyOpenSSLContext(PROTOCOL)
else:
raise NotImplementedError('Host libraries are too old to support creating an sslcontext')
if cafile or cadata:
context.load_verify_locations(cafile=cafile, cadata=cadata)
return context
def http_request(self, req):
tmp_ca_cert_path, cadata, paths_checked = self.get_ca_certs()
# Detect if 'no_proxy' environment variable is set and if our URL is included
use_proxy = self.detect_no_proxy(req.get_full_url())
https_proxy = os.environ.get('https_proxy')
context = None
try:
context = self.make_context(tmp_ca_cert_path, cadata)
except NotImplementedError:
# We'll make do with no context below
pass
try:
if use_proxy and https_proxy:
proxy_parts = generic_urlparse(urlparse(https_proxy))
port = proxy_parts.get('port') or 443
proxy_hostname = proxy_parts.get('hostname', None)
if proxy_hostname is None or proxy_parts.get('scheme') == '':
raise ProxyError("Failed to parse https_proxy environment variable."
" Please make sure you export https proxy as 'https_proxy=<SCHEME>://<IP_ADDRESS>:<PORT>'")
s = socket.create_connection((proxy_hostname, port))
if proxy_parts.get('scheme') == 'http':
s.sendall(to_bytes(self.CONNECT_COMMAND % (self.hostname, self.port), errors='surrogate_or_strict'))
if proxy_parts.get('username'):
credentials = "%s:%s" % (proxy_parts.get('username', ''), proxy_parts.get('password', ''))
s.sendall(b'Proxy-Authorization: Basic %s\r\n' % base64.b64encode(to_bytes(credentials, errors='surrogate_or_strict')).strip())
s.sendall(b'\r\n')
connect_result = b""
while connect_result.find(b"\r\n\r\n") <= 0:
connect_result += s.recv(4096)
# 128 kilobytes of headers should be enough for everyone.
if len(connect_result) > 131072:
raise ProxyError('Proxy sent too verbose headers. Only 128KiB allowed.')
self.validate_proxy_response(connect_result)
if context:
ssl_s = context.wrap_socket(s, server_hostname=self.hostname)
elif HAS_URLLIB3_SSL_WRAP_SOCKET:
ssl_s = ssl_wrap_socket(s, ca_certs=tmp_ca_cert_path, cert_reqs=ssl.CERT_REQUIRED, ssl_version=PROTOCOL, server_hostname=self.hostname)
else:
ssl_s = ssl.wrap_socket(s, ca_certs=tmp_ca_cert_path, cert_reqs=ssl.CERT_REQUIRED, ssl_version=PROTOCOL)
match_hostname(ssl_s.getpeercert(), self.hostname)
else:
raise ProxyError('Unsupported proxy scheme: %s. Currently ansible only supports HTTP proxies.' % proxy_parts.get('scheme'))
else:
s = socket.create_connection((self.hostname, self.port))
if context:
ssl_s = context.wrap_socket(s, server_hostname=self.hostname)
elif HAS_URLLIB3_SSL_WRAP_SOCKET:
ssl_s = ssl_wrap_socket(s, ca_certs=tmp_ca_cert_path, cert_reqs=ssl.CERT_REQUIRED, ssl_version=PROTOCOL, server_hostname=self.hostname)
else:
ssl_s = ssl.wrap_socket(s, ca_certs=tmp_ca_cert_path, cert_reqs=ssl.CERT_REQUIRED, ssl_version=PROTOCOL)
match_hostname(ssl_s.getpeercert(), self.hostname)
# close the ssl connection
# ssl_s.unwrap()
s.close()
except (ssl.SSLError, CertificateError) as e:
build_ssl_validation_error(self.hostname, self.port, paths_checked, e)
except socket.error as e:
raise ConnectionError('Failed to connect to %s at port %s: %s' % (self.hostname, self.port, to_native(e)))
return req
https_request = http_request
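A small usage sketch of the two helper methods above (hostname and proxy banner are made-up values):

```python
import os

handler = SSLValidationHandler('internal.example.com', 443)

# no_proxy matching: False means "skip the proxy for this URL".
os.environ['no_proxy'] = 'internal.example.com'
assert handler.detect_no_proxy('https://internal.example.com/api') is False

# Proxy CONNECT banner validation: any status outside valid_codes
# (default [200]) raises ProxyError.
handler.validate_proxy_response(b'HTTP/1.0 200 Connection established\r\n\r\n')
```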
def maybe_add_ssl_handler(url, validate_certs, ca_path=None):
parsed = generic_urlparse(urlparse(url))
if parsed.scheme == 'https' and validate_certs:
if not HAS_SSL:
raise NoSSLError('SSL validation is not available in your version of python. You can use validate_certs=False,'
' however this is unsafe and not recommended')
# create the SSL validation handler and
# add it to the list of handlers
return SSLValidationHandler(parsed.hostname, parsed.port or 443, ca_path=ca_path)
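Callers use it roughly like this (a sketch mirroring ``Request.open`` below; the URL is an assumption). The handler is only appended when no modern ``ssl.SSLContext`` is available, since the context path handles validation itself:

```python
handlers = []
ssl_handler = maybe_add_ssl_handler('https://example.com/', validate_certs=True)
if ssl_handler and not HAS_SSLCONTEXT:
    handlers.append(ssl_handler)  # pre-SSLContext fallback validation
```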
def getpeercert(response, binary_form=False):
""" Attempt to get the peer certificate of the response from urlopen. """
# The response from urllib2.open() is different across Python 2 and 3
if PY3:
socket = response.fp.raw._sock
else:
socket = response.fp._sock.fp._sock
try:
return socket.getpeercert(binary_form)
except AttributeError:
pass # Not HTTPS
def get_channel_binding_cert_hash(certificate_der):
""" Gets the channel binding app data for a TLS connection using the peer cert. """
if not HAS_CRYPTOGRAPHY:
return
# Logic documented in RFC 5929 section 4 https://tools.ietf.org/html/rfc5929#section-4
cert = x509.load_der_x509_certificate(certificate_der, default_backend())
hash_algorithm = None
try:
hash_algorithm = cert.signature_hash_algorithm
except UnsupportedAlgorithm:
pass
# If the signature hash algorithm is unknown/unsupported or md5/sha1 we must use SHA256.
if not hash_algorithm or hash_algorithm.name in ['md5', 'sha1']:
hash_algorithm = hashes.SHA256()
digest = hashes.Hash(hash_algorithm, default_backend())
digest.update(certificate_der)
return digest.finalize()
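A sketch tying this to ``getpeercert`` above; the endpoint is an assumption and the calls require a live HTTPS connection:

```python
response = open_url('https://example.com/')         # assumed reachable host
der_cert = getpeercert(response, binary_form=True)  # DER/binary form required
if der_cert:
    cbt = get_channel_binding_cert_hash(der_cert)   # digest bytes, or None
```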
def rfc2822_date_string(timetuple, zone='-0000'):
"""Accepts a timetuple and optional zone which defaults to ``-0000``
and returns a date string as specified by RFC 2822, e.g.:
Fri, 09 Nov 2001 01:08:47 -0000
Copied from email.utils.formatdate and modified for separate use
"""
return '%s, %02d %s %04d %02d:%02d:%02d %s' % (
['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun'][timetuple[6]],
timetuple[2],
['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'][timetuple[1] - 1],
timetuple[0], timetuple[3], timetuple[4], timetuple[5],
zone)
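For example (the epoch value corresponds to the docstring's sample date):

```python
import time

stamp = rfc2822_date_string(time.gmtime(1005268127), 'GMT')
# -> 'Fri, 09 Nov 2001 01:08:47 GMT'
```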
class Request:
def __init__(self, headers=None, use_proxy=True, force=False, timeout=10, validate_certs=True,
url_username=None, url_password=None, http_agent=None, force_basic_auth=False,
follow_redirects='urllib2', client_cert=None, client_key=None, cookies=None, unix_socket=None,
ca_path=None):
"""This class works somewhat similarly to the ``Session`` class of from requests
by defining a cookiejar that an be used across requests as well as cascaded defaults that
can apply to repeated requests
For documentation of params, see ``Request.open``
>>> from ansible.module_utils.urls import Request
>>> r = Request()
>>> r.open('GET', 'http://httpbin.org/cookies/set?k1=v1').read()
'{\n "cookies": {\n "k1": "v1"\n }\n}\n'
>>> r = Request(url_username='user', url_password='passwd')
>>> r.open('GET', 'http://httpbin.org/basic-auth/user/passwd').read()
'{\n "authenticated": true, \n "user": "user"\n}\n'
>>> r = Request(headers=dict(foo='bar'))
>>> r.open('GET', 'http://httpbin.org/get', headers=dict(baz='qux')).read()
"""
self.headers = headers or {}
if not isinstance(self.headers, dict):
raise ValueError("headers must be a dict: %r" % self.headers)
self.use_proxy = use_proxy
self.force = force
self.timeout = timeout
self.validate_certs = validate_certs
self.url_username = url_username
self.url_password = url_password
self.http_agent = http_agent
self.force_basic_auth = force_basic_auth
self.follow_redirects = follow_redirects
self.client_cert = client_cert
self.client_key = client_key
self.unix_socket = unix_socket
self.ca_path = ca_path
if isinstance(cookies, cookiejar.CookieJar):
self.cookies = cookies
else:
self.cookies = cookiejar.CookieJar()
def _fallback(self, value, fallback):
if value is None:
return fallback
return value
def open(self, method, url, data=None, headers=None, use_proxy=None,
force=None, last_mod_time=None, timeout=None, validate_certs=None,
url_username=None, url_password=None, http_agent=None,
force_basic_auth=None, follow_redirects=None,
client_cert=None, client_key=None, cookies=None, use_gssapi=False,
unix_socket=None, ca_path=None, unredirected_headers=None):
"""
Sends a request via HTTP(S) or FTP using urllib2 (Python2) or urllib (Python3)
Does not require the module environment
Returns :class:`HTTPResponse` object.
:arg method: method for the request
:arg url: URL to request
:kwarg data: (optional) bytes, or file-like object to send
in the body of the request
:kwarg headers: (optional) Dictionary of HTTP Headers to send with the
request
:kwarg use_proxy: (optional) Boolean of whether or not to use proxy
:kwarg force: (optional) Boolean of whether or not to set `cache-control: no-cache` header
:kwarg last_mod_time: (optional) Datetime object to use when setting If-Modified-Since header
:kwarg timeout: (optional) How long to wait for the server to send
data before giving up, as a float
:kwarg validate_certs: (optional) Boolean that controls whether we verify
the server's TLS certificate
:kwarg url_username: (optional) String of the user to use when authenticating
:kwarg url_password: (optional) String of the password to use when authenticating
:kwarg http_agent: (optional) String of the User-Agent to use in the request
:kwarg force_basic_auth: (optional) Boolean determining if auth header should be sent in the initial request
:kwarg follow_redirects: (optional) String of urllib2, all/yes, safe, none to determine how redirects are
followed, see RedirectHandlerFactory for more information
:kwarg client_cert: (optional) PEM formatted certificate chain file to be used for SSL client authentication.
This file can also include the key as well, and if the key is included, client_key is not required
:kwarg client_key: (optional) PEM formatted file that contains your private key to be used for SSL client
authentication. If client_cert contains both the certificate and key, this option is not required
:kwarg cookies: (optional) CookieJar object to send with the
request
:kwarg use_gssapi: (optional) Use GSSAPI handler of requests.
:kwarg unix_socket: (optional) String of file system path to unix socket file to use when establishing
connection to the provided url
:kwarg ca_path: (optional) String of file system path to CA cert bundle to use
:kwarg unredirected_headers: (optional) A list of headers to not attach on a redirected request
:returns: HTTPResponse. Added in Ansible 2.9
"""
method = method.upper()
if headers is None:
headers = {}
elif not isinstance(headers, dict):
raise ValueError("headers must be a dict")
headers = dict(self.headers, **headers)
use_proxy = self._fallback(use_proxy, self.use_proxy)
force = self._fallback(force, self.force)
timeout = self._fallback(timeout, self.timeout)
validate_certs = self._fallback(validate_certs, self.validate_certs)
url_username = self._fallback(url_username, self.url_username)
url_password = self._fallback(url_password, self.url_password)
http_agent = self._fallback(http_agent, self.http_agent)
force_basic_auth = self._fallback(force_basic_auth, self.force_basic_auth)
follow_redirects = self._fallback(follow_redirects, self.follow_redirects)
client_cert = self._fallback(client_cert, self.client_cert)
client_key = self._fallback(client_key, self.client_key)
cookies = self._fallback(cookies, self.cookies)
unix_socket = self._fallback(unix_socket, self.unix_socket)
ca_path = self._fallback(ca_path, self.ca_path)
handlers = []
if unix_socket:
handlers.append(UnixHTTPHandler(unix_socket))
ssl_handler = maybe_add_ssl_handler(url, validate_certs, ca_path=ca_path)
if ssl_handler and not HAS_SSLCONTEXT:
handlers.append(ssl_handler)
parsed = generic_urlparse(urlparse(url))
if parsed.scheme != 'ftp':
username = url_username
password = url_password
if username:
netloc = parsed.netloc
elif '@' in parsed.netloc:
credentials, netloc = parsed.netloc.split('@', 1)
if ':' in credentials:
username, password = credentials.split(':', 1)
else:
username = credentials
password = ''
parsed_list = parsed.as_list()
parsed_list[1] = netloc
# reconstruct url without credentials
url = urlunparse(parsed_list)
if use_gssapi:
if HTTPGSSAPIAuthHandler:
handlers.append(HTTPGSSAPIAuthHandler(username, password))
else:
imp_err_msg = missing_required_lib('gssapi', reason='for use_gssapi=True',
url='https://pypi.org/project/gssapi/')
raise MissingModuleError(imp_err_msg, import_traceback=GSSAPI_IMP_ERR)
elif username and not force_basic_auth:
passman = urllib_request.HTTPPasswordMgrWithDefaultRealm()
# this creates a password manager
passman.add_password(None, netloc, username, password)
# because we have put None at the start it will always
# use this username/password combination for urls
# for which `netloc` is a super-url
authhandler = urllib_request.HTTPBasicAuthHandler(passman)
digest_authhandler = urllib_request.HTTPDigestAuthHandler(passman)
# create the AuthHandler
handlers.append(authhandler)
handlers.append(digest_authhandler)
elif username and force_basic_auth:
headers["Authorization"] = basic_auth_header(username, password)
else:
try:
rc = netrc.netrc(os.environ.get('NETRC'))
login = rc.authenticators(parsed.hostname)
except IOError:
login = None
if login:
username, _, password = login
if username and password:
headers["Authorization"] = basic_auth_header(username, password)
if not use_proxy:
proxyhandler = urllib_request.ProxyHandler({})
handlers.append(proxyhandler)
context = None
if HAS_SSLCONTEXT and not validate_certs:
# In 2.7.9, the default context validates certificates
context = SSLContext(ssl.PROTOCOL_SSLv23)
if ssl.OP_NO_SSLv2:
context.options |= ssl.OP_NO_SSLv2
context.options |= ssl.OP_NO_SSLv3
context.verify_mode = ssl.CERT_NONE
context.check_hostname = False
handlers.append(HTTPSClientAuthHandler(client_cert=client_cert,
client_key=client_key,
context=context,
unix_socket=unix_socket))
elif client_cert or unix_socket:
handlers.append(HTTPSClientAuthHandler(client_cert=client_cert,
client_key=client_key,
unix_socket=unix_socket))
if ssl_handler and HAS_SSLCONTEXT and validate_certs:
tmp_ca_path, cadata, paths_checked = ssl_handler.get_ca_certs()
try:
context = ssl_handler.make_context(tmp_ca_path, cadata)
except NotImplementedError:
pass
# pre-2.6 versions of python cannot use the custom https
# handler, since the socket class is lacking create_connection.
# Some python builds lack HTTPS support.
if hasattr(socket, 'create_connection') and CustomHTTPSHandler:
kwargs = {}
if HAS_SSLCONTEXT:
kwargs['context'] = context
handlers.append(CustomHTTPSHandler(**kwargs))
handlers.append(RedirectHandlerFactory(follow_redirects, validate_certs, ca_path=ca_path))
# add some nicer cookie handling
if cookies is not None:
handlers.append(urllib_request.HTTPCookieProcessor(cookies))
opener = urllib_request.build_opener(*handlers)
urllib_request.install_opener(opener)
data = to_bytes(data, nonstring='passthru')
request = RequestWithMethod(url, method, data)
# add the custom agent header, to help prevent issues
# with sites that block the default urllib agent string
if http_agent:
request.add_header('User-agent', http_agent)
# Cache control
# Either we directly force a cache refresh
if force:
request.add_header('cache-control', 'no-cache')
# or we do it if the original is more recent than our copy
elif last_mod_time:
tstamp = rfc2822_date_string(last_mod_time.timetuple(), 'GMT')
request.add_header('If-Modified-Since', tstamp)
# user defined headers now, which may override things we've set above
unredirected_headers = [h.lower() for h in (unredirected_headers or [])]
for header in headers:
if header.lower() in unredirected_headers:
request.add_unredirected_header(header, headers[header])
else:
request.add_header(header, headers[header])
return urllib_request.urlopen(request, None, timeout)
def get(self, url, **kwargs):
r"""Sends a GET request. Returns :class:`HTTPResponse` object.
:arg url: URL to request
:kwarg \*\*kwargs: Optional arguments that ``open`` takes.
:returns: HTTPResponse
"""
return self.open('GET', url, **kwargs)
def options(self, url, **kwargs):
r"""Sends a OPTIONS request. Returns :class:`HTTPResponse` object.
:arg url: URL to request
:kwarg \*\*kwargs: Optional arguments that ``open`` takes.
:returns: HTTPResponse
"""
return self.open('OPTIONS', url, **kwargs)
def head(self, url, **kwargs):
r"""Sends a HEAD request. Returns :class:`HTTPResponse` object.
:arg url: URL to request
:kwarg \*\*kwargs: Optional arguments that ``open`` takes.
:returns: HTTPResponse
"""
return self.open('HEAD', url, **kwargs)
def post(self, url, data=None, **kwargs):
r"""Sends a POST request. Returns :class:`HTTPResponse` object.
:arg url: URL to request.
:kwarg data: (optional) bytes, or file-like object to send in the body of the request.
:kwarg \*\*kwargs: Optional arguments that ``open`` takes.
:returns: HTTPResponse
"""
return self.open('POST', url, data=data, **kwargs)
def put(self, url, data=None, **kwargs):
r"""Sends a PUT request. Returns :class:`HTTPResponse` object.
:arg url: URL to request.
:kwarg data: (optional) bytes, or file-like object to send in the body of the request.
:kwarg \*\*kwargs: Optional arguments that ``open`` takes.
:returns: HTTPResponse
"""
return self.open('PUT', url, data=data, **kwargs)
def patch(self, url, data=None, **kwargs):
r"""Sends a PATCH request. Returns :class:`HTTPResponse` object.
:arg url: URL to request.
:kwarg data: (optional) bytes, or file-like object to send in the body of the request.
:kwarg \*\*kwargs: Optional arguments that ``open`` takes.
:returns: HTTPResponse
"""
return self.open('PATCH', url, data=data, **kwargs)
def delete(self, url, **kwargs):
r"""Sends a DELETE request. Returns :class:`HTTPResponse` object.
:arg url: URL to request
:kwarg \*\*kwargs: Optional arguments that ``open`` takes.
:returns: HTTPResponse
"""
return self.open('DELETE', url, **kwargs)
def open_url(url, data=None, headers=None, method=None, use_proxy=True,
force=False, last_mod_time=None, timeout=10, validate_certs=True,
url_username=None, url_password=None, http_agent=None,
force_basic_auth=False, follow_redirects='urllib2',
client_cert=None, client_key=None, cookies=None,
use_gssapi=False, unix_socket=None, ca_path=None,
unredirected_headers=None):
'''
Sends a request via HTTP(S) or FTP using urllib2 (Python2) or urllib (Python3)
Does not require the module environment
'''
method = method or ('POST' if data else 'GET')
return Request().open(method, url, data=data, headers=headers, use_proxy=use_proxy,
force=force, last_mod_time=last_mod_time, timeout=timeout, validate_certs=validate_certs,
url_username=url_username, url_password=url_password, http_agent=http_agent,
force_basic_auth=force_basic_auth, follow_redirects=follow_redirects,
client_cert=client_cert, client_key=client_key, cookies=cookies,
use_gssapi=use_gssapi, unix_socket=unix_socket, ca_path=ca_path,
unredirected_headers=unredirected_headers)
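A short usage sketch (httpbin.org is just an assumed test endpoint, echoing the ``Request`` docstring above):

```python
r = open_url('https://httpbin.org/get', headers={'Accept': 'application/json'})
print(r.code, r.read())
```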
def prepare_multipart(fields):
"""Takes a mapping, and prepares a multipart/form-data body
:arg fields: Mapping
:returns: tuple of (content_type, body) where ``content_type`` is
the ``multipart/form-data`` ``Content-Type`` header including
``boundary`` and ``body`` is the prepared bytestring body
Payload content from a file will be base64 encoded and will include
the appropriate ``Content-Transfer-Encoding`` and ``Content-Type``
headers.
Example:
{
"file1": {
"filename": "/bin/true",
"mime_type": "application/octet-stream"
},
"file2": {
"content": "text based file content",
"filename": "fake.txt",
"mime_type": "text/plain",
},
"text_form_field": "value"
}
"""
if not isinstance(fields, Mapping):
raise TypeError(
'Mapping is required, cannot be type %s' % fields.__class__.__name__
)
m = email.mime.multipart.MIMEMultipart('form-data')
for field, value in sorted(fields.items()):
if isinstance(value, string_types):
main_type = 'text'
sub_type = 'plain'
content = value
filename = None
elif isinstance(value, Mapping):
filename = value.get('filename')
content = value.get('content')
if not any((filename, content)):
raise ValueError('at least one of filename or content must be provided')
mime = value.get('mime_type')
if not mime:
try:
mime = mimetypes.guess_type(filename or '', strict=False)[0] or 'application/octet-stream'
except Exception:
mime = 'application/octet-stream'
main_type, sep, sub_type = mime.partition('/')
else:
raise TypeError(
'value must be a string, or mapping, cannot be type %s' % value.__class__.__name__
)
if not content and filename:
with open(to_bytes(filename, errors='surrogate_or_strict'), 'rb') as f:
part = email.mime.application.MIMEApplication(f.read())
del part['Content-Type']
part.add_header('Content-Type', '%s/%s' % (main_type, sub_type))
else:
part = email.mime.nonmultipart.MIMENonMultipart(main_type, sub_type)
part.set_payload(to_bytes(content))
part.add_header('Content-Disposition', 'form-data')
del part['MIME-Version']
part.set_param(
'name',
field,
header='Content-Disposition'
)
if filename:
part.set_param(
'filename',
to_native(os.path.basename(filename)),
header='Content-Disposition'
)
m.attach(part)
if PY3:
# Ensure headers are not split over multiple lines
# The HTTP policy also uses CRLF by default
b_data = m.as_bytes(policy=email.policy.HTTP)
else:
# Py2
# We cannot just call ``as_string`` since it provides no way
# to specify ``maxheaderlen``
fp = cStringIO() # cStringIO seems to be required here
# Ensure headers are not split over multiple lines
g = email.generator.Generator(fp, maxheaderlen=0)
g.flatten(m)
# ``fix_eols`` switches from ``\n`` to ``\r\n``
b_data = email.utils.fix_eols(fp.getvalue())
del m
headers, sep, b_content = b_data.partition(b'\r\n\r\n')
del b_data
if PY3:
parser = email.parser.BytesHeaderParser().parsebytes
else:
# Py2
parser = email.parser.HeaderParser().parsestr
return (
parser(headers)['content-type'], # Message converts to native strings
b_content
)
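A usage sketch reusing the inline-content entry from the docstring mapping:

```python
content_type, body = prepare_multipart({
    'file2': {
        'content': 'text based file content',
        'filename': 'fake.txt',
        'mime_type': 'text/plain',
    },
    'text_form_field': 'value',
})
# content_type -> 'multipart/form-data; boundary="..."'; body is bytes
```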
#
# Module-related functions
#
def basic_auth_header(username, password):
"""Takes a username and password and returns a byte string suitable for
using as value of an Authorization header to do basic auth.
"""
return b"Basic %s" % base64.b64encode(to_bytes("%s:%s" % (username, password), errors='surrogate_or_strict'))
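For example, with the same credentials as the ``Request`` docstring:

```python
value = basic_auth_header('user', 'passwd')
# -> b'Basic dXNlcjpwYXNzd2Q='
```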
def url_argument_spec():
'''
Creates an argument spec that can be used with any module
that will be requesting content via urllib/urllib2
'''
return dict(
url=dict(type='str'),
force=dict(type='bool', default=False),
http_agent=dict(type='str', default='ansible-httpget'),
use_proxy=dict(type='bool', default=True),
validate_certs=dict(type='bool', default=True),
url_username=dict(type='str'),
url_password=dict(type='str', no_log=True),
force_basic_auth=dict(type='bool', default=False),
client_cert=dict(type='path'),
client_key=dict(type='path'),
use_gssapi=dict(type='bool', default=False),
)
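A hypothetical sketch of how a module extends this spec (``AnsibleModule`` comes from ``ansible.module_utils.basic``; the extra options are made up):

```python
from ansible.module_utils.basic import AnsibleModule

argument_spec = url_argument_spec()
argument_spec.update(
    url=dict(type='str', required=True),  # make url mandatory for this module
    dest=dict(type='path'),               # hypothetical module-specific option
)
module = AnsibleModule(argument_spec=argument_spec)
```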
def fetch_url(module, url, data=None, headers=None, method=None,
use_proxy=True, force=False, last_mod_time=None, timeout=10,
use_gssapi=False, unix_socket=None, ca_path=None, cookies=None, unredirected_headers=None):
"""Sends a request via HTTP(S) or FTP (needs the module as parameter)
:arg module: The AnsibleModule (used to get username, password etc.; see below).
:arg url: The url to use.
:kwarg data: The data to be sent (in case of POST/PUT).
:kwarg headers: A dict with the request headers.
:kwarg method: "POST", "PUT", etc.
:kwarg boolean use_proxy: Default: True
:kwarg boolean force: If True: Do not get a cached copy (Default: False)
:kwarg last_mod_time: Default: None
:kwarg int timeout: Default: 10
:kwarg boolean use_gssapi: Default: False
:kwarg unix_socket: (optional) String of file system path to unix socket file to use when establishing
connection to the provided url
:kwarg ca_path: (optional) String of file system path to CA cert bundle to use
:kwarg cookies: (optional) CookieJar object to send with the request
:kwarg unredirected_headers: (optional) A list of headers to not attach on a redirected request
:returns: A tuple of (**response**, **info**). Use ``response.read()`` to read the data.
The **info** contains the 'status' and other meta data. When an HTTPError (status >= 400)
occurs, ``info['body']`` contains the error response data::
Example::
data={...}
resp, info = fetch_url(module,
"http://example.com",
data=module.jsonify(data),
headers={'Content-type': 'application/json'},
method="POST")
status_code = info["status"]
body = resp.read()
if status_code >= 400 :
body = info['body']
"""
if not HAS_URLPARSE:
module.fail_json(msg='urlparse is not installed')
# ensure we use proper tempdir
old_tempdir = tempfile.tempdir
tempfile.tempdir = module.tmpdir
# Get validate_certs from the module params
validate_certs = module.params.get('validate_certs', True)
username = module.params.get('url_username', '')
password = module.params.get('url_password', '')
http_agent = module.params.get('http_agent', 'ansible-httpget')
force_basic_auth = module.params.get('force_basic_auth', '')
follow_redirects = module.params.get('follow_redirects', 'urllib2')
client_cert = module.params.get('client_cert')
client_key = module.params.get('client_key')
use_gssapi = module.params.get('use_gssapi', use_gssapi)
if not isinstance(cookies, cookiejar.CookieJar):
cookies = cookiejar.LWPCookieJar()
r = None
info = dict(url=url, status=-1)
try:
r = open_url(url, data=data, headers=headers, method=method,
use_proxy=use_proxy, force=force, last_mod_time=last_mod_time, timeout=timeout,
validate_certs=validate_certs, url_username=username,
url_password=password, http_agent=http_agent, force_basic_auth=force_basic_auth,
follow_redirects=follow_redirects, client_cert=client_cert,
client_key=client_key, cookies=cookies, use_gssapi=use_gssapi,
unix_socket=unix_socket, ca_path=ca_path, unredirected_headers=unredirected_headers)
# Lowercase keys, to conform to py2 behavior, so that py3 and py2 are predictable
info.update(dict((k.lower(), v) for k, v in r.info().items()))
# Don't be lossy, append header values for duplicate headers
# In Py2 there is nothing that needs to be done; py2 does this for us
if PY3:
temp_headers = {}
for name, value in r.headers.items():
# The same as above, lower case keys to match py2 behavior, and create more consistent results
name = name.lower()
if name in temp_headers:
temp_headers[name] = ', '.join((temp_headers[name], value))
else:
temp_headers[name] = value
info.update(temp_headers)
# parse the cookies into a nice dictionary
cookie_list = []
cookie_dict = dict()
# Python sorts cookies in order of most specific (ie. longest) path first. See ``CookieJar._cookie_attrs``
# Cookies with the same path are reversed from response order.
# This code makes no assumptions about that, and accepts the order given by python
for cookie in cookies:
cookie_dict[cookie.name] = cookie.value
cookie_list.append((cookie.name, cookie.value))
info['cookies_string'] = '; '.join('%s=%s' % c for c in cookie_list)
info['cookies'] = cookie_dict
# finally update the result with a message about the fetch
info.update(dict(msg="OK (%s bytes)" % r.headers.get('Content-Length', 'unknown'), url=r.geturl(), status=r.code))
except NoSSLError as e:
distribution = get_distribution()
if distribution is not None and distribution.lower() == 'redhat':
module.fail_json(msg='%s. You can also install python-ssl from EPEL' % to_native(e), **info)
else:
module.fail_json(msg='%s' % to_native(e), **info)
except (ConnectionError, ValueError) as e:
module.fail_json(msg=to_native(e), **info)
except MissingModuleError as e:
module.fail_json(msg=to_text(e), exception=e.import_traceback)
except urllib_error.HTTPError as e:
r = e
try:
body = e.read()
except AttributeError:
body = ''
else:
e.close()
# Try to add exception info to the output but don't fail if we can't
try:
# Lowercase keys, to conform to py2 behavior, so that py3 and py2 are predictable
info.update(dict((k.lower(), v) for k, v in e.info().items()))
except Exception:
pass
info.update({'msg': to_native(e), 'body': body, 'status': e.code})
except urllib_error.URLError as e:
code = int(getattr(e, 'code', -1))
info.update(dict(msg="Request failed: %s" % to_native(e), status=code))
except socket.error as e:
info.update(dict(msg="Connection failure: %s" % to_native(e), status=-1))
except httplib.BadStatusLine as e:
info.update(dict(msg="Connection failure: connection was closed before a valid response was received: %s" % to_native(e.line), status=-1))
except Exception as e:
info.update(dict(msg="An unknown error occurred: %s" % to_native(e), status=-1),
exception=traceback.format_exc())
finally:
tempfile.tempdir = old_tempdir
return r, info
def fetch_file(module, url, data=None, headers=None, method=None,
use_proxy=True, force=False, last_mod_time=None, timeout=10,
unredirected_headers=None):
'''Download and save a file via HTTP(S) or FTP (needs the module as parameter).
This is basically a wrapper around fetch_url().
:arg module: The AnsibleModule (used to get username, password etc.; see below).
:arg url: The url to use.
:kwarg data: The data to be sent (in case of POST/PUT).
:kwarg headers: A dict with the request headers.
:kwarg method: "POST", "PUT", etc.
:kwarg boolean use_proxy: Default: True
:kwarg boolean force: If True: Do not get a cached copy (Default: False)
:kwarg last_mod_time: Default: None
:kwarg int timeout: Default: 10
:kwarg unredirected_headers: (optional) A list of headers to not attach on a redirected request
:returns: A string, the path to the downloaded file.
'''
# download file
bufsize = 65536
file_name, file_ext = os.path.splitext(str(url.rsplit('/', 1)[1]))
fetch_temp_file = tempfile.NamedTemporaryFile(dir=module.tmpdir, prefix=file_name, suffix=file_ext, delete=False)
module.add_cleanup_file(fetch_temp_file.name)
try:
rsp, info = fetch_url(module, url, data, headers, method, use_proxy, force, last_mod_time, timeout,
unredirected_headers=unredirected_headers)
if not rsp:
module.fail_json(msg="Failure downloading %s, %s" % (url, info['msg']))
data = rsp.read(bufsize)
while data:
fetch_temp_file.write(data)
data = rsp.read(bufsize)
fetch_temp_file.close()
except Exception as e:
module.fail_json(msg="Failure downloading %s, %s" % (url, to_native(e)))
return fetch_temp_file.name
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,386 |
URI module does not respect or return a status code when auth fails with 401
|
### Summary
The `ansible.builtin.uri` module does not return the status code when the authentication fails. It does not fail gracefully.
Setting `status_code: [200, 401]` does not help.
### Issue Type
Bug Report
### Component Name
uri
### Ansible Version
```console
ansible [core 2.11.6]
config file = /home/mrmeszaros/Workspace/ansible.cfg
configured module search path = ['/home/mrmeszaros/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible
ansible collection location = /home/mrmeszaros/Workspace/.collections
executable location = /usr/local/bin/ansible
python version = 3.8.10 (default, Sep 28 2021, 16:10:42) [GCC 9.3.0]
jinja version = 3.0.2
libyaml = True
```
### Configuration
```console
ANSIBLE_NOCOWS(/home/mrmeszaros/Workspace/ansible.cfg) = True
COLLECTIONS_PATHS(/home/mrmeszaros/Workspace/ansible.cfg) = ['/home/mrmeszaros/Workspace/.collections']
DEFAULT_FILTER_PLUGIN_PATH(/home/mrmeszaros/Workspace/ansible.cfg) = ['/home/mrmeszaros/Workspace/filter_plugins']
DEFAULT_HOST_LIST(/home/mrmeszaros/Workspace/ansible.cfg) = ['/home/mrmeszaros/Workspace/inventories/vcenter']
DEFAULT_LOG_PATH(/home/mrmeszaros/Workspace/ansible.cfg) = /home/mrmeszaros/Workspace/.ansible.log
DEFAULT_ROLES_PATH(/home/mrmeszaros/Workspace/ansible.cfg) = ['/home/mrmeszaros/Workspace/.galaxy', '/home/mrmeszaros/Workspace/roles']
DEFAULT_STDOUT_CALLBACK(/home/mrmeszaros/Workspace/ansible.cfg) = yaml
HOST_KEY_CHECKING(/home/mrmeszaros/Workspace/ansible.cfg) = False
RETRY_FILES_ENABLED(/home/mrmeszaros/Workspace/ansible.cfg) = False
```
### OS / Environment
lsb_release -a:
```
Distributor ID: Ubuntu
Description: Ubuntu 20.04.3 LTS
Release: 20.04
Codename: focal
```
### Steps to Reproduce
Prerequisites: Having moloch/arkime (or really any other web app with a REST API with authentication) installed on a machine.
```yaml
- name: Check users
hosts: moloch
gather_facts: no
vars:
moloch_admin_pass: notit
tasks:
- name: List users
uri:
method: POST
url: http://localhost:8005/api/users
url_username: admin
url_password: "{{ moloch_admin_pass }}"
status_code:
- 200
- 401
changed_when: no
register: list_users_cmd
- debug: var=list_users_cmd
```
### Expected Results
I expected to see the http response with the headers and the `status: 401`, but instead it failed and did not even set the response headers or status.
Something like curl would do:
```
curl -X POST --digest --user admin:notit 172.19.90.1:8005/api/users -v
* Trying 172.19.90.1:8005...
* TCP_NODELAY set
* Connected to 172.19.90.1 (172.19.90.1) port 8005 (#0)
* Server auth using Digest with user 'admin'
> POST /api/users HTTP/1.1
> Host: 172.19.90.1:8005
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 401 Unauthorized
< Vary: X-HTTP-Method-Override
< X-Frame-Options: DENY
< X-XSS-Protection: 1; mode=block
< WWW-Authenticate: Digest realm="Moloch", nonce="VItPV2v3xTxcE3DOdbmMnF6t60Kxx3VM", qop="auth"
< Date: Mon, 29 Nov 2021 15:16:18 GMT
< Connection: keep-alive
< Keep-Alive: timeout=5
< Transfer-Encoding: chunked
<
* Ignoring the response-body
* Connection #0 to host 172.19.90.1 left intact
* Issue another request to this URL: 'http://172.19.90.1:8005/api/users'
* Found bundle for host 172.19.90.1: 0x563d9c519b50 [serially]
* Can not multiplex, even if we wanted to!
* Re-using existing connection! (#0) with host 172.19.90.1
* Connected to 172.19.90.1 (172.19.90.1) port 8005 (#0)
* Server auth using Digest with user 'admin'
> POST /api/users HTTP/1.1
> Host: 172.19.90.1:8005
> Authorization: Digest username="admin", realm="Moloch", nonce="VItPV2v3xTxcE3DOdbmMnF6t60Kxx3VM", uri="/api/users", cnonce="ZTllZjJmNDIyMzQ0Zjk3Yjc3ZjAyN2MzNGVkMGUzODU=", nc=00000001, qop=auth, response="03c569ff5b108921de0da31a768d56f5"
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 401 Unauthorized
< Vary: X-HTTP-Method-Override
< X-Frame-Options: DENY
< X-XSS-Protection: 1; mode=block
* Authentication problem. Ignoring this.
< WWW-Authenticate: Digest realm="Moloch", nonce="xNcghsGC2sX3w21sTDCFLK0cK9YStZ9M", qop="auth"
< Date: Mon, 29 Nov 2021 15:16:18 GMT
< Connection: keep-alive
< Keep-Alive: timeout=5
< Transfer-Encoding: chunked
<
* Connection #0 to host 172.19.90.1 left intact
Unauthorized⏎
```
### Actual Results
```console
TASK [List users] ***********************************************************************************************************************************************************************************************************
task path: /home/mrmeszaros/Workspace/playbooks/moloch.singleton.yml:316
<172.19.90.1> ESTABLISH SSH CONNECTION FOR USER: sysadm
<172.19.90.1> SSH: EXEC sshpass -d11 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="roles/linux-base/files/id_rsa"' -o 'User="sysadm"' -o ConnectTimeout=10 -o ControlPath=/home/mrmeszaros/.ansible/cp/f5d1d80b39 172.19.90.1 '/bin/sh -c '"'"'echo ~sysadm && sleep 0'"'"''
<172.19.90.1> (0, b'/home/sysadm\n', b'OpenSSH_8.2p1 Ubuntu-4ubuntu0.3, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/mrmeszaros/.ssh/config\r\ndebug1: /home/mrmeszaros/.ssh/config line 1: Applying options for 172.19.*\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 172.19.90.1 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 1186335\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<172.19.90.1> ESTABLISH SSH CONNECTION FOR USER: sysadm
<172.19.90.1> SSH: EXEC sshpass -d11 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="roles/linux-base/files/id_rsa"' -o 'User="sysadm"' -o ConnectTimeout=10 -o ControlPath=/home/mrmeszaros/.ansible/cp/f5d1d80b39 172.19.90.1 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /home/sysadm/.ansible/tmp `"&& mkdir "` echo /home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703 `" && echo ansible-tmp-1638198795.0287964-1186445-275398547896703="` echo /home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703 `" ) && sleep 0'"'"''
<172.19.90.1> (0, b'ansible-tmp-1638198795.0287964-1186445-275398547896703=/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703\n', b'OpenSSH_8.2p1 Ubuntu-4ubuntu0.3, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/mrmeszaros/.ssh/config\r\ndebug1: /home/mrmeszaros/.ssh/config line 1: Applying options for 172.19.*\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 172.19.90.1 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 1186335\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
Using module file /usr/local/lib/python3.8/dist-packages/ansible/modules/uri.py
<172.19.90.1> PUT /home/mrmeszaros/.ansible/tmp/ansible-local-1186405uxq7nnzk/tmpv42z4a9b TO /home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py
<172.19.90.1> SSH: EXEC sshpass -d11 sftp -o BatchMode=no -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="roles/linux-base/files/id_rsa"' -o 'User="sysadm"' -o ConnectTimeout=10 -o ControlPath=/home/mrmeszaros/.ansible/cp/f5d1d80b39 '[172.19.90.1]'
<172.19.90.1> (0, b'sftp> put /home/mrmeszaros/.ansible/tmp/ansible-local-1186405uxq7nnzk/tmpv42z4a9b /home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py\n', b'OpenSSH_8.2p1 Ubuntu-4ubuntu0.3, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/mrmeszaros/.ssh/config\r\ndebug1: /home/mrmeszaros/.ssh/config line 1: Applying options for 172.19.*\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 172.19.90.1 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 1186335\r\ndebug3: mux_client_request_session: session request sent\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug3: Sent message fd 3 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . -> /home/sysadm size 0\r\ndebug3: Looking up /home/mrmeszaros/.ansible/tmp/ansible-local-1186405uxq7nnzk/tmpv42z4a9b\r\ndebug3: Sent message fd 3 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 32768 bytes at 0\r\ndebug3: Sent message SSH2_FXP_WRITE I:5 O:32768 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:6 O:65536 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:7 O:98304 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:8 O:131072 S:30321\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 5 32768 bytes at 32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 6 32768 bytes at 65536\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 7 32768 bytes at 98304\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 8 30321 bytes at 131072\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<172.19.90.1> ESTABLISH SSH CONNECTION FOR USER: sysadm
<172.19.90.1> SSH: EXEC sshpass -d11 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="roles/linux-base/files/id_rsa"' -o 'User="sysadm"' -o ConnectTimeout=10 -o ControlPath=/home/mrmeszaros/.ansible/cp/f5d1d80b39 172.19.90.1 '/bin/sh -c '"'"'chmod u+x /home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/ /home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py && sleep 0'"'"''
<172.19.90.1> (0, b'', b'OpenSSH_8.2p1 Ubuntu-4ubuntu0.3, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/mrmeszaros/.ssh/config\r\ndebug1: /home/mrmeszaros/.ssh/config line 1: Applying options for 172.19.*\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 172.19.90.1 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 1186335\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<172.19.90.1> ESTABLISH SSH CONNECTION FOR USER: sysadm
<172.19.90.1> SSH: EXEC sshpass -d11 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="roles/linux-base/files/id_rsa"' -o 'User="sysadm"' -o ConnectTimeout=10 -o ControlPath=/home/mrmeszaros/.ansible/cp/f5d1d80b39 -tt 172.19.90.1 '/bin/sh -c '"'"'/usr/bin/python3 /home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py && sleep 0'"'"''
<172.19.90.1> (1, b'Traceback (most recent call last):\r\n File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1755, in fetch_url\r\n File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1535, in open_url\r\n File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1446, in open\r\n File "/usr/lib/python3.8/urllib/request.py", line 222, in urlopen\r\n return opener.open(url, data, timeout)\r\n File "/usr/lib/python3.8/urllib/request.py", line 531, in open\r\n response = meth(req, response)\r\n File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response\r\n response = self.parent.error(\r\n File "/usr/lib/python3.8/urllib/request.py", line 563, in error\r\n result = self._call_chain(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain\r\n result = func(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401\r\n retry = self.http_error_auth_reqed(\'www-authenticate\',\r\n File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed\r\n return self.retry_http_digest_auth(req, authreq)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth\r\n resp = self.parent.open(req, timeout=req.timeout)\r\n File "/usr/lib/python3.8/urllib/request.py", line 531, in open\r\n response = meth(req, response)\r\n File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response\r\n response = self.parent.error(\r\n File "/usr/lib/python3.8/urllib/request.py", line 563, in error\r\n result = self._call_chain(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain\r\n result = func(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401\r\n retry = self.http_error_auth_reqed(\'www-authenticate\',\r\n File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed\r\n return self.retry_http_digest_auth(req, authreq)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth\r\n resp = self.parent.open(req, timeout=req.timeout)\r\n File "/usr/lib/python3.8/urllib/request.py", line 531, in open\r\n response = meth(req, response)\r\n File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response\r\n response = self.parent.error(\r\n File "/usr/lib/python3.8/urllib/request.py", line 563, in error\r\n result = self._call_chain(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain\r\n result = func(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401\r\n retry = self.http_error_auth_reqed(\'www-authenticate\',\r\n File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed\r\n return self.retry_http_digest_auth(req, authreq)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth\r\n resp = self.parent.open(req, timeout=req.timeout)\r\n File "/usr/lib/python3.8/urllib/request.py", line 531, in open\r\n response = meth(req, response)\r\n File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response\r\n response = self.parent.error(\r\n File "/usr/lib/python3.8/urllib/request.py", line 563, in error\r\n result = self._call_chain(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain\r\n result = func(*args)\r\n File 
"/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401\r\n retry = self.http_error_auth_reqed(\'www-authenticate\',\r\n File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed\r\n return self.retry_http_digest_auth(req, authreq)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth\r\n resp = self.parent.open(req, timeout=req.timeout)\r\n File "/usr/lib/python3.8/urllib/request.py", line 531, in open\r\n response = meth(req, response)\r\n File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response\r\n response = self.parent.error(\r\n File "/usr/lib/python3.8/urllib/request.py", line 563, in error\r\n result = self._call_chain(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain\r\n result = func(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401\r\n retry = self.http_error_auth_reqed(\'www-authenticate\',\r\n File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed\r\n return self.retry_http_digest_auth(req, authreq)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth\r\n resp = self.parent.open(req, timeout=req.timeout)\r\n File "/usr/lib/python3.8/urllib/request.py", line 531, in open\r\n response = meth(req, response)\r\n File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response\r\n response = self.parent.error(\r\n File "/usr/lib/python3.8/urllib/request.py", line 563, in error\r\n result = self._call_chain(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain\r\n result = func(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401\r\n retry = self.http_error_auth_reqed(\'www-authenticate\',\r\n File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed\r\n return self.retry_http_digest_auth(req, authreq)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth\r\n resp = self.parent.open(req, timeout=req.timeout)\r\n File "/usr/lib/python3.8/urllib/request.py", line 531, in open\r\n response = meth(req, response)\r\n File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response\r\n response = self.parent.error(\r\n File "/usr/lib/python3.8/urllib/request.py", line 563, in error\r\n result = self._call_chain(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain\r\n result = func(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401\r\n retry = self.http_error_auth_reqed(\'www-authenticate\',\r\n File "/usr/lib/python3.8/urllib/request.py", line 1117, in http_error_auth_reqed\r\n raise HTTPError(req.full_url, 401, "digest auth failed",\r\nurllib.error.HTTPError: HTTP Error 401: digest auth failed\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File "/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py", line 100, in <module>\r\n _ansiballz_main()\r\n File "/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py", line 92, in _ansiballz_main\r\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n File "/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py", line 40, in invoke_module\r\n runpy.run_module(mod_name=\'ansible.modules.uri\', init_globals=dict(_module_fqn=\'ansible.modules.uri\', 
_modlib_path=modlib_path),\r\n File "/usr/lib/python3.8/runpy.py", line 207, in run_module\r\n return _run_module_code(code, init_globals, run_name, mod_spec)\r\n File "/usr/lib/python3.8/runpy.py", line 97, in _run_module_code\r\n _run_code(code, mod_globals, init_globals,\r\n File "/usr/lib/python3.8/runpy.py", line 87, in _run_code\r\n exec(code, run_globals)\r\n File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/modules/uri.py", line 793, in <module>\r\n File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/modules/uri.py", line 710, in main\r\n File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/modules/uri.py", line 599, in uri\r\n File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1804, in fetch_url\r\n File "/usr/lib/python3.8/tempfile.py", line 606, in __getattr__\r\n file = self.__dict__[\'file\']\r\nKeyError: \'file\'\r\n', b'OpenSSH_8.2p1 Ubuntu-4ubuntu0.3, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/mrmeszaros/.ssh/config\r\ndebug1: /home/mrmeszaros/.ssh/config line 1: Applying options for 172.19.*\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 172.19.90.1 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 1186335\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 1\r\nShared connection to 172.19.90.1 closed.\r\n')
<172.19.90.1> Failed to connect to the host via ssh: OpenSSH_8.2p1 Ubuntu-4ubuntu0.3, OpenSSL 1.1.1f 31 Mar 2020
debug1: Reading configuration data /home/mrmeszaros/.ssh/config
debug1: /home/mrmeszaros/.ssh/config line 1: Applying options for 172.19.*
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files
debug1: /etc/ssh/ssh_config line 21: Applying options for *
debug2: resolve_canonicalize: hostname 172.19.90.1 is address
debug1: auto-mux: Trying existing master
debug2: fd 3 setting O_NONBLOCK
debug2: mux_client_hello_exchange: master version 4
debug3: mux_client_forwards: request forwardings: 0 local, 0 remote
debug3: mux_client_request_session: entering
debug3: mux_client_request_alive: entering
debug3: mux_client_request_alive: done pid = 1186335
debug3: mux_client_request_session: session request sent
debug3: mux_client_read_packet: read header failed: Broken pipe
debug2: Received exit status from master 1
Shared connection to 172.19.90.1 closed.
<172.19.90.1> ESTABLISH SSH CONNECTION FOR USER: sysadm
<172.19.90.1> SSH: EXEC sshpass -d11 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="roles/linux-base/files/id_rsa"' -o 'User="sysadm"' -o ConnectTimeout=10 -o ControlPath=/home/mrmeszaros/.ansible/cp/f5d1d80b39 172.19.90.1 '/bin/sh -c '"'"'rm -f -r /home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/ > /dev/null 2>&1 && sleep 0'"'"''
<172.19.90.1> (0, b'', b'OpenSSH_8.2p1 Ubuntu-4ubuntu0.3, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/mrmeszaros/.ssh/config\r\ndebug1: /home/mrmeszaros/.ssh/config line 1: Applying options for 172.19.*\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 172.19.90.1 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 1186335\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1755, in fetch_url
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1535, in open_url
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1446, in open
File "/usr/lib/python3.8/urllib/request.py", line 222, in urlopen
return opener.open(url, data, timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1117, in http_error_auth_reqed
raise HTTPError(req.full_url, 401, "digest auth failed",
urllib.error.HTTPError: HTTP Error 401: digest auth failed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py", line 100, in <module>
_ansiballz_main()
File "/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py", line 92, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py", line 40, in invoke_module
runpy.run_module(mod_name='ansible.modules.uri', init_globals=dict(_module_fqn='ansible.modules.uri', _modlib_path=modlib_path),
File "/usr/lib/python3.8/runpy.py", line 207, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "/usr/lib/python3.8/runpy.py", line 97, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/modules/uri.py", line 793, in <module>
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/modules/uri.py", line 710, in main
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/modules/uri.py", line 599, in uri
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1804, in fetch_url
File "/usr/lib/python3.8/tempfile.py", line 606, in __getattr__
file = self.__dict__['file']
KeyError: 'file'
fatal: [moloch]: FAILED! => changed=false
module_stderr: |-
OpenSSH_8.2p1 Ubuntu-4ubuntu0.3, OpenSSL 1.1.1f 31 Mar 2020
debug1: Reading configuration data /home/mrmeszaros/.ssh/config
debug1: /home/mrmeszaros/.ssh/config line 1: Applying options for 172.19.*
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files
debug1: /etc/ssh/ssh_config line 21: Applying options for *
debug2: resolve_canonicalize: hostname 172.19.90.1 is address
debug1: auto-mux: Trying existing master
debug2: fd 3 setting O_NONBLOCK
debug2: mux_client_hello_exchange: master version 4
debug3: mux_client_forwards: request forwardings: 0 local, 0 remote
debug3: mux_client_request_session: entering
debug3: mux_client_request_alive: entering
debug3: mux_client_request_alive: done pid = 1186335
debug3: mux_client_request_session: session request sent
debug3: mux_client_read_packet: read header failed: Broken pipe
debug2: Received exit status from master 1
Shared connection to 172.19.90.1 closed.
module_stdout: |-
Traceback (most recent call last):
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1755, in fetch_url
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1535, in open_url
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1446, in open
File "/usr/lib/python3.8/urllib/request.py", line 222, in urlopen
return opener.open(url, data, timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1117, in http_error_auth_reqed
raise HTTPError(req.full_url, 401, "digest auth failed",
urllib.error.HTTPError: HTTP Error 401: digest auth failed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py", line 100, in <module>
_ansiballz_main()
File "/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py", line 92, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py", line 40, in invoke_module
runpy.run_module(mod_name='ansible.modules.uri', init_globals=dict(_module_fqn='ansible.modules.uri', _modlib_path=modlib_path),
File "/usr/lib/python3.8/runpy.py", line 207, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "/usr/lib/python3.8/runpy.py", line 97, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/modules/uri.py", line 793, in <module>
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/modules/uri.py", line 710, in main
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/modules/uri.py", line 599, in uri
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1804, in fetch_url
File "/usr/lib/python3.8/tempfile.py", line 606, in __getattr__
file = self.__dict__['file']
KeyError: 'file'
msg: |-
MODULE FAILURE
See stdout/stderr for the exact error
rc: 1
PLAY RECAP ****************************************************************************************************************************************************************************************************************************
moloch : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
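
The tail of that traceback can be reproduced without Ansible at all. After five 401 responses, urllib's digest handler raises `HTTPError(url, 401, "digest auth failed", headers, None)` with `fp=None`; since `urllib.response.addbase` subclasses `tempfile._TemporaryFileWrapper`, any attribute that must be proxied to the missing file object (such as `read`) falls into tempfile's `__getattr__` and surfaces as the baffling `KeyError: 'file'`. A minimal sketch of just that mechanism, assuming CPython 3.8 stdlib internals (the URL is illustrative):

```python
# Minimal sketch (not a fix): reproduce the KeyError: 'file' seen above.
from urllib.error import HTTPError

# What urllib's digest handler raises after exhausting its five retries;
# fp=None means the tempfile-backed wrapper HTTPError inherits from is
# never initialised.
e = HTTPError('http://172.19.90.1:8005/api/users', 401,
              'digest auth failed', None, None)

try:
    e.read()  # roughly what fetch_url attempts while collecting the body
except KeyError as err:
    print('KeyError:', err)  # -> KeyError: 'file'
```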
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76386
|
https://github.com/ansible/ansible/pull/76421
|
82ff7efa9401ba0cc79b59e9d239492258e1791b
|
eca97a19a3d40d3ae1bc3e5e676edea159197157
| 2021-11-29T15:21:14Z |
python
| 2021-12-02T19:42:29Z |
lib/ansible/modules/uri.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2013, Romeo Theriault <romeot () hawaii.edu>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: uri
short_description: Interacts with webservices
description:
- Interacts with HTTP and HTTPS web services and supports Digest, Basic and WSSE
HTTP authentication mechanisms.
- For Windows targets, use the M(ansible.windows.win_uri) module instead.
version_added: "1.1"
options:
url:
description:
- HTTP or HTTPS URL in the form (http|https)://host.domain[:port]/path
type: str
required: true
dest:
description:
- A path to download the file to (if desired). If I(dest) is a
directory, the basename of the file on the remote server will be used.
type: path
url_username:
description:
- A username for the module to use for Digest, Basic or WSSE authentication.
type: str
aliases: [ user ]
url_password:
description:
- A password for the module to use for Digest, Basic or WSSE authentication.
type: str
aliases: [ password ]
body:
description:
- The body of the http request/response to the web service. If C(body_format) is set
to 'json' it will take an already formatted JSON string or convert a data structure
into JSON.
- If C(body_format) is set to 'form-urlencoded' it will convert a dictionary
or list of tuples into an 'application/x-www-form-urlencoded' string. (Added in v2.7)
- If C(body_format) is set to 'form-multipart' it will convert a dictionary
into 'multipart/form-multipart' body. (Added in v2.10)
type: raw
body_format:
description:
- The serialization format of the body. When set to C(json), C(form-multipart), or C(form-urlencoded), encodes
the body argument, if needed, and automatically sets the Content-Type header accordingly.
- As of C(2.3) it is possible to override the `Content-Type` header, when
set to C(json) or C(form-urlencoded) via the I(headers) option.
- The 'Content-Type' header cannot be overridden when using C(form-multipart)
- C(form-urlencoded) was added in v2.7.
- C(form-multipart) was added in v2.10.
type: str
choices: [ form-urlencoded, json, raw, form-multipart ]
default: raw
version_added: "2.0"
method:
description:
- The HTTP method of the request or response.
- In more recent versions we do not restrict the method at the module level anymore
but it still must be a valid method accepted by the service handling the request.
type: str
default: GET
return_content:
description:
- Whether or not to return the body of the response as a "content" key in
the dictionary result, no matter whether it succeeded or failed.
- Independently of this option, if the reported Content-type is "application/json", then the JSON is
always loaded into a key called C(json) in the dictionary results.
type: bool
default: no
force_basic_auth:
description:
- Force the sending of the Basic authentication header upon initial request.
- The library used by the uri module only sends authentication information when a webservice
responds to an initial request with a 401 status. Since some basic auth services do not properly
send a 401, logins will fail.
type: bool
default: no
follow_redirects:
description:
- Whether or not the URI module should follow redirects. C(all) will follow all redirects.
C(safe) will follow only "safe" redirects, where "safe" means that the client is only
doing a GET or HEAD on the URI to which it is being redirected. C(none) will not follow
any redirects. Note that C(yes) and C(no) choices are accepted for backwards compatibility,
where C(yes) is the equivalent of C(all) and C(no) is the equivalent of C(safe). C(yes) and C(no)
are deprecated and will be removed in some future version of Ansible.
type: str
choices: ['all', 'no', 'none', 'safe', 'urllib2', 'yes']
default: safe
creates:
description:
- A filename; if it already exists, this step will not be run.
type: path
removes:
description:
- A filename; if it does not exist, this step will not be run.
type: path
status_code:
description:
- A list of valid, numeric, HTTP status codes that signify success of the request.
type: list
elements: int
default: [ 200 ]
timeout:
description:
- The socket-level timeout in seconds.
type: int
default: 30
headers:
description:
- Add custom HTTP headers to a request in the format of a YAML hash. As
of C(2.3) supplying C(Content-Type) here will override the header
generated by supplying C(json) or C(form-urlencoded) for I(body_format).
type: dict
version_added: '2.1'
validate_certs:
description:
- If C(no), SSL certificates will not be validated.
- This should only be set to C(no) on personally controlled sites using self-signed certificates.
- Prior to 1.9.2 the code defaulted to C(no).
type: bool
default: yes
version_added: '1.9.2'
client_cert:
description:
- PEM formatted certificate chain file to be used for SSL client authentication.
- This file can also include the key; if the key is included, I(client_key) is not required
type: path
version_added: '2.4'
client_key:
description:
- PEM formatted file that contains your private key to be used for SSL client authentication.
- If I(client_cert) contains both the certificate and key, this option is not required.
type: path
version_added: '2.4'
ca_path:
description:
- PEM formatted file that contains a CA certificate to be used for validation
type: path
version_added: '2.11'
src:
description:
- Path to file to be submitted to the remote server.
- Cannot be used with I(body).
type: path
version_added: '2.7'
remote_src:
description:
- If C(no), the module will search for the C(src) on the controller node.
- If C(yes), the module will search for the C(src) on the managed (remote) node.
type: bool
default: no
version_added: '2.7'
force:
description:
- If C(yes) do not get a cached copy.
type: bool
default: no
use_proxy:
description:
- If C(no), it will not use a proxy, even if one is defined in an environment variable on the target hosts.
type: bool
default: yes
unix_socket:
description:
- Path to Unix domain socket to use for connection
type: path
version_added: '2.8'
http_agent:
description:
- Header to identify as, generally appears in web server logs.
type: str
default: ansible-httpget
unredirected_headers:
description:
- A list of header names that will not be sent on subsequent redirected requests. This list is case
insensitive. By default all headers will be sent on redirected requests. In some cases it may be beneficial to list
headers such as C(Authorization) here to avoid potential credential exposure.
default: []
type: list
elements: str
version_added: '2.12'
use_gssapi:
description:
- Use GSSAPI to perform the authentication, typically this is for Kerberos or Kerberos through Negotiate
authentication.
- Requires the Python library L(gssapi,https://github.com/pythongssapi/python-gssapi) to be installed.
- Credentials for GSSAPI can be specified with I(url_username)/I(url_password) or with the GSSAPI env var
C(KRB5CCNAME) that specifies a custom Kerberos credential cache.
- NTLM authentication is C(not) supported even if the GSSAPI mech for NTLM has been installed.
type: bool
default: no
version_added: '2.11'
extends_documentation_fragment:
- action_common_attributes
- files
attributes:
check_mode:
support: none
diff_mode:
support: none
platform:
platforms: posix
notes:
- The dependency on httplib2 was removed in Ansible 2.1.
- The module returns all the HTTP headers in lower-case.
- For Windows targets, use the M(ansible.windows.win_uri) module instead.
seealso:
- module: ansible.builtin.get_url
- module: ansible.windows.win_uri
author:
- Romeo Theriault (@romeotheriault)
'''
EXAMPLES = r'''
- name: Check that you can connect (GET) to a page and it returns a status 200
uri:
url: http://www.example.com
- name: Check that a page returns a status 200 and fail if the word AWESOME is not in the page contents
uri:
url: http://www.example.com
return_content: yes
register: this
failed_when: "'AWESOME' not in this.content"
- name: Create a JIRA issue
uri:
url: https://your.jira.example.com/rest/api/2/issue/
user: your_username
password: your_pass
method: POST
body: "{{ lookup('file','issue.json') }}"
force_basic_auth: yes
status_code: 201
body_format: json
- name: Login to a form based webpage, then use the returned cookie to access the app in later tasks
uri:
url: https://your.form.based.auth.example.com/index.php
method: POST
body_format: form-urlencoded
body:
name: your_username
password: your_password
enter: Sign in
status_code: 302
register: login
- name: Login to a form based webpage using a list of tuples
uri:
url: https://your.form.based.auth.example.com/index.php
method: POST
body_format: form-urlencoded
body:
- [ name, your_username ]
- [ password, your_password ]
- [ enter, Sign in ]
status_code: 302
register: login
- name: Upload a file via multipart/form-multipart
uri:
url: https://httpbin.org/post
method: POST
body_format: form-multipart
body:
file1:
filename: /bin/true
mime_type: application/octet-stream
file2:
content: text based file content
filename: fake.txt
mime_type: text/plain
text_form_field: value
- name: Connect to website using a previously stored cookie
uri:
url: https://your.form.based.auth.example.com/dashboard.php
method: GET
return_content: yes
headers:
Cookie: "{{ login.cookies_string }}"
- name: Queue build of a project in Jenkins
uri:
url: http://{{ jenkins.host }}/job/{{ jenkins.job }}/build?token={{ jenkins.token }}
user: "{{ jenkins.user }}"
password: "{{ jenkins.password }}"
method: GET
force_basic_auth: yes
status_code: 201
- name: POST from contents of local file
uri:
url: https://httpbin.org/post
method: POST
src: file.json
- name: POST from contents of remote file
uri:
url: https://httpbin.org/post
method: POST
src: /path/to/my/file.json
remote_src: yes
- name: Create workspaces in Log analytics Azure
uri:
url: https://www.mms.microsoft.com/Embedded/Api/ConfigDataSources/LogManagementData/Save
method: POST
body_format: json
status_code: [200, 202]
return_content: true
headers:
Content-Type: application/json
x-ms-client-workspace-path: /subscriptions/{{ sub_id }}/resourcegroups/{{ res_group }}/providers/microsoft.operationalinsights/workspaces/{{ w_spaces }}
x-ms-client-platform: ibiza
x-ms-client-auth-token: "{{ token_az }}"
body:
- name: Pause play until a URL is reachable from this host
uri:
url: "http://192.0.2.1/some/test"
follow_redirects: none
method: GET
register: _result
until: _result.status == 200
retries: 720 # 720 * 5 seconds = 1hour (60*60/5)
delay: 5 # Every 5 seconds
# There are issues in a supporting Python library, discussed in
# https://github.com/ansible/ansible/issues/52705, where a proxy is defined
# but you want to bypass proxy use on CIDR masks by using no_proxy
- name: Work around a python issue that doesn't support no_proxy envvar
uri:
follow_redirects: none
validate_certs: false
timeout: 5
url: "http://{{ ip_address }}:{{ port | default(80) }}"
register: uri_data
failed_when: false
changed_when: false
vars:
ip_address: 192.0.2.1
environment: |
{
{% for no_proxy in (lookup('env', 'no_proxy') | regex_replace('\s*,\s*', ' ') ).split() %}
{% if no_proxy | regex_search('\/') and
no_proxy | ipaddr('net') != '' and
no_proxy | ipaddr('net') != false and
ip_address | ipaddr(no_proxy) is not none and
ip_address | ipaddr(no_proxy) != false %}
'no_proxy': '{{ ip_address }}'
{% elif no_proxy | regex_search(':') != '' and
no_proxy | regex_search(':') != false and
no_proxy == ip_address + ':' + (port | default(80)) %}
'no_proxy': '{{ ip_address }}:{{ port | default(80) }}'
{% elif no_proxy | ipaddr('host') != '' and
no_proxy | ipaddr('host') != false and
no_proxy == ip_address %}
'no_proxy': '{{ ip_address }}'
{% elif no_proxy | regex_search('^(\*|)\.') != '' and
no_proxy | regex_search('^(\*|)\.') != false and
no_proxy | regex_replace('\*', '') in ip_address %}
'no_proxy': '{{ ip_address }}'
{% endif %}
{% endfor %}
}
'''
RETURN = r'''
# The return information includes all the HTTP headers in lower-case.
content:
description: The response body content.
returned: status not in status_code or return_content is true
type: str
sample: "{}"
cookies:
description: The cookie values placed in cookie jar.
returned: on success
type: dict
sample: {"SESSIONID": "[SESSIONID]"}
version_added: "2.4"
cookies_string:
description: The value for future request Cookie headers.
returned: on success
type: str
sample: "SESSIONID=[SESSIONID]"
version_added: "2.6"
elapsed:
description: The number of seconds that elapsed while performing the download.
returned: on success
type: int
sample: 23
msg:
description: The HTTP message from the request.
returned: always
type: str
sample: OK (unknown bytes)
path:
description: destination file/path
returned: dest is defined
type: str
sample: /path/to/file.txt
redirected:
description: Whether the request was redirected.
returned: on success
type: bool
sample: false
status:
description: The HTTP status code from the request.
returned: always
type: int
sample: 200
url:
description: The actual URL used for the request.
returned: always
type: str
sample: https://www.ansible.com/
'''
import datetime
import json
import os
import re
import shutil
import sys
import tempfile
from ansible.module_utils.basic import AnsibleModule, sanitize_keys
from ansible.module_utils.six import PY2, PY3, binary_type, iteritems, string_types
from ansible.module_utils.six.moves.urllib.parse import urlencode, urlsplit
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.common._collections_compat import Mapping, Sequence
from ansible.module_utils.urls import fetch_url, get_response_filename, parse_content_type, prepare_multipart, url_argument_spec
JSON_CANDIDATES = ('text', 'json', 'javascript')
# List of response key names we do not want sanitize_keys() to change.
NO_MODIFY_KEYS = frozenset(
('msg', 'exception', 'warnings', 'deprecations', 'failed', 'skipped',
'changed', 'rc', 'stdout', 'stderr', 'elapsed', 'path', 'location',
'content_type')
)
def format_message(err, resp):
msg = resp.pop('msg')
return err + (' %s' % msg if msg else '')
def write_file(module, dest, content, resp):
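    # Flow: stream the response into a temp file under the module tmpdir,
    # sanity-check it, then compare SHA1 checksums with any existing dest
    # and copy over it only when the content actually differs.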
    # stream the response content into a tempfile
fd, tmpsrc = tempfile.mkstemp(dir=module.tmpdir)
f = os.fdopen(fd, 'wb')
try:
if isinstance(content, binary_type):
f.write(content)
else:
shutil.copyfileobj(content, f)
except Exception as e:
os.remove(tmpsrc)
msg = format_message("Failed to create temporary content file: %s" % to_native(e), resp)
module.fail_json(msg=msg, **resp)
f.close()
checksum_src = None
checksum_dest = None
# raise an error if there is no tmpsrc file
if not os.path.exists(tmpsrc):
os.remove(tmpsrc)
msg = format_message("Source '%s' does not exist" % tmpsrc, resp)
module.fail_json(msg=msg, **resp)
if not os.access(tmpsrc, os.R_OK):
os.remove(tmpsrc)
msg = format_message("Source '%s' not readable" % tmpsrc, resp)
module.fail_json(msg=msg, **resp)
checksum_src = module.sha1(tmpsrc)
    # check whether the dest file already exists
if os.path.exists(dest):
        # raise an error if we lack write permission on dest
if not os.access(dest, os.W_OK):
os.remove(tmpsrc)
msg = format_message("Destination '%s' not writable" % dest, resp)
module.fail_json(msg=msg, **resp)
if not os.access(dest, os.R_OK):
os.remove(tmpsrc)
msg = format_message("Destination '%s' not readable" % dest, resp)
module.fail_json(msg=msg, **resp)
checksum_dest = module.sha1(dest)
else:
if not os.access(os.path.dirname(dest), os.W_OK):
os.remove(tmpsrc)
msg = format_message("Destination dir '%s' not writable" % os.path.dirname(dest), resp)
module.fail_json(msg=msg, **resp)
if checksum_src != checksum_dest:
try:
shutil.copyfile(tmpsrc, dest)
except Exception as e:
os.remove(tmpsrc)
msg = format_message("failed to copy %s to %s: %s" % (tmpsrc, dest, to_native(e)), resp)
module.fail_json(msg=msg, **resp)
os.remove(tmpsrc)
def absolute_location(url, location):
"""Attempts to create an absolute URL based on initial URL, and
next URL, specifically in the case of a ``Location`` header.
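    For example (illustrative values): given 'http://host/a/b' and
    location '/x' this returns 'http://host/x', while location 'x'
    yields 'http://host/a/x'.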
"""
if '://' in location:
return location
elif location.startswith('/'):
parts = urlsplit(url)
base = url.replace(parts[2], '')
return '%s%s' % (base, location)
elif not location.startswith('/'):
base = os.path.dirname(url)
return '%s/%s' % (base, location)
else:
return location
def kv_list(data):
''' Convert data into a list of key-value tuples '''
if data is None:
return None
if isinstance(data, Sequence):
return list(data)
if isinstance(data, Mapping):
return list(data.items())
raise TypeError('cannot form-urlencode body, expect list or dict')
def form_urlencoded(body):
''' Convert data into a form-urlencoded string '''
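    # e.g. {'name': 'admin', 'roles': ['a', 'b']} becomes
    # 'name=admin&roles=a&roles=b'; strings pass through untouched and
    # None values are dropped.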
if isinstance(body, string_types):
return body
if isinstance(body, (Mapping, Sequence)):
result = []
# Turn a list of lists into a list of tuples that urlencode accepts
for key, values in kv_list(body):
if isinstance(values, string_types) or not isinstance(values, (Mapping, Sequence)):
values = [values]
for value in values:
if value is not None:
result.append((to_text(key), to_text(value)))
return urlencode(result, doseq=True)
return body
def uri(module, url, dest, body, body_format, method, headers, socket_timeout, ca_path, unredirected_headers):
    # if dest is set and is a directory, let's check if we get redirected and
    # set the filename from that url
src = module.params['src']
if src:
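        # Stream the request body straight from the file on disk; the
        # Content-Length header is set by hand because fetch_url is handed
        # a file object rather than an in-memory byte string.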
try:
headers.update({
'Content-Length': os.stat(src).st_size
})
data = open(src, 'rb')
except OSError:
module.fail_json(msg='Unable to open source file %s' % src, elapsed=0)
else:
data = body
kwargs = {}
if dest is not None and os.path.isfile(dest):
# if destination file already exist, only download if file newer
kwargs['last_mod_time'] = datetime.datetime.utcfromtimestamp(os.path.getmtime(dest))
resp, info = fetch_url(module, url, data=data, headers=headers,
method=method, timeout=socket_timeout, unix_socket=module.params['unix_socket'],
ca_path=ca_path, unredirected_headers=unredirected_headers,
**kwargs)
if src:
# Try to close the open file handle
try:
data.close()
except Exception:
pass
return resp, info
def main():
argument_spec = url_argument_spec()
argument_spec.update(
dest=dict(type='path'),
url_username=dict(type='str', aliases=['user']),
url_password=dict(type='str', aliases=['password'], no_log=True),
body=dict(type='raw'),
body_format=dict(type='str', default='raw', choices=['form-urlencoded', 'json', 'raw', 'form-multipart']),
src=dict(type='path'),
method=dict(type='str', default='GET'),
return_content=dict(type='bool', default=False),
follow_redirects=dict(type='str', default='safe', choices=['all', 'no', 'none', 'safe', 'urllib2', 'yes']),
creates=dict(type='path'),
removes=dict(type='path'),
status_code=dict(type='list', elements='int', default=[200]),
timeout=dict(type='int', default=30),
headers=dict(type='dict', default={}),
unix_socket=dict(type='path'),
remote_src=dict(type='bool', default=False),
ca_path=dict(type='path', default=None),
unredirected_headers=dict(type='list', elements='str', default=[]),
)
module = AnsibleModule(
argument_spec=argument_spec,
add_file_common_args=True,
mutually_exclusive=[['body', 'src']],
)
url = module.params['url']
body = module.params['body']
body_format = module.params['body_format'].lower()
method = module.params['method'].upper()
dest = module.params['dest']
return_content = module.params['return_content']
creates = module.params['creates']
removes = module.params['removes']
status_code = [int(x) for x in list(module.params['status_code'])]
socket_timeout = module.params['timeout']
ca_path = module.params['ca_path']
dict_headers = module.params['headers']
unredirected_headers = module.params['unredirected_headers']
if not re.match('^[A-Z]+$', method):
module.fail_json(msg="Parameter 'method' needs to be a single word in uppercase, like GET or POST.")
if body_format == 'json':
        # Encode the body unless it's a string, in which case assume it is pre-formatted JSON
if not isinstance(body, string_types):
body = json.dumps(body)
if 'content-type' not in [header.lower() for header in dict_headers]:
dict_headers['Content-Type'] = 'application/json'
elif body_format == 'form-urlencoded':
if not isinstance(body, string_types):
try:
body = form_urlencoded(body)
except ValueError as e:
module.fail_json(msg='failed to parse body as form_urlencoded: %s' % to_native(e), elapsed=0)
if 'content-type' not in [header.lower() for header in dict_headers]:
dict_headers['Content-Type'] = 'application/x-www-form-urlencoded'
elif body_format == 'form-multipart':
try:
content_type, body = prepare_multipart(body)
except (TypeError, ValueError) as e:
module.fail_json(msg='failed to parse body as form-multipart: %s' % to_native(e))
dict_headers['Content-Type'] = content_type
if creates is not None:
        # do not run the request if `creates` points to a file that
        # already exists. This allows idempotent uri executions.
if os.path.exists(creates):
module.exit_json(stdout="skipped, since '%s' exists" % creates, changed=False)
if removes is not None:
        # do not run the request if `removes` points to a file that
        # does not exist. This allows idempotent uri executions.
if not os.path.exists(removes):
module.exit_json(stdout="skipped, since '%s' does not exist" % removes, changed=False)
# Make the request
start = datetime.datetime.utcnow()
r, info = uri(module, url, dest, body, body_format, method,
dict_headers, socket_timeout, ca_path, unredirected_headers)
elapsed = (datetime.datetime.utcnow() - start).seconds
if r and dest is not None and os.path.isdir(dest):
filename = get_response_filename(r) or 'index.html'
dest = os.path.join(dest, filename)
if r:
content_type, main_type, sub_type, content_encoding = parse_content_type(r)
else:
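        # No response object at all (the request itself failed), so fall
        # back to generic defaults for the content-type fields used below.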
content_type = 'application/octet-stream'
        main_type = 'application'
sub_type = 'octet-stream'
content_encoding = 'utf-8'
maybe_json = content_type and any(candidate in sub_type for candidate in JSON_CANDIDATES)
maybe_output = maybe_json or return_content or info['status'] not in status_code
if maybe_output:
try:
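            # fetch_url() may have already read and closed the underlying
            # response (stashing the body in info['body'], as it does for
            # HTTPError results such as the digest-auth failure in
            # ansible/ansible#76386); reading it again would blow up, so
            # fall through to the stored body instead.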
if PY3 and r.closed:
raise TypeError
content = r.read()
except (AttributeError, TypeError):
            # there was no content to read, but the error body
            # may have been stored in the info dict as 'body'
content = info.pop('body', b'')
elif r:
content = r
else:
content = None
resp = {}
resp['redirected'] = info['url'] != url
resp.update(info)
resp['elapsed'] = elapsed
resp['status'] = int(resp['status'])
resp['changed'] = False
# Write the file out if requested
if r and dest is not None:
if resp['status'] in status_code and resp['status'] != 304:
write_file(module, dest, content, resp)
# allow file attribute changes
resp['changed'] = True
module.params['path'] = dest
file_args = module.load_file_common_arguments(module.params, path=dest)
resp['changed'] = module.set_fs_attributes_if_different(file_args, resp['changed'])
resp['path'] = dest
# Transmogrify the headers, replacing '-' with '_', since variables don't
# work with dashes.
# In python3, the headers are title cased. Lowercase them to be
# compatible with the python2 behaviour.
uresp = {}
for key, value in iteritems(resp):
ukey = key.replace("-", "_").lower()
uresp[ukey] = value
if 'location' in uresp:
uresp['location'] = absolute_location(url, uresp['location'])
    # Decode the content using the detected (or default) content_encoding
if isinstance(content, binary_type):
u_content = to_text(content, encoding=content_encoding)
if maybe_json:
try:
js = json.loads(u_content)
uresp['json'] = js
except Exception:
if PY2:
sys.exc_clear() # Avoid false positive traceback in fail_json() on Python 2
else:
u_content = None
if module.no_log_values:
uresp = sanitize_keys(uresp, module.no_log_values, NO_MODIFY_KEYS)
if resp['status'] not in status_code:
uresp['msg'] = 'Status code was %s and not %s: %s' % (resp['status'], status_code, uresp.get('msg', ''))
if return_content:
module.fail_json(content=u_content, **uresp)
else:
module.fail_json(**uresp)
elif return_content:
module.exit_json(content=u_content, **uresp)
else:
module.exit_json(**uresp)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,386 |
URI module does not respect or return a status code when auth fails with 401
|
### Summary
The `ansible.builtin.uri` module does not return the status code when the authentication fails. It does not fail gracefully.
Setting `status_code: [200, 401]` does not help.
### Issue Type
Bug Report
### Component Name
uri
### Ansible Version
```console
ansible [core 2.11.6]
config file = /home/mrmeszaros/Workspace/ansible.cfg
configured module search path = ['/home/mrmeszaros/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible
ansible collection location = /home/mrmeszaros/Workspace/.collections
executable location = /usr/local/bin/ansible
python version = 3.8.10 (default, Sep 28 2021, 16:10:42) [GCC 9.3.0]
jinja version = 3.0.2
libyaml = True
```
### Configuration
```console
ANSIBLE_NOCOWS(/home/mrmeszaros/Workspace/ansible.cfg) = True
COLLECTIONS_PATHS(/home/mrmeszaros/Workspace/ansible.cfg) = ['/home/mrmeszaros/Workspace/.collections']
DEFAULT_FILTER_PLUGIN_PATH(/home/mrmeszaros/Workspace/ansible.cfg) = ['/home/mrmeszaros/Workspace/filter_plugins']
DEFAULT_HOST_LIST(/home/mrmeszaros/Workspace/ansible.cfg) = ['/home/mrmeszaros/Workspace/inventories/vcenter']
DEFAULT_LOG_PATH(/home/mrmeszaros/Workspace/ansible.cfg) = /home/mrmeszaros/Workspace/.ansible.log
DEFAULT_ROLES_PATH(/home/mrmeszaros/Workspace/ansible.cfg) = ['/home/mrmeszaros/Workspace/.galaxy', '/home/mrmeszaros/Workspace/roles']
DEFAULT_STDOUT_CALLBACK(/home/mrmeszaros/Workspace/ansible.cfg) = yaml
HOST_KEY_CHECKING(/home/mrmeszaros/Workspace/ansible.cfg) = False
RETRY_FILES_ENABLED(/home/mrmeszaros/Workspace/ansible.cfg) = False
```
### OS / Environment
lsb_release -a:
```
Distributor ID: Ubuntu
Description: Ubuntu 20.04.3 LTS
Release: 20.04
Codename: focal
```
### Steps to Reproduce
Prerequisites: Having moloch/arkime (or really any other web app with a REST API that requires authentication) installed on a machine.
```yaml
- name: Check users
hosts: moloch
gather_facts: no
vars:
moloch_admin_pass: notit
tasks:
- name: List users
uri:
method: POST
url: http://localhost:8005/api/users
url_username: admin
url_password: "{{ moloch_admin_pass }}"
status_code:
- 200
- 401
changed_when: no
register: list_users_cmd
- debug: var=list_users_cmd
```
### Expected Results
I expected to see the HTTP response with the headers and the `status: 401`, but instead it failed and did not even set the response headers or status.
Something like curl would do:
```
curl -X POST --digest --user admin:notit 172.19.90.1:8005/api/users -v
* Trying 172.19.90.1:8005...
* TCP_NODELAY set
* Connected to 172.19.90.1 (172.19.90.1) port 8005 (#0)
* Server auth using Digest with user 'admin'
> POST /api/users HTTP/1.1
> Host: 172.19.90.1:8005
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 401 Unauthorized
< Vary: X-HTTP-Method-Override
< X-Frame-Options: DENY
< X-XSS-Protection: 1; mode=block
< WWW-Authenticate: Digest realm="Moloch", nonce="VItPV2v3xTxcE3DOdbmMnF6t60Kxx3VM", qop="auth"
< Date: Mon, 29 Nov 2021 15:16:18 GMT
< Connection: keep-alive
< Keep-Alive: timeout=5
< Transfer-Encoding: chunked
<
* Ignoring the response-body
* Connection #0 to host 172.19.90.1 left intact
* Issue another request to this URL: 'http://172.19.90.1:8005/api/users'
* Found bundle for host 172.19.90.1: 0x563d9c519b50 [serially]
* Can not multiplex, even if we wanted to!
* Re-using existing connection! (#0) with host 172.19.90.1
* Connected to 172.19.90.1 (172.19.90.1) port 8005 (#0)
* Server auth using Digest with user 'admin'
> POST /api/users HTTP/1.1
> Host: 172.19.90.1:8005
> Authorization: Digest username="admin", realm="Moloch", nonce="VItPV2v3xTxcE3DOdbmMnF6t60Kxx3VM", uri="/api/users", cnonce="ZTllZjJmNDIyMzQ0Zjk3Yjc3ZjAyN2MzNGVkMGUzODU=", nc=00000001, qop=auth, response="03c569ff5b108921de0da31a768d56f5"
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 401 Unauthorized
< Vary: X-HTTP-Method-Override
< X-Frame-Options: DENY
< X-XSS-Protection: 1; mode=block
* Authentication problem. Ignoring this.
< WWW-Authenticate: Digest realm="Moloch", nonce="xNcghsGC2sX3w21sTDCFLK0cK9YStZ9M", qop="auth"
< Date: Mon, 29 Nov 2021 15:16:18 GMT
< Connection: keep-alive
< Keep-Alive: timeout=5
< Transfer-Encoding: chunked
<
* Connection #0 to host 172.19.90.1 left intact
Unauthorized⏎
```
### Actual Results
```console
TASK [List users] ***********************************************************************************************************************************************************************************************************
task path: /home/mrmeszaros/Workspace/playbooks/moloch.singleton.yml:316
<172.19.90.1> ESTABLISH SSH CONNECTION FOR USER: sysadm
<172.19.90.1> SSH: EXEC sshpass -d11 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="roles/linux-base/files/id_rsa"' -o 'User="sysadm"' -o ConnectTimeout=10 -o ControlPath=/home/mrmeszaros/.ansible/cp/f5d1d80b39 172.19.90.1 '/bin/sh -c '"'"'echo ~sysadm && sleep 0'"'"''
<172.19.90.1> (0, b'/home/sysadm\n', b'OpenSSH_8.2p1 Ubuntu-4ubuntu0.3, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/mrmeszaros/.ssh/config\r\ndebug1: /home/mrmeszaros/.ssh/config line 1: Applying options for 172.19.*\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 172.19.90.1 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 1186335\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<172.19.90.1> ESTABLISH SSH CONNECTION FOR USER: sysadm
<172.19.90.1> SSH: EXEC sshpass -d11 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="roles/linux-base/files/id_rsa"' -o 'User="sysadm"' -o ConnectTimeout=10 -o ControlPath=/home/mrmeszaros/.ansible/cp/f5d1d80b39 172.19.90.1 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /home/sysadm/.ansible/tmp `"&& mkdir "` echo /home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703 `" && echo ansible-tmp-1638198795.0287964-1186445-275398547896703="` echo /home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703 `" ) && sleep 0'"'"''
<172.19.90.1> (0, b'ansible-tmp-1638198795.0287964-1186445-275398547896703=/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703\n', b'OpenSSH_8.2p1 Ubuntu-4ubuntu0.3, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/mrmeszaros/.ssh/config\r\ndebug1: /home/mrmeszaros/.ssh/config line 1: Applying options for 172.19.*\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 172.19.90.1 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 1186335\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
Using module file /usr/local/lib/python3.8/dist-packages/ansible/modules/uri.py
<172.19.90.1> PUT /home/mrmeszaros/.ansible/tmp/ansible-local-1186405uxq7nnzk/tmpv42z4a9b TO /home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py
<172.19.90.1> SSH: EXEC sshpass -d11 sftp -o BatchMode=no -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="roles/linux-base/files/id_rsa"' -o 'User="sysadm"' -o ConnectTimeout=10 -o ControlPath=/home/mrmeszaros/.ansible/cp/f5d1d80b39 '[172.19.90.1]'
<172.19.90.1> (0, b'sftp> put /home/mrmeszaros/.ansible/tmp/ansible-local-1186405uxq7nnzk/tmpv42z4a9b /home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py\n', b'OpenSSH_8.2p1 Ubuntu-4ubuntu0.3, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/mrmeszaros/.ssh/config\r\ndebug1: /home/mrmeszaros/.ssh/config line 1: Applying options for 172.19.*\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 172.19.90.1 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 1186335\r\ndebug3: mux_client_request_session: session request sent\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug3: Sent message fd 3 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . -> /home/sysadm size 0\r\ndebug3: Looking up /home/mrmeszaros/.ansible/tmp/ansible-local-1186405uxq7nnzk/tmpv42z4a9b\r\ndebug3: Sent message fd 3 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 32768 bytes at 0\r\ndebug3: Sent message SSH2_FXP_WRITE I:5 O:32768 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:6 O:65536 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:7 O:98304 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:8 O:131072 S:30321\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 5 32768 bytes at 32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 6 32768 bytes at 65536\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 7 32768 bytes at 98304\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 8 30321 bytes at 131072\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<172.19.90.1> ESTABLISH SSH CONNECTION FOR USER: sysadm
<172.19.90.1> SSH: EXEC sshpass -d11 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="roles/linux-base/files/id_rsa"' -o 'User="sysadm"' -o ConnectTimeout=10 -o ControlPath=/home/mrmeszaros/.ansible/cp/f5d1d80b39 172.19.90.1 '/bin/sh -c '"'"'chmod u+x /home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/ /home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py && sleep 0'"'"''
<172.19.90.1> (0, b'', b'OpenSSH_8.2p1 Ubuntu-4ubuntu0.3, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/mrmeszaros/.ssh/config\r\ndebug1: /home/mrmeszaros/.ssh/config line 1: Applying options for 172.19.*\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 172.19.90.1 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 1186335\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<172.19.90.1> ESTABLISH SSH CONNECTION FOR USER: sysadm
<172.19.90.1> SSH: EXEC sshpass -d11 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="roles/linux-base/files/id_rsa"' -o 'User="sysadm"' -o ConnectTimeout=10 -o ControlPath=/home/mrmeszaros/.ansible/cp/f5d1d80b39 -tt 172.19.90.1 '/bin/sh -c '"'"'/usr/bin/python3 /home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py && sleep 0'"'"''
<172.19.90.1> (1, b'Traceback (most recent call last):\r\n File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1755, in fetch_url\r\n File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1535, in open_url\r\n File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1446, in open\r\n File "/usr/lib/python3.8/urllib/request.py", line 222, in urlopen\r\n return opener.open(url, data, timeout)\r\n File "/usr/lib/python3.8/urllib/request.py", line 531, in open\r\n response = meth(req, response)\r\n File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response\r\n response = self.parent.error(\r\n File "/usr/lib/python3.8/urllib/request.py", line 563, in error\r\n result = self._call_chain(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain\r\n result = func(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401\r\n retry = self.http_error_auth_reqed(\'www-authenticate\',\r\n File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed\r\n return self.retry_http_digest_auth(req, authreq)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth\r\n resp = self.parent.open(req, timeout=req.timeout)\r\n File "/usr/lib/python3.8/urllib/request.py", line 531, in open\r\n response = meth(req, response)\r\n File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response\r\n response = self.parent.error(\r\n File "/usr/lib/python3.8/urllib/request.py", line 563, in error\r\n result = self._call_chain(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain\r\n result = func(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401\r\n retry = self.http_error_auth_reqed(\'www-authenticate\',\r\n File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed\r\n return self.retry_http_digest_auth(req, authreq)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth\r\n resp = self.parent.open(req, timeout=req.timeout)\r\n File "/usr/lib/python3.8/urllib/request.py", line 531, in open\r\n response = meth(req, response)\r\n File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response\r\n response = self.parent.error(\r\n File "/usr/lib/python3.8/urllib/request.py", line 563, in error\r\n result = self._call_chain(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain\r\n result = func(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401\r\n retry = self.http_error_auth_reqed(\'www-authenticate\',\r\n File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed\r\n return self.retry_http_digest_auth(req, authreq)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth\r\n resp = self.parent.open(req, timeout=req.timeout)\r\n File "/usr/lib/python3.8/urllib/request.py", line 531, in open\r\n response = meth(req, response)\r\n File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response\r\n response = self.parent.error(\r\n File "/usr/lib/python3.8/urllib/request.py", line 563, in error\r\n result = self._call_chain(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain\r\n result = func(*args)\r\n File 
"/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401\r\n retry = self.http_error_auth_reqed(\'www-authenticate\',\r\n File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed\r\n return self.retry_http_digest_auth(req, authreq)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth\r\n resp = self.parent.open(req, timeout=req.timeout)\r\n File "/usr/lib/python3.8/urllib/request.py", line 531, in open\r\n response = meth(req, response)\r\n File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response\r\n response = self.parent.error(\r\n File "/usr/lib/python3.8/urllib/request.py", line 563, in error\r\n result = self._call_chain(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain\r\n result = func(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401\r\n retry = self.http_error_auth_reqed(\'www-authenticate\',\r\n File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed\r\n return self.retry_http_digest_auth(req, authreq)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth\r\n resp = self.parent.open(req, timeout=req.timeout)\r\n File "/usr/lib/python3.8/urllib/request.py", line 531, in open\r\n response = meth(req, response)\r\n File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response\r\n response = self.parent.error(\r\n File "/usr/lib/python3.8/urllib/request.py", line 563, in error\r\n result = self._call_chain(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain\r\n result = func(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401\r\n retry = self.http_error_auth_reqed(\'www-authenticate\',\r\n File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed\r\n return self.retry_http_digest_auth(req, authreq)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth\r\n resp = self.parent.open(req, timeout=req.timeout)\r\n File "/usr/lib/python3.8/urllib/request.py", line 531, in open\r\n response = meth(req, response)\r\n File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response\r\n response = self.parent.error(\r\n File "/usr/lib/python3.8/urllib/request.py", line 563, in error\r\n result = self._call_chain(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain\r\n result = func(*args)\r\n File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401\r\n retry = self.http_error_auth_reqed(\'www-authenticate\',\r\n File "/usr/lib/python3.8/urllib/request.py", line 1117, in http_error_auth_reqed\r\n raise HTTPError(req.full_url, 401, "digest auth failed",\r\nurllib.error.HTTPError: HTTP Error 401: digest auth failed\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File "/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py", line 100, in <module>\r\n _ansiballz_main()\r\n File "/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py", line 92, in _ansiballz_main\r\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n File "/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py", line 40, in invoke_module\r\n runpy.run_module(mod_name=\'ansible.modules.uri\', init_globals=dict(_module_fqn=\'ansible.modules.uri\', 
_modlib_path=modlib_path),\r\n File "/usr/lib/python3.8/runpy.py", line 207, in run_module\r\n return _run_module_code(code, init_globals, run_name, mod_spec)\r\n File "/usr/lib/python3.8/runpy.py", line 97, in _run_module_code\r\n _run_code(code, mod_globals, init_globals,\r\n File "/usr/lib/python3.8/runpy.py", line 87, in _run_code\r\n exec(code, run_globals)\r\n File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/modules/uri.py", line 793, in <module>\r\n File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/modules/uri.py", line 710, in main\r\n File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/modules/uri.py", line 599, in uri\r\n File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1804, in fetch_url\r\n File "/usr/lib/python3.8/tempfile.py", line 606, in __getattr__\r\n file = self.__dict__[\'file\']\r\nKeyError: \'file\'\r\n', b'OpenSSH_8.2p1 Ubuntu-4ubuntu0.3, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/mrmeszaros/.ssh/config\r\ndebug1: /home/mrmeszaros/.ssh/config line 1: Applying options for 172.19.*\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 172.19.90.1 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 1186335\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 1\r\nShared connection to 172.19.90.1 closed.\r\n')
<172.19.90.1> Failed to connect to the host via ssh: OpenSSH_8.2p1 Ubuntu-4ubuntu0.3, OpenSSL 1.1.1f 31 Mar 2020
debug1: Reading configuration data /home/mrmeszaros/.ssh/config
debug1: /home/mrmeszaros/.ssh/config line 1: Applying options for 172.19.*
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files
debug1: /etc/ssh/ssh_config line 21: Applying options for *
debug2: resolve_canonicalize: hostname 172.19.90.1 is address
debug1: auto-mux: Trying existing master
debug2: fd 3 setting O_NONBLOCK
debug2: mux_client_hello_exchange: master version 4
debug3: mux_client_forwards: request forwardings: 0 local, 0 remote
debug3: mux_client_request_session: entering
debug3: mux_client_request_alive: entering
debug3: mux_client_request_alive: done pid = 1186335
debug3: mux_client_request_session: session request sent
debug3: mux_client_read_packet: read header failed: Broken pipe
debug2: Received exit status from master 1
Shared connection to 172.19.90.1 closed.
<172.19.90.1> ESTABLISH SSH CONNECTION FOR USER: sysadm
<172.19.90.1> SSH: EXEC sshpass -d11 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="roles/linux-base/files/id_rsa"' -o 'User="sysadm"' -o ConnectTimeout=10 -o ControlPath=/home/mrmeszaros/.ansible/cp/f5d1d80b39 172.19.90.1 '/bin/sh -c '"'"'rm -f -r /home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/ > /dev/null 2>&1 && sleep 0'"'"''
<172.19.90.1> (0, b'', b'OpenSSH_8.2p1 Ubuntu-4ubuntu0.3, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/mrmeszaros/.ssh/config\r\ndebug1: /home/mrmeszaros/.ssh/config line 1: Applying options for 172.19.*\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 172.19.90.1 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 1186335\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1755, in fetch_url
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1535, in open_url
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1446, in open
File "/usr/lib/python3.8/urllib/request.py", line 222, in urlopen
return opener.open(url, data, timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1117, in http_error_auth_reqed
raise HTTPError(req.full_url, 401, "digest auth failed",
urllib.error.HTTPError: HTTP Error 401: digest auth failed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py", line 100, in <module>
_ansiballz_main()
File "/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py", line 92, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py", line 40, in invoke_module
runpy.run_module(mod_name='ansible.modules.uri', init_globals=dict(_module_fqn='ansible.modules.uri', _modlib_path=modlib_path),
File "/usr/lib/python3.8/runpy.py", line 207, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "/usr/lib/python3.8/runpy.py", line 97, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/modules/uri.py", line 793, in <module>
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/modules/uri.py", line 710, in main
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/modules/uri.py", line 599, in uri
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1804, in fetch_url
File "/usr/lib/python3.8/tempfile.py", line 606, in __getattr__
file = self.__dict__['file']
KeyError: 'file'
fatal: [moloch]: FAILED! => changed=false
module_stderr: |-
OpenSSH_8.2p1 Ubuntu-4ubuntu0.3, OpenSSL 1.1.1f 31 Mar 2020
debug1: Reading configuration data /home/mrmeszaros/.ssh/config
debug1: /home/mrmeszaros/.ssh/config line 1: Applying options for 172.19.*
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files
debug1: /etc/ssh/ssh_config line 21: Applying options for *
debug2: resolve_canonicalize: hostname 172.19.90.1 is address
debug1: auto-mux: Trying existing master
debug2: fd 3 setting O_NONBLOCK
debug2: mux_client_hello_exchange: master version 4
debug3: mux_client_forwards: request forwardings: 0 local, 0 remote
debug3: mux_client_request_session: entering
debug3: mux_client_request_alive: entering
debug3: mux_client_request_alive: done pid = 1186335
debug3: mux_client_request_session: session request sent
debug3: mux_client_read_packet: read header failed: Broken pipe
debug2: Received exit status from master 1
Shared connection to 172.19.90.1 closed.
module_stdout: |-
Traceback (most recent call last):
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1755, in fetch_url
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1535, in open_url
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1446, in open
File "/usr/lib/python3.8/urllib/request.py", line 222, in urlopen
return opener.open(url, data, timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1124, in http_error_auth_reqed
return self.retry_http_digest_auth(req, authreq)
File "/usr/lib/python3.8/urllib/request.py", line 1138, in retry_http_digest_auth
resp = self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 1244, in http_error_401
retry = self.http_error_auth_reqed('www-authenticate',
File "/usr/lib/python3.8/urllib/request.py", line 1117, in http_error_auth_reqed
raise HTTPError(req.full_url, 401, "digest auth failed",
urllib.error.HTTPError: HTTP Error 401: digest auth failed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py", line 100, in <module>
_ansiballz_main()
File "/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py", line 92, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/sysadm/.ansible/tmp/ansible-tmp-1638198795.0287964-1186445-275398547896703/AnsiballZ_uri.py", line 40, in invoke_module
runpy.run_module(mod_name='ansible.modules.uri', init_globals=dict(_module_fqn='ansible.modules.uri', _modlib_path=modlib_path),
File "/usr/lib/python3.8/runpy.py", line 207, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "/usr/lib/python3.8/runpy.py", line 97, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/modules/uri.py", line 793, in <module>
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/modules/uri.py", line 710, in main
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/modules/uri.py", line 599, in uri
File "/tmp/ansible_ansible.legacy.uri_payload_e2qi06tp/ansible_ansible.legacy.uri_payload.zip/ansible/module_utils/urls.py", line 1804, in fetch_url
File "/usr/lib/python3.8/tempfile.py", line 606, in __getattr__
file = self.__dict__['file']
KeyError: 'file'
msg: |-
MODULE FAILURE
See stdout/stderr for the exact error
rc: 1
PLAY RECAP ****************************************************************************************************************************************************************************************************************************
moloch : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
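The final `KeyError: 'file'` is a secondary failure that masks the real 401 error. A minimal sketch of the Python mechanics involved (illustrative only, not the Ansible `fetch_url` code path; `HalfBuiltWrapper` is a hypothetical name): `tempfile`'s `_TemporaryFileWrapper.__getattr__` looks up `self.__dict__['file']`, so any attribute access on a wrapper whose underlying file was never assigned escapes as a raw `KeyError` instead of a readable error message.

```python
# Minimal reproduction of the secondary KeyError seen in the traceback above.
# This mirrors the pattern at tempfile.py line 606 ("file = self.__dict__['file']");
# it illustrates the Python behaviour only, not Ansible's actual code.
class HalfBuiltWrapper:
    def __getattr__(self, name):
        file = self.__dict__['file']  # raises KeyError if 'file' was never set
        return getattr(file, name)

wrapper = HalfBuiltWrapper()
try:
    wrapper.read()
except KeyError as exc:
    print('KeyError:', exc)  # -> KeyError: 'file'
```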
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76386
|
https://github.com/ansible/ansible/pull/76421
|
82ff7efa9401ba0cc79b59e9d239492258e1791b
|
eca97a19a3d40d3ae1bc3e5e676edea159197157
| 2021-11-29T15:21:14Z |
python
| 2021-12-02T19:42:29Z |
test/integration/targets/uri/tasks/main.yml
|
# test code for the uri module
# (c) 2014, Leonid Evdokimov <[email protected]>
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <https://www.gnu.org/licenses/>.
- name: set role facts
set_fact:
http_port: 15260
files_dir: '{{ remote_tmp_dir|expanduser }}/files'
checkout_dir: '{{ remote_tmp_dir }}/git'
- name: create a directory to serve files from
file:
dest: "{{ files_dir }}"
state: directory
- copy:
src: "{{ item }}"
dest: "{{files_dir}}/{{ item }}"
with_sequence: start=0 end=4 format=pass%d.json
- copy:
src: "{{ item }}"
dest: "{{files_dir}}/{{ item }}"
with_sequence: start=0 end=30 format=fail%d.json
- copy:
src: "testserver.py"
dest: "{{ remote_tmp_dir }}/testserver.py"
- name: start SimpleHTTPServer
shell: cd {{ files_dir }} && {{ ansible_python.executable }} {{ remote_tmp_dir}}/testserver.py {{ http_port }}
async: 120 # this test set can take ~1m to run on FreeBSD (via Shippable)
poll: 0
- wait_for: port={{ http_port }}
- name: checksum pass_json
stat: path={{ files_dir }}/{{ item }}.json get_checksum=yes
register: pass_checksum
with_sequence: start=0 end=4 format=pass%d
- name: fetch pass_json
uri: return_content=yes url=http://localhost:{{ http_port }}/{{ item }}.json
register: fetch_pass_json
with_sequence: start=0 end=4 format=pass%d
- name: check pass_json
assert:
that:
- '"json" in item.1'
- item.0.stat.checksum == item.1.content | checksum
with_together:
- "{{pass_checksum.results}}"
- "{{fetch_pass_json.results}}"
- name: checksum fail_json
stat: path={{ files_dir }}/{{ item }}.json get_checksum=yes
register: fail_checksum
with_sequence: start=0 end=30 format=fail%d
- name: fetch fail_json
uri: return_content=yes url=http://localhost:{{ http_port }}/{{ item }}.json
register: fail
with_sequence: start=0 end=30 format=fail%d
- name: check fail_json
assert:
that:
- item.0.stat.checksum == item.1.content | checksum
- '"json" not in item.1'
with_together:
- "{{fail_checksum.results}}"
- "{{fail.results}}"
- name: test https fetch to a site with mismatched hostname and certificate
uri:
url: "https://{{ badssl_host }}/"
dest: "{{ remote_tmp_dir }}/shouldnotexist.html"
ignore_errors: True
register: result
- stat:
path: "{{ remote_tmp_dir }}/shouldnotexist.html"
register: stat_result
- name: Assert that the file was not downloaded
assert:
that:
- result.failed == true
- "'Failed to validate the SSL certificate' in result.msg or 'Hostname mismatch' in result.msg or (result.msg is match('hostname .* doesn.t match .*'))"
- stat_result.stat.exists == false
- result.status is defined
- result.status == -1
- result.url == 'https://' ~ badssl_host ~ '/'
- name: Clean up any cruft from the results directory
file:
name: "{{ remote_tmp_dir }}/kreitz.html"
state: absent
- name: test https fetch to a site with mismatched hostname and certificate and validate_certs=no
uri:
url: "https://{{ badssl_host }}/"
dest: "{{ remote_tmp_dir }}/kreitz.html"
validate_certs: no
register: result
- stat:
path: "{{ remote_tmp_dir }}/kreitz.html"
register: stat_result
- name: Assert that the file was downloaded
assert:
that:
- "stat_result.stat.exists == true"
- "result.changed == true"
- name: "get ca certificate {{ self_signed_host }}"
get_url:
url: "http://{{ httpbin_host }}/ca2cert.pem"
dest: "{{ remote_tmp_dir }}/ca2cert.pem"
- name: test https fetch to a site with self signed certificate using ca_path
uri:
url: "https://{{ self_signed_host }}:444/"
dest: "{{ remote_tmp_dir }}/self-signed_using_ca_path.html"
ca_path: "{{ remote_tmp_dir }}/ca2cert.pem"
validate_certs: yes
register: result
- stat:
path: "{{ remote_tmp_dir }}/self-signed_using_ca_path.html"
register: stat_result
- name: Assert that the file was downloaded
assert:
that:
- "stat_result.stat.exists == true"
- "result.changed == true"
- name: test https fetch to a site with self signed certificate without using ca_path
uri:
url: "https://{{ self_signed_host }}:444/"
dest: "{{ remote_tmp_dir }}/self-signed-without_using_ca_path.html"
validate_certs: yes
register: result
ignore_errors: true
- stat:
path: "{{ remote_tmp_dir }}/self-signed-without_using_ca_path.html"
register: stat_result
- name: Assert that https access to a host with self-signed certificate without providing ca_path fails
assert:
that:
- "stat_result.stat.exists == false"
- result is failed
- "'certificate verify failed' in result.msg"
- name: Locate ca-bundle
stat:
path: '{{ item }}'
loop:
- /etc/ssl/certs/ca-bundle.crt
- /etc/ssl/certs/ca-certificates.crt
- /var/lib/ca-certificates/ca-bundle.pem
- /usr/local/share/certs/ca-root-nss.crt
- '{{ macos_cafile.stdout_lines|default(["/_i_dont_exist_ca.pem"])|first }}'
- /etc/ssl/cert.pem
register: ca_bundle_candidates
- name: Test that ca_path can be a full bundle
uri:
url: "https://{{ httpbin_host }}/get"
ca_path: '{{ ca_bundle }}'
vars:
ca_bundle: '{{ ca_bundle_candidates.results|selectattr("stat.exists")|map(attribute="item")|first }}'
- name: test redirect without follow_redirects
uri:
url: 'https://{{ httpbin_host }}/redirect/2'
follow_redirects: 'none'
status_code: 302
register: result
- name: Assert location header
assert:
that:
- 'result.location|default("") == "https://{{ httpbin_host }}/relative-redirect/1"'
- name: Check SSL with redirect
uri:
url: 'https://{{ httpbin_host }}/redirect/2'
register: result
- name: Assert SSL with redirect
assert:
that:
- 'result.url|default("") == "https://{{ httpbin_host }}/get"'
- name: redirect to bad SSL site
uri:
url: 'http://{{ badssl_host }}'
register: result
ignore_errors: true
- name: Ensure bad SSL site redirect fails
assert:
that:
- result is failed
- 'badssl_host in result.msg'
- name: test basic auth
uri:
url: 'https://{{ httpbin_host }}/basic-auth/user/passwd'
user: user
password: passwd
- name: test basic forced auth
uri:
url: 'https://{{ httpbin_host }}/hidden-basic-auth/user/passwd'
force_basic_auth: true
user: user
password: passwd
- name: test digest auth
uri:
url: 'https://{{ httpbin_host }}/digest-auth/auth/user/passwd'
user: user
password: passwd
headers:
Cookie: "fake=fake_value"
- name: test unredirected_headers
uri:
url: 'https://{{ httpbin_host }}/redirect-to?status_code=301&url=/basic-auth/user/passwd'
user: user
password: passwd
force_basic_auth: true
unredirected_headers:
- authorization
ignore_errors: true
register: unredirected_headers
- name: test omitting unredirected headers
uri:
url: 'https://{{ httpbin_host }}/redirect-to?status_code=301&url=/basic-auth/user/passwd'
user: user
password: passwd
force_basic_auth: true
register: redirected_headers
- name: ensure unredirected_headers caused auth to fail
assert:
that:
- unredirected_headers is failed
- unredirected_headers.status == 401
- redirected_headers is successful
- redirected_headers.status == 200
- name: test PUT
uri:
url: 'https://{{ httpbin_host }}/put'
method: PUT
body: 'foo=bar'
- name: test OPTIONS
uri:
url: 'https://{{ httpbin_host }}/'
method: OPTIONS
register: result
- name: Assert we got an allow header
assert:
that:
- 'result.allow.split(", ")|sort == ["GET", "HEAD", "OPTIONS"]'
# Ubuntu 12.04 doesn't have python-urllib3, this makes handling required dependencies a pain across all variations
# We'll use this to just skip 12.04 on those tests. We should be sufficiently covered with other OSes and versions
- name: Set fact if running on Ubuntu 12.04
set_fact:
is_ubuntu_precise: "{{ ansible_distribution == 'Ubuntu' and ansible_distribution_release == 'precise' }}"
- name: Test that SNI succeeds on python versions that have SNI
uri:
url: 'https://{{ sni_host }}/'
return_content: true
when: ansible_python.has_sslcontext
register: result
- name: Assert SNI verification succeeds on new python
assert:
that:
- result is successful
- 'sni_host in result.content'
when: ansible_python.has_sslcontext
- name: Verify SNI verification fails on old python without urllib3 contrib
uri:
url: 'https://{{ sni_host }}'
ignore_errors: true
when: not ansible_python.has_sslcontext
register: result
- name: Assert SNI verification fails on old python
assert:
that:
- result is failed
when: result is not skipped
- name: check if urllib3 is installed as an OS package
package:
name: "{{ uri_os_packages[ansible_os_family].urllib3 }}"
check_mode: yes
when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool and uri_os_packages[ansible_os_family].urllib3|default
register: urllib3
- name: uninstall conflicting urllib3 pip package
pip:
name: urllib3
state: absent
when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool and uri_os_packages[ansible_os_family].urllib3|default and urllib3.changed
- name: install OS packages that are needed for SNI on old python
package:
name: "{{ item }}"
with_items: "{{ uri_os_packages[ansible_os_family].step1 | default([]) }}"
when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool
- name: install python modules for Older Python SNI verification
pip:
name: "{{ item }}"
with_items:
- ndg-httpsclient
when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool
- name: Verify SNI verification succeeds on old python with urllib3 contrib
uri:
url: 'https://{{ sni_host }}'
return_content: true
when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool
register: result
- name: Assert SNI verification succeeds on old python
assert:
that:
- result is successful
- 'sni_host in result.content'
when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool
- name: Uninstall ndg-httpsclient
pip:
name: "{{ item }}"
state: absent
with_items:
- ndg-httpsclient
when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool
- name: uninstall OS packages that are needed for SNI on old python
package:
name: "{{ item }}"
state: absent
with_items: "{{ uri_os_packages[ansible_os_family].step1 | default([]) }}"
when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool
- name: install OS packages that are needed for building cryptography
package:
name: "{{ item }}"
with_items: "{{ uri_os_packages[ansible_os_family].step2 | default([]) }}"
when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool
- name: create constraints path
set_fact:
remote_constraints: "{{ remote_tmp_dir }}/constraints.txt"
when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool
- name: create constraints file
copy:
content: |
cryptography == 2.1.4
idna == 2.5
pyopenssl == 17.5.0
six == 1.13.0
urllib3 == 1.23
dest: "{{ remote_constraints }}"
when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool
- name: install urllib3 and pyopenssl via pip
pip:
name: "{{ item }}"
extra_args: "-c {{ remote_constraints }}"
with_items:
- urllib3
- PyOpenSSL
when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool
- name: Verify SNI verification succeeds on old python with pip urllib3 contrib
uri:
url: 'https://{{ sni_host }}'
return_content: true
when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool
register: result
- name: Assert SNI verification succeeds on old python with pip urllib3 contrib
assert:
that:
- result is successful
- 'sni_host in result.content'
when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool
- name: Uninstall urllib3 and PyOpenSSL
pip:
name: "{{ item }}"
state: absent
with_items:
- urllib3
- PyOpenSSL
when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool
- name: validate the status_codes are correct
uri:
url: "https://{{ httpbin_host }}/status/202"
status_code: 202
method: POST
body: foo
- name: Validate body_format json does not override content-type in 2.3 or newer
uri:
url: "https://{{ httpbin_host }}/post"
method: POST
body:
foo: bar
body_format: json
headers:
'Content-Type': 'text/json'
return_content: true
register: result
failed_when: result.json.headers['Content-Type'] != 'text/json'
- name: Validate body_format form-urlencoded using dicts works
uri:
url: https://{{ httpbin_host }}/post
method: POST
body:
user: foo
password: bar!#@ |&82$M
submit: Sign in
body_format: form-urlencoded
return_content: yes
register: result
- name: Assert form-urlencoded dict input
assert:
that:
- result is successful
- result.json.headers['Content-Type'] == 'application/x-www-form-urlencoded'
- result.json.form.password == 'bar!#@ |&82$M'
- name: Validate body_format form-urlencoded using lists works
uri:
url: https://{{ httpbin_host }}/post
method: POST
body:
- [ user, foo ]
- [ password, bar!#@ |&82$M ]
- [ submit, Sign in ]
body_format: form-urlencoded
return_content: yes
register: result
- name: Assert form-urlencoded list input
assert:
that:
- result is successful
- result.json.headers['Content-Type'] == 'application/x-www-form-urlencoded'
- result.json.form.password == 'bar!#@ |&82$M'
- name: Validate body_format form-urlencoded of invalid input fails
uri:
url: https://{{ httpbin_host }}/post
method: POST
body:
- foo
- bar: baz
body_format: form-urlencoded
return_content: yes
register: result
ignore_errors: yes
- name: Assert invalid input fails
assert:
that:
- result is failure
- "'failed to parse body as form_urlencoded: too many values to unpack' in result.msg"
- name: multipart/form-data
uri:
url: https://{{ httpbin_host }}/post
method: POST
body_format: form-multipart
body:
file1:
filename: formdata.txt
file2:
content: text based file content
filename: fake.txt
mime_type: text/plain
text_form_field1: value1
text_form_field2:
content: value2
mime_type: text/plain
register: multipart
- name: Assert multipart/form-data
assert:
that:
- multipart.json.files.file1 == '_multipart/form-data_\n'
- multipart.json.files.file2 == 'text based file content'
- multipart.json.form.text_form_field1 == 'value1'
- multipart.json.form.text_form_field2 == 'value2'
# https://github.com/ansible/ansible/issues/74276 - verifies we don't have a traceback
- name: multipart/form-data with invalid value
uri:
url: https://{{ httpbin_host }}/post
method: POST
body_format: form-multipart
body:
integer_value: 1
register: multipart_invalid
failed_when: 'multipart_invalid.msg != "failed to parse body as form-multipart: value must be a string, or mapping, cannot be type int"'
- name: Validate invalid method
uri:
url: https://{{ httpbin_host }}/anything
method: UNKNOWN
register: result
ignore_errors: yes
- name: Assert invalid method fails
assert:
that:
- result is failure
- result.status == 405
- "'METHOD NOT ALLOWED' in result.msg"
- name: Test client cert auth, no certs
uri:
url: "https://ansible.http.tests/ssl_client_verify"
status_code: 200
return_content: true
register: result
failed_when: result.content != "ansible.http.tests:NONE"
when: has_httptester
- name: Test client cert auth, with certs
uri:
url: "https://ansible.http.tests/ssl_client_verify"
client_cert: "{{ remote_tmp_dir }}/client.pem"
client_key: "{{ remote_tmp_dir }}/client.key"
return_content: true
register: result
failed_when: result.content != "ansible.http.tests:SUCCESS"
when: has_httptester
- name: Test client cert auth, with no validation
uri:
url: "https://fail.ansible.http.tests/ssl_client_verify"
client_cert: "{{ remote_tmp_dir }}/client.pem"
client_key: "{{ remote_tmp_dir }}/client.key"
return_content: true
validate_certs: no
register: result
failed_when: result.content != "ansible.http.tests:SUCCESS"
when: has_httptester
- name: Test client cert auth, with validation and ssl mismatch
uri:
url: "https://fail.ansible.http.tests/ssl_client_verify"
client_cert: "{{ remote_tmp_dir }}/client.pem"
client_key: "{{ remote_tmp_dir }}/client.key"
return_content: true
validate_certs: yes
register: result
failed_when: result is not failed
when: has_httptester
- uri:
url: https://{{ httpbin_host }}/response-headers?Set-Cookie=Foo%3Dbar&Set-Cookie=Baz%3Dqux
register: result
- assert:
that:
- result['set_cookie'] == 'Foo=bar, Baz=qux'
# Python sorts cookies in order of most specific (ie. longest) path first
# items with the same path are reversed from response order
- result['cookies_string'] == 'Baz=qux; Foo=bar'
- name: Write out netrc template
template:
src: netrc.j2
dest: "{{ remote_tmp_dir }}/netrc"
- name: Test netrc with port
uri:
url: "https://{{ httpbin_host }}:443/basic-auth/user/passwd"
environment:
NETRC: "{{ remote_tmp_dir }}/netrc"
- name: Test JSON POST with src
uri:
url: "https://{{ httpbin_host}}/post"
src: pass0.json
method: POST
return_content: true
body_format: json
register: result
- name: Validate POST with src works
assert:
that:
- result.json.json[0] == 'JSON Test Pattern pass1'
- name: Copy file pass0.json to remote
copy:
src: "{{ role_path }}/files/pass0.json"
dest: "{{ remote_tmp_dir }}/pass0.json"
- name: Test JSON POST with src and remote_src=True
uri:
url: "https://{{ httpbin_host}}/post"
src: "{{ remote_tmp_dir }}/pass0.json"
remote_src: true
method: POST
return_content: true
body_format: json
register: result
- name: Validate POST with src and remote_src=True works
assert:
that:
- result.json.json[0] == 'JSON Test Pattern pass1'
- name: Make request that includes password in JSON keys
uri:
url: "https://{{ httpbin_host}}/get?key-password=value-password"
user: admin
password: password
register: sanitize_keys
- name: assert that keys were sanitized
assert:
that:
- sanitize_keys.json.args['key-********'] == 'value-********'
- name: Create a testing file
copy:
content: "content"
dest: "{{ remote_tmp_dir }}/output"
- name: Download a file from non existing location
uri:
url: http://does/not/exist
dest: "{{ remote_tmp_dir }}/output"
ignore_errors: yes
- name: Save testing file's output
command: "cat {{ remote_tmp_dir }}/output"
register: file_out
- name: Test the testing file was not overwritten
assert:
that:
- "'content' in file_out.stdout"
- name: Clean up
file:
dest: "{{ remote_tmp_dir }}/output"
state: absent
- name: Test follow_redirects=none
import_tasks: redirect-none.yml
- name: Test follow_redirects=safe
import_tasks: redirect-safe.yml
- name: Test follow_redirects=urllib2
import_tasks: redirect-urllib2.yml
- name: Test follow_redirects=all
import_tasks: redirect-all.yml
- name: Check unexpected failures
import_tasks: unexpected-failures.yml
- name: Check return-content
import_tasks: return-content.yml
- name: Test use_gssapi=True
include_tasks:
file: use_gssapi.yml
apply:
environment:
KRB5_CONFIG: '{{ krb5_config }}'
KRB5CCNAME: FILE:{{ remote_tmp_dir }}/krb5.cc
when: krb5_config is defined
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,379 |
template lookup `jinja2_native=0` does not convert `None` to an empty string
|
### Summary
The option `jinja2_native=1` does not seem to be applied correctly by the template module.
### Issue Type
Bug Report
### Component Name
template
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.6]
config file = /usr/local/src/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.9 (main, Nov 17 2021, 16:20:40) [GCC 10.2.1 20210110]
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
# empty output
```
### OS / Environment
Debian 11.1 (Bullseye)
### Steps to Reproduce
Here are my files:
```sh
root@flhpz4:/usr/local/src# ls -al
total 20
drwxr-xr-x 1 root root 4096 Nov 27 18:50 .
drwxr-xr-x 1 root root 4096 Nov 15 00:00 ..
-rw-r--r-- 1 root root 830 Nov 27 18:45 playbook-bug.yml
-rw-r--r-- 1 root root 234 Nov 27 18:39 template.j2
```
The `template.j2` file:
```
var_good is {% if var_good is not none %}NOT {% endif %}none, value=[{{ var_good }}], type={{ var_good | type_debug }}
var_bad is {% if var_bad is not none %}NOT {% endif %}none, value=[{{ var_bad }}], type={{ var_bad | type_debug }}
```
And my playbook:
```yaml
---
- name: "test produce a bug... It seems to me..."
hosts: "localhost"
connection: "local"
gather_facts: false
vars:
d:
x: null
y: null
var_good: "{{ y }}"
var_bad: "{{ d.x }}"
msg: |-
var_good: type={{ var_good | type_debug }}, value=[{{ var_good }}]
var_bad: type={{ var_bad | type_debug }}, value=[{{ var_bad }}]
tasks:
- ansible.builtin.debug:
msg: "{{ msg.split('\n') }}"
- name: "lookup without jinja2_native"
ansible.builtin.debug:
msg: "{{ lookup('template', 'template.j2') | trim | split('\n') }}"
- name: "lookup with jinja2_native=yes"
ansible.builtin.debug:
msg: "{{ lookup('template', 'template.j2', jinja2_native='yes') | trim | split('\n') }}"
- ansible.builtin.template:
src: "template.j2"
dest: "/tmp/template.yml"
```
Here is the command that reproduces the bug (as far as I can tell):
```sh
rm -f /tmp/template.yml && ANSIBLE_JINJA2_NATIVE=1 ansible-playbook --diff -i 'localhost,' playbook-bug.yml
```
### Expected Results
Since `jinja2_native` is enabled, I would expect the tasks:
- `lookup without jinja2_native`
- `lookup with jinja2_native=yes`
to evaluate both `var_good` **and** `var_bad` to `none`, not just `var_good`.
Currently, these tasks evaluate `var_good` to `none` (which is fine) and `var_bad` to the **string** `None`.
Note that with `jinja2_native` **disabled** (i.e. `ANSIBLE_JINJA2_NATIVE=0`), these tasks evaluate `var_bad` to an **empty string**. So `jinja2_native=1` does change the behavior of the template module, just not in the way I expected.
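For context, this matches plain Jinja2 behaviour. A minimal sketch outside Ansible (illustrative only) showing the three renderings at play: default stringification of `None`, native preservation of the type, and a `finalize` hook mapping `None` to an empty string, which matches the empty-string behaviour the report describes for the non-native path.

```python
from jinja2 import Environment
from jinja2.nativetypes import NativeEnvironment

tmpl = '{{ x }}'

# Default environment: None is stringified on output.
print(repr(Environment().from_string(tmpl).render(x=None)))        # 'None'

# Native environment: the Python type is preserved.
print(repr(NativeEnvironment().from_string(tmpl).render(x=None)))  # None

# A finalize hook converting None to '' gives the empty-string result
# expected from the non-native template lookup.
env = Environment(finalize=lambda v: '' if v is None else v)
print(repr(env.from_string(tmpl).render(x=None)))                  # ''
```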
### Actual Results
```console
PLAY [test produce a bug... It seems to me...] *************************************************
TASK [ansible.builtin.debug] *******************************************************************
ok: [localhost] => {
"msg": [
"var_good: type=NoneType, value=[None]",
"var_bad: type=NoneType, value=[None]"
]
}
TASK [lookup without jinja2_native] ************************************************************
ok: [localhost] => {
"msg": [
"var_good is none, value=[None], type=NoneType",
"var_bad is NOT none, value=[None], type=str"
]
}
TASK [lookup with jinja2_native=yes] ***********************************************************
ok: [localhost] => {
"msg": [
"var_good is none, value=[None], type=NoneType",
"var_bad is none, value=[None], type=NoneType"
]
}
TASK [ansible.builtin.template] ****************************************************************
--- before
+++ after: /root/.ansible/tmp/ansible-local-19257luyp3lx/tmpt6d9t7gs/template.j2
@@ -0,0 +1,2 @@
+var_good is none, value=[None], type=NoneType
+var_bad is NOT none, value=[None], type=str
changed: [localhost]
PLAY RECAP *************************************************************************************
localhost : ok=4 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [x] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76379
|
https://github.com/ansible/ansible/pull/76435
|
eca97a19a3d40d3ae1bc3e5e676edea159197157
|
4e7be293a5855f804ace91b2dbf548e4f7f3a633
| 2021-11-27T19:46:56Z |
python
| 2021-12-02T19:55:28Z |
changelogs/fragments/76379-set-finalize-on-new-env.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,379 |
template lookup `jinja2_native=0` does not convert `None` to an empty string
|
### Summary
The option `jinja2_native=1` does not seem to be applied correctly by the template module.
### Issue Type
Bug Report
### Component Name
template
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.6]
config file = /usr/local/src/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.9 (main, Nov 17 2021, 16:20:40) [GCC 10.2.1 20210110]
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
# empty output
```
### OS / Environment
Debian 11.1 (Bullseye)
### Steps to Reproduce
Here are my files:
```sh
root@flhpz4:/usr/local/src# ls -al
total 20
drwxr-xr-x 1 root root 4096 Nov 27 18:50 .
drwxr-xr-x 1 root root 4096 Nov 15 00:00 ..
-rw-r--r-- 1 root root 830 Nov 27 18:45 playbook-bug.yml
-rw-r--r-- 1 root root 234 Nov 27 18:39 template.j2
```
The `template.j2` file:
```
var_good is {% if var_good is not none %}NOT {% endif %}none, value=[{{ var_good }}], type={{ var_good | type_debug }}
var_bad is {% if var_bad is not none %}NOT {% endif %}none, value=[{{ var_bad }}], type={{ var_bad | type_debug }}
```
And my playbook:
```yaml
---
- name: "test produce a bug... It seems to me..."
hosts: "localhost"
connection: "local"
gather_facts: false
vars:
d:
x: null
y: null
var_good: "{{ y }}"
var_bad: "{{ d.x }}"
msg: |-
var_good: type={{ var_good | type_debug }}, value=[{{ var_good }}]
var_bad: type={{ var_bad | type_debug }}, value=[{{ var_bad }}]
tasks:
- ansible.builtin.debug:
msg: "{{ msg.split('\n') }}"
- name: "lookup without jinja2_native"
ansible.builtin.debug:
msg: "{{ lookup('template', 'template.j2') | trim | split('\n') }}"
- name: "lookup with jinja2_native=yes"
ansible.builtin.debug:
msg: "{{ lookup('template', 'template.j2', jinja2_native='yes') | trim | split('\n') }}"
- ansible.builtin.template:
src: "template.j2"
dest: "/tmp/template.yml"
```
Here is the command that reproduces the bug (as far as I can tell):
```sh
rm -f /tmp/template.yml && ANSIBLE_JINJA2_NATIVE=1 ansible-playbook --diff -i 'localhost,' playbook-bug.yml
```
### Expected Results
Since `jinja2_native` is enabled, I would expect the tasks:
- `lookup without jinja2_native`
- `lookup with jinja2_native=yes`
to evaluate both `var_good` **and** `var_bad` to `none`, not just `var_good`.
Currently, these tasks evaluate `var_good` to `none` (which is fine) and `var_bad` to the **string** `None`.
Note that with `jinja2_native` **disabled** (i.e. `ANSIBLE_JINJA2_NATIVE=0`), these tasks evaluate `var_bad` to an **empty string**. So `jinja2_native=1` does change the behavior of the template module, just not in the way I expected.
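The changelog fragment name for the linked fix (`76379-set-finalize-on-new-env.yml`) suggests the problem is that a freshly constructed Jinja2 environment does not carry the `finalize` hook. A minimal sketch of that failure mode (an assumption based on the fragment name; `null_to_empty` is a hypothetical helper, not the actual Ansible code):

```python
from jinja2 import Environment

def null_to_empty(value):
    # Hypothetical hook: render None as '' instead of 'None'.
    return '' if value is None else value

base = Environment(finalize=null_to_empty)
print(repr(base.from_string('{{ x }}').render(x=None)))   # ''

# Building a new Environment (e.g. to apply per-template overrides) starts
# from scratch: the finalize hook must be passed along explicitly,
# otherwise None leaks through as the string 'None'.
fresh = Environment(trim_blocks=True)
print(repr(fresh.from_string('{{ x }}').render(x=None)))  # 'None'

fixed = Environment(trim_blocks=True, finalize=null_to_empty)
print(repr(fixed.from_string('{{ x }}').render(x=None)))  # ''
```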
### Actual Results
```console
PLAY [test produce a bug... It seems to me...] *************************************************
TASK [ansible.builtin.debug] *******************************************************************
ok: [localhost] => {
"msg": [
"var_good: type=NoneType, value=[None]",
"var_bad: type=NoneType, value=[None]"
]
}
TASK [lookup without jinja2_native] ************************************************************
ok: [localhost] => {
"msg": [
"var_good is none, value=[None], type=NoneType",
"var_bad is NOT none, value=[None], type=str"
]
}
TASK [lookup with jinja2_native=yes] ***********************************************************
ok: [localhost] => {
"msg": [
"var_good is none, value=[None], type=NoneType",
"var_bad is none, value=[None], type=NoneType"
]
}
TASK [ansible.builtin.template] ****************************************************************
--- before
+++ after: /root/.ansible/tmp/ansible-local-19257luyp3lx/tmpt6d9t7gs/template.j2
@@ -0,0 +1,2 @@
+var_good is none, value=[None], type=NoneType
+var_bad is NOT none, value=[None], type=str
changed: [localhost]
PLAY RECAP *************************************************************************************
localhost : ok=4 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [x] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76379
|
https://github.com/ansible/ansible/pull/76435
|
eca97a19a3d40d3ae1bc3e5e676edea159197157
|
4e7be293a5855f804ace91b2dbf548e4f7f3a633
| 2021-11-27T19:46:56Z |
python
| 2021-12-02T19:55:28Z |
lib/ansible/template/__init__.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import ast
import datetime
import os
import pkgutil
import pwd
import re
import time
from contextlib import contextmanager
from hashlib import sha1
from numbers import Number
from traceback import format_exc
from jinja2.exceptions import TemplateSyntaxError, UndefinedError
from jinja2.loaders import FileSystemLoader
from jinja2.nativetypes import NativeEnvironment
from jinja2.runtime import Context, StrictUndefined
from ansible import constants as C
from ansible.errors import (
AnsibleAssertionError,
AnsibleError,
AnsibleFilterError,
AnsibleLookupError,
AnsibleOptionsError,
AnsiblePluginRemovedError,
AnsibleUndefinedVariable,
)
from ansible.module_utils.six import string_types, text_type
from ansible.module_utils._text import to_native, to_text, to_bytes
from ansible.module_utils.common._collections_compat import Iterator, Sequence, Mapping, MappingView, MutableMapping
from ansible.module_utils.common.collections import is_sequence
from ansible.module_utils.compat.importlib import import_module
from ansible.plugins.loader import filter_loader, lookup_loader, test_loader
from ansible.template.native_helpers import ansible_native_concat, ansible_concat
from ansible.template.template import AnsibleJ2Template
from ansible.template.vars import AnsibleJ2Vars
from ansible.utils.collection_loader import AnsibleCollectionRef
from ansible.utils.display import Display
from ansible.utils.collection_loader._collection_finder import _get_collection_metadata
from ansible.utils.listify import listify_lookup_plugin_terms
from ansible.utils.native_jinja import NativeJinjaText
from ansible.utils.unsafe_proxy import wrap_var
display = Display()
__all__ = ['Templar', 'generate_ansible_template_vars']
# Primitive Types which we don't want Jinja to convert to strings.
NON_TEMPLATED_TYPES = (bool, Number)
JINJA2_OVERRIDE = '#jinja2:'
JINJA2_BEGIN_TOKENS = frozenset(('variable_begin', 'block_begin', 'comment_begin', 'raw_begin'))
JINJA2_END_TOKENS = frozenset(('variable_end', 'block_end', 'comment_end', 'raw_end'))
RANGE_TYPE = type(range(0))
def generate_ansible_template_vars(path, fullpath=None, dest_path=None):
if fullpath is None:
b_path = to_bytes(path)
else:
b_path = to_bytes(fullpath)
try:
template_uid = pwd.getpwuid(os.stat(b_path).st_uid).pw_name
except (KeyError, TypeError):
template_uid = os.stat(b_path).st_uid
temp_vars = {
'template_host': to_text(os.uname()[1]),
'template_path': path,
'template_mtime': datetime.datetime.fromtimestamp(os.path.getmtime(b_path)),
'template_uid': to_text(template_uid),
'template_run_date': datetime.datetime.now(),
'template_destpath': to_native(dest_path) if dest_path else None,
}
if fullpath is None:
temp_vars['template_fullpath'] = os.path.abspath(path)
else:
temp_vars['template_fullpath'] = fullpath
managed_default = C.DEFAULT_MANAGED_STR
managed_str = managed_default.format(
host=temp_vars['template_host'],
uid=temp_vars['template_uid'],
file=temp_vars['template_path'],
)
temp_vars['ansible_managed'] = to_text(time.strftime(to_native(managed_str), time.localtime(os.path.getmtime(b_path))))
return temp_vars
def _escape_backslashes(data, jinja_env):
"""Double backslashes within jinja2 expressions
A user may enter something like this in a playbook::
debug:
msg: "Test Case 1\\3; {{ test1_name | regex_replace('^(.*)_name$', '\\1')}}"
The string inside of the {{ }} gets interpreted multiple times: first by yaml,
then by python, and finally by jinja2 as part of its variable. Because
it is processed by both python and jinja2, the backslash escaped
characters get unescaped twice. This means that we'd normally have to use
four backslashes to escape that. This is painful for playbook authors as
they have to remember different rules for inside vs outside of a jinja2
expression (The backslashes outside of the "{{ }}" only get processed by
yaml and python. So they only need to be escaped once). The following
code fixes this by automatically performing the extra quoting of
backslashes inside of a jinja2 expression.
"""
if '\\' in data and '{{' in data:
new_data = []
d2 = jinja_env.preprocess(data)
in_var = False
for token in jinja_env.lex(d2):
if token[1] == 'variable_begin':
in_var = True
new_data.append(token[2])
elif token[1] == 'variable_end':
in_var = False
new_data.append(token[2])
elif in_var and token[1] == 'string':
# Double backslashes only if we're inside of a jinja2 variable
new_data.append(token[2].replace('\\', '\\\\'))
else:
new_data.append(token[2])
data = ''.join(new_data)
return data
def is_possibly_template(data, jinja_env):
"""Determines if a string looks like a template, by seeing if it
contains a jinja2 start delimiter. Does not guarantee that the string
is actually a template.
This is different than ``is_template`` which is more strict.
This method may return ``True`` on a string that is not templatable.
Useful as a quick guard before passing a string for templating, when
you want to allow the templating engine to make the final
assessment, which may result in ``TemplateSyntaxError``.
"""
if isinstance(data, string_types):
for marker in (jinja_env.block_start_string, jinja_env.variable_start_string, jinja_env.comment_start_string):
if marker in data:
return True
return False
def is_template(data, jinja_env):
"""This function attempts to quickly detect whether a value is a jinja2
template. To do so, we look for the first 2 matching jinja2 tokens for
start and end delimiters.
"""
found = None
start = True
comment = False
d2 = jinja_env.preprocess(data)
# Quick check to see if this is remotely like a template before doing
# more expensive investigation.
if not is_possibly_template(d2, jinja_env):
return False
# This wraps a lot of code, but this is due to lex returning a generator
# so we may get an exception at any part of the loop
try:
for token in jinja_env.lex(d2):
if token[1] in JINJA2_BEGIN_TOKENS:
if start and token[1] == 'comment_begin':
# Comments can wrap other token types
comment = True
start = False
# Example: variable_end -> variable
found = token[1].split('_')[0]
elif token[1] in JINJA2_END_TOKENS:
if token[1].split('_')[0] == found:
return True
elif comment:
continue
return False
except TemplateSyntaxError:
return False
return False
def _count_newlines_from_end(in_str):
'''
Counts the number of newlines at the end of a string. This is used during
the jinja2 templating to ensure the count matches the input, since some newlines
may be thrown away during the templating.
'''
try:
i = len(in_str)
j = i - 1
while in_str[j] == '\n':
j -= 1
return i - 1 - j
except IndexError:
# Uncommon cases: zero length string and string containing only newlines
return i
def recursive_check_defined(item):
from jinja2.runtime import Undefined
if isinstance(item, MutableMapping):
for key in item:
recursive_check_defined(item[key])
elif isinstance(item, list):
for i in item:
recursive_check_defined(i)
else:
if isinstance(item, Undefined):
raise AnsibleFilterError("{0} is undefined".format(item))
def _is_rolled(value):
"""Helper method to determine if something is an unrolled generator,
iterator, or similar object
"""
return (
isinstance(value, Iterator) or
isinstance(value, MappingView) or
isinstance(value, RANGE_TYPE)
)
def _unroll_iterator(func):
"""Wrapper function, that intercepts the result of a filter
and auto unrolls a generator, so that users are not required to
explicitly use ``|list`` to unroll.
"""
def wrapper(*args, **kwargs):
ret = func(*args, **kwargs)
if _is_rolled(ret):
return list(ret)
return ret
return _update_wrapper(wrapper, func)
def _update_wrapper(wrapper, func):
# This code is duplicated from ``functools.update_wrapper`` from Py3.7.
# ``functools.update_wrapper`` was failing when the func was ``functools.partial``
for attr in ('__module__', '__name__', '__qualname__', '__doc__', '__annotations__'):
try:
value = getattr(func, attr)
except AttributeError:
pass
else:
setattr(wrapper, attr, value)
for attr in ('__dict__',):
getattr(wrapper, attr).update(getattr(func, attr, {}))
wrapper.__wrapped__ = func
return wrapper
def _wrap_native_text(func):
"""Wrapper function, that intercepts the result of a filter
and wraps it into NativeJinjaText which is then used
in ``ansible_native_concat`` to indicate that it is a text
which should not be passed into ``literal_eval``.
"""
def wrapper(*args, **kwargs):
ret = func(*args, **kwargs)
return NativeJinjaText(ret)
return _update_wrapper(wrapper, func)
class AnsibleUndefined(StrictUndefined):
'''
A custom Undefined class, which returns further Undefined objects on access,
rather than throwing an exception.
'''
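# Example: given u = AnsibleUndefined(name='x'), both u.foo and u['bar']
# return u itself, so the context of the first failure (the name 'x')
# survives chained attribute/item access and can be reported later.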
def __getattr__(self, name):
if name == '__UNSAFE__':
# AnsibleUndefined should never be assumed to be unsafe
# This prevents ``hasattr(val, '__UNSAFE__')`` from evaluating to ``True``
raise AttributeError(name)
# Return original Undefined object to preserve the first failure context
return self
def __getitem__(self, key):
# Return original Undefined object to preserve the first failure context
return self
def __repr__(self):
return 'AnsibleUndefined(hint={0!r}, obj={1!r}, name={2!r})'.format(
self._undefined_hint,
self._undefined_obj,
self._undefined_name
)
def __contains__(self, item):
# Return original Undefined object to preserve the first failure context
return self
class AnsibleContext(Context):
'''
A custom context, which intercepts resolve() calls and sets a flag
internally if any variable lookup returns an AnsibleUnsafe value. This
flag is checked post-templating, and (when set) will result in the
final templated result being wrapped in AnsibleUnsafe.
'''
def __init__(self, *args, **kwargs):
super(AnsibleContext, self).__init__(*args, **kwargs)
self.unsafe = False
def _is_unsafe(self, val):
'''
Our helper function, which will also recursively check dict and
list entries due to the fact that they may be repr'd and contain
a key or value which contains jinja2 syntax and would otherwise
lose the AnsibleUnsafe value.
'''
if isinstance(val, dict):
for key in val.keys():
if self._is_unsafe(val[key]):
return True
elif isinstance(val, list):
for item in val:
if self._is_unsafe(item):
return True
elif getattr(val, '__UNSAFE__', False) is True:
return True
return False
def _update_unsafe(self, val):
if val is not None and not self.unsafe and self._is_unsafe(val):
self.unsafe = True
def resolve(self, key):
'''
The intercepted resolve(), which uses the helper above to set the
internal flag whenever an unsafe variable value is returned.
'''
val = super(AnsibleContext, self).resolve(key)
self._update_unsafe(val)
return val
def resolve_or_missing(self, key):
val = super(AnsibleContext, self).resolve_or_missing(key)
self._update_unsafe(val)
return val
def get_all(self):
"""Return the complete context as a dict including the exported
variables. For optimization reasons this might not return an
actual copy so be careful with using it.
This is to prevent from running ``AnsibleJ2Vars`` through dict():
``dict(self.parent, **self.vars)``
In Ansible this means that ALL variables would be templated in the
process of re-creating the parent because ``AnsibleJ2Vars`` templates
each variable in its ``__getitem__`` method. Instead we re-create the
parent via ``AnsibleJ2Vars.add_locals`` that creates a new
``AnsibleJ2Vars`` copy without templating each variable.
This will prevent unnecessarily templating unused variables in cases
like setting a local variable and passing it to {% include %}
in a template.
Also see ``AnsibleJ2Template`` and
https://github.com/pallets/jinja/commit/d67f0fd4cc2a4af08f51f4466150d49da7798729
"""
if not self.vars:
return self.parent
if not self.parent:
return self.vars
if isinstance(self.parent, AnsibleJ2Vars):
return self.parent.add_locals(self.vars)
else:
# can this happen in Ansible?
return dict(self.parent, **self.vars)
class JinjaPluginIntercept(MutableMapping):
def __init__(self, delegatee, pluginloader, *args, **kwargs):
super(JinjaPluginIntercept, self).__init__(*args, **kwargs)
self._delegatee = delegatee
self._pluginloader = pluginloader
if self._pluginloader.class_name == 'FilterModule':
self._method_map_name = 'filters'
self._dirname = 'filter'
elif self._pluginloader.class_name == 'TestModule':
self._method_map_name = 'tests'
self._dirname = 'test'
self._collection_jinja_func_cache = {}
self._ansible_plugins_loaded = False
def _load_ansible_plugins(self):
if self._ansible_plugins_loaded:
return
for plugin in self._pluginloader.all():
try:
method_map = getattr(plugin, self._method_map_name)
self._delegatee.update(method_map())
except Exception as e:
display.warning("Skipping %s plugin %s as it seems to be invalid: %r" % (self._dirname, to_text(plugin._original_path), e))
continue
if self._pluginloader.class_name == 'FilterModule':
for plugin_name, plugin in self._delegatee.items():
if plugin_name in C.STRING_TYPE_FILTERS:
self._delegatee[plugin_name] = _wrap_native_text(plugin)
else:
self._delegatee[plugin_name] = _unroll_iterator(plugin)
self._ansible_plugins_loaded = True
# FUTURE: we can cache FQ filter/test calls for the entire duration of a run, since a given collection's impl's
# aren't supposed to change during a run
def __getitem__(self, key):
self._load_ansible_plugins()
try:
if not isinstance(key, string_types):
raise ValueError('key must be a string')
key = to_native(key)
if '.' not in key: # might be a built-in or legacy, check the delegatee dict first, then try for a last-chance base redirect
func = self._delegatee.get(key)
if func:
return func
# didn't find it in the pre-built Jinja env, assume it's a former builtin and follow the normal routing path
leaf_key = key
key = 'ansible.builtin.' + key
else:
leaf_key = key.split('.')[-1]
acr = AnsibleCollectionRef.try_parse_fqcr(key, self._dirname)
if not acr:
raise KeyError('invalid plugin name: {0}'.format(key))
ts = _get_collection_metadata(acr.collection)
# TODO: implement support for collection-backed redirect (currently only builtin)
# TODO: implement cycle detection (unified across collection redir as well)
routing_entry = ts.get('plugin_routing', {}).get(self._dirname, {}).get(leaf_key, {})
deprecation_entry = routing_entry.get('deprecation')
if deprecation_entry:
warning_text = deprecation_entry.get('warning_text')
removal_date = deprecation_entry.get('removal_date')
removal_version = deprecation_entry.get('removal_version')
if not warning_text:
warning_text = '{0} "{1}" is deprecated'.format(self._dirname, key)
display.deprecated(warning_text, version=removal_version, date=removal_date, collection_name=acr.collection)
tombstone_entry = routing_entry.get('tombstone')
if tombstone_entry:
warning_text = tombstone_entry.get('warning_text')
removal_date = tombstone_entry.get('removal_date')
removal_version = tombstone_entry.get('removal_version')
if not warning_text:
warning_text = '{0} "{1}" has been removed'.format(self._dirname, key)
exc_msg = display.get_deprecation_message(warning_text, version=removal_version, date=removal_date,
collection_name=acr.collection, removed=True)
raise AnsiblePluginRemovedError(exc_msg)
redirect_fqcr = routing_entry.get('redirect', None)
if redirect_fqcr:
acr = AnsibleCollectionRef.from_fqcr(ref=redirect_fqcr, ref_type=self._dirname)
display.vvv('redirecting {0} {1} to {2}.{3}'.format(self._dirname, key, acr.collection, acr.resource))
key = redirect_fqcr
# TODO: handle recursive forwarding (not necessary for builtin, but definitely for further collection redirs)
func = self._collection_jinja_func_cache.get(key)
if func:
return func
try:
pkg = import_module(acr.n_python_package_name)
except ImportError:
raise KeyError()
parent_prefix = acr.collection
if acr.subdirs:
parent_prefix = '{0}.{1}'.format(parent_prefix, acr.subdirs)
# TODO: implement collection-level redirect
for dummy, module_name, ispkg in pkgutil.iter_modules(pkg.__path__, prefix=parent_prefix + '.'):
if ispkg:
continue
try:
plugin_impl = self._pluginloader.get(module_name)
except Exception as e:
raise TemplateSyntaxError(to_native(e), 0)
method_map = getattr(plugin_impl, self._method_map_name)
try:
func_items = method_map().items()
except Exception as e:
display.warning(
"Skipping %s plugin %s as it seems to be invalid: %r" % (self._dirname, to_text(plugin_impl._original_path), e),
)
continue
for func_name, func in func_items:
fq_name = '.'.join((parent_prefix, func_name))
# FIXME: detect/warn on intra-collection function name collisions
if self._pluginloader.class_name == 'FilterModule':
if fq_name.startswith(('ansible.builtin.', 'ansible.legacy.')) and \
func_name in C.STRING_TYPE_FILTERS:
self._collection_jinja_func_cache[fq_name] = _wrap_native_text(func)
else:
self._collection_jinja_func_cache[fq_name] = _unroll_iterator(func)
else:
self._collection_jinja_func_cache[fq_name] = func
function_impl = self._collection_jinja_func_cache[key]
return function_impl
except AnsiblePluginRemovedError as apre:
raise TemplateSyntaxError(to_native(apre), 0)
except KeyError:
raise
except Exception as ex:
display.warning('an unexpected error occurred during Jinja2 environment setup: {0}'.format(to_native(ex)))
display.vvv('exception during Jinja2 environment setup: {0}'.format(format_exc()))
raise TemplateSyntaxError(to_native(ex), 0)
def __setitem__(self, key, value):
return self._delegatee.__setitem__(key, value)
def __delitem__(self, key):
raise NotImplementedError()
def __iter__(self):
# not strictly accurate since we're not counting dynamically-loaded values
return iter(self._delegatee)
def __len__(self):
# not strictly accurate since we're not counting dynamically-loaded values
return len(self._delegatee)
class AnsibleEnvironment(NativeEnvironment):
'''
Our custom environment, which simply allows us to override the class-level
values for the Template and Context classes used by jinja2 internally.
'''
context_class = AnsibleContext
template_class = AnsibleJ2Template
def __init__(self, *args, **kwargs):
super(AnsibleEnvironment, self).__init__(*args, **kwargs)
self.filters = JinjaPluginIntercept(self.filters, filter_loader)
self.tests = JinjaPluginIntercept(self.tests, test_loader)
class AnsibleNativeEnvironment(NativeEnvironment):
def __new__(cls):
raise AnsibleAssertionError(
'It is not allowed to create instances of AnsibleNativeEnvironment. '
'The class is kept for backwards compatibility of '
'Templar.copy_with_new_env, see the method for more information.'
)
class Templar:
'''
The main class for templating, with the main entry-point of template().
'''
def __init__(self, loader, shared_loader_obj=None, variables=None):
# NOTE shared_loader_obj is deprecated, ansible.plugins.loader is used
# directly. Keeping the arg for now in case 3rd party code "uses" it.
self._loader = loader
self._filters = None
self._tests = None
self._available_variables = {} if variables is None else variables
self._cached_result = {}
self._basedir = loader.get_basedir() if loader else './'
self._fail_on_undefined_errors = C.DEFAULT_UNDEFINED_VAR_BEHAVIOR
self.environment = AnsibleEnvironment(
trim_blocks=True,
undefined=AnsibleUndefined,
extensions=self._get_extensions(),
finalize=self._finalize,
loader=FileSystemLoader(self._basedir),
)
# jinja2 global is inconsistent across versions, this normalizes them
self.environment.globals['dict'] = dict
# Custom globals
self.environment.globals['lookup'] = self._lookup
self.environment.globals['query'] = self.environment.globals['q'] = self._query_lookup
self.environment.globals['now'] = self._now_datetime
self.environment.globals['finalize'] = self._finalize
self.environment.globals['undef'] = self._make_undefined
# the current rendering context under which the templar class is working
self.cur_context = None
# FIXME this regex should be re-compiled each time variable_start_string and variable_end_string are changed
self.SINGLE_VAR = re.compile(r"^%s\s*(\w*)\s*%s$" % (self.environment.variable_start_string, self.environment.variable_end_string))
self.jinja2_native = C.DEFAULT_JINJA2_NATIVE
def copy_with_new_env(self, environment_class=AnsibleEnvironment, **kwargs):
r"""Creates a new copy of Templar with a new environment.
Since Ansible 2.13 this method is being deprecated and is kept only
for backwards compatibility:
- AnsibleEnvironment is now based on NativeEnvironment
- AnsibleNativeEnvironment is replaced by what is effectively a dummy class
for purposes of this method, see below
- environment_class arg no longer controls what type of environment is created,
AnsibleEnvironment is used regardless of the value passed in environment_class
- environment_class is used to determine the value of jinja2_native of the newly
created Templar; if AnsibleNativeEnvironment is passed in environment_class
new_templar.jinja2_native is set to True, any other value will result in
new_templar.jinja2_native being set to False unless overridden by the value
passed in kwargs
:kwarg environment_class: See above.
:kwarg \*\*kwargs: Optional arguments for the new environment that override existing
environment attributes.
:returns: Copy of Templar with updated environment.
"""
display.deprecated(
'Templar.copy_with_new_env is no longer used within Ansible codebase and is being deprecated. '
'For temporarily creating a new environment with custom arguments use set_temporary_context context manager. '
'To control whether the Templar uses the jinja2_native functionality set/unset Templar.jinja2_native instance attribute.',
version='2.14', collection_name='ansible.builtin'
)
# We need to use __new__ to skip __init__, mainly not to create a new
# environment there only to override it below
new_env = object.__new__(AnsibleEnvironment)
new_env.__dict__.update(self.environment.__dict__)
new_templar = object.__new__(Templar)
new_templar.__dict__.update(self.__dict__)
new_templar.environment = new_env
new_templar.jinja2_native = environment_class is AnsibleNativeEnvironment
mapping = {
'available_variables': new_templar,
'jinja2_native': self,
'searchpath': new_env.loader,
}
for key, value in kwargs.items():
obj = mapping.get(key, new_env)
try:
if value is not None:
setattr(obj, key, value)
except AttributeError:
# Ignore invalid attrs
pass
return new_templar
def _get_extensions(self):
'''
Return jinja2 extensions to load.
If some extensions are set via jinja_extensions in ansible.cfg, we try
to load them with the jinja environment.
'''
jinja_exts = []
if C.DEFAULT_JINJA2_EXTENSIONS:
# make sure the configuration directive doesn't contain spaces
# and split extensions in an array
jinja_exts = C.DEFAULT_JINJA2_EXTENSIONS.replace(" ", "").split(',')
return jinja_exts
@property
def available_variables(self):
return self._available_variables
@available_variables.setter
def available_variables(self, variables):
'''
Sets the list of template variables this Templar instance will use
to template things, so we don't have to pass them around between
internal methods. We also clear the template cache here, as the variables
are being changed.
'''
if not isinstance(variables, Mapping):
raise AnsibleAssertionError("the type of 'variables' should be a Mapping but was a %s" % (type(variables)))
self._available_variables = variables
self._cached_result = {}
@contextmanager
def set_temporary_context(self, **kwargs):
"""Context manager used to set temporary templating context, without having to worry about resetting
original values afterward
Use a keyword that maps to the attr you are setting. Applies to ``self.environment`` by default, to
set context on another object, it must be in ``mapping``.
"""
mapping = {
'available_variables': self,
'jinja2_native': self,
'searchpath': self.environment.loader,
}
original = {}
for key, value in kwargs.items():
obj = mapping.get(key, self.environment)
try:
original[key] = getattr(obj, key)
if value is not None:
setattr(obj, key, value)
except AttributeError:
# Ignore invalid attrs
pass
yield
for key in original:
obj = mapping.get(key, self.environment)
setattr(obj, key, original[key])
def template(self, variable, convert_bare=False, preserve_trailing_newlines=True, escape_backslashes=True, fail_on_undefined=None, overrides=None,
convert_data=True, static_vars=None, cache=True, disable_lookups=False):
'''
Templates (possibly recursively) any given data as input. If convert_bare is
set to True, the given data will be wrapped as a jinja2 variable ('{{foo}}')
before being sent through the template engine.
'''
static_vars = [] if static_vars is None else static_vars
# Don't template unsafe variables, just return them.
if hasattr(variable, '__UNSAFE__'):
return variable
if fail_on_undefined is None:
fail_on_undefined = self._fail_on_undefined_errors
if convert_bare:
variable = self._convert_bare_variable(variable)
if isinstance(variable, string_types):
if not self.is_possibly_template(variable):
return variable
# Check to see if the string we are trying to render is just referencing a single
# var. In this case we don't want to accidentally change the type of the variable
# to a string by using the jinja template renderer. We just want to pass it.
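# For example, "{{ flag }}" where flag is the boolean True should come back
# as True (bool), not as the string "True"; see NON_TEMPLATED_TYPES above.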
only_one = self.SINGLE_VAR.match(variable)
if only_one:
var_name = only_one.group(1)
if var_name in self._available_variables:
resolved_val = self._available_variables[var_name]
if isinstance(resolved_val, NON_TEMPLATED_TYPES):
return resolved_val
elif resolved_val is None:
return C.DEFAULT_NULL_REPRESENTATION
# Using a cache in order to prevent template calls with already templated variables
sha1_hash = None
if cache:
variable_hash = sha1(text_type(variable).encode('utf-8'))
options_hash = sha1(
(
text_type(preserve_trailing_newlines) +
text_type(escape_backslashes) +
text_type(fail_on_undefined) +
text_type(overrides)
).encode('utf-8')
)
sha1_hash = variable_hash.hexdigest() + options_hash.hexdigest()
if sha1_hash in self._cached_result:
return self._cached_result[sha1_hash]
result = self.do_template(
variable,
preserve_trailing_newlines=preserve_trailing_newlines,
escape_backslashes=escape_backslashes,
fail_on_undefined=fail_on_undefined,
overrides=overrides,
disable_lookups=disable_lookups,
convert_data=convert_data,
)
# we only cache in the case where we have a single variable
# name, to make sure we're not putting things which may otherwise
# be dynamic in the cache (filters, lookups, etc.)
if cache and only_one:
self._cached_result[sha1_hash] = result
return result
elif is_sequence(variable):
return [self.template(
v,
preserve_trailing_newlines=preserve_trailing_newlines,
fail_on_undefined=fail_on_undefined,
overrides=overrides,
disable_lookups=disable_lookups,
) for v in variable]
elif isinstance(variable, Mapping):
d = {}
# we don't use iteritems() here to avoid problems if the underlying dict
# changes sizes due to the templating, which can happen with hostvars
for k in variable.keys():
if k not in static_vars:
d[k] = self.template(
variable[k],
preserve_trailing_newlines=preserve_trailing_newlines,
fail_on_undefined=fail_on_undefined,
overrides=overrides,
disable_lookups=disable_lookups,
)
else:
d[k] = variable[k]
return d
else:
return variable
def is_template(self, data):
'''lets us know if data has a template'''
if isinstance(data, string_types):
return is_template(data, self.environment)
elif isinstance(data, (list, tuple)):
for v in data:
if self.is_template(v):
return True
elif isinstance(data, dict):
for k in data:
if self.is_template(k) or self.is_template(data[k]):
return True
return False
templatable = is_template
def is_possibly_template(self, data):
return is_possibly_template(data, self.environment)
def _convert_bare_variable(self, variable):
'''
Wraps a bare string, which may have an attribute portion (ie. foo.bar)
in jinja2 variable braces so that it is evaluated properly.
'''
if isinstance(variable, string_types):
contains_filters = "|" in variable
first_part = variable.split("|")[0].split(".")[0].split("[")[0]
if (contains_filters or first_part in self._available_variables) and self.environment.variable_start_string not in variable:
return "%s%s%s" % (self.environment.variable_start_string, variable, self.environment.variable_end_string)
# the variable didn't meet the conditions to be converted,
# so just return it as-is
return variable
def _finalize(self, thing):
'''
A custom finalize method for jinja2, which prevents None from being returned. This
avoids a string of ``"None"`` as ``None`` has no importance in YAML.
If using ANSIBLE_JINJA2_NATIVE we bypass this and return the actual value always
'''
if _is_rolled(thing):
# Auto unroll a generator, so that users are not required to
# explicitly use ``|list`` to unroll
# This only affects the scenario where the final result of templating
# is a generator, and not where a filter creates a generator in the middle
# of a template. See ``_unroll_iterator`` for the other case. This is probably
# unnecessary
return list(thing)
if self.jinja2_native:
return thing
return thing if thing is not None else ''
def _fail_lookup(self, name, *args, **kwargs):
raise AnsibleError("The lookup `%s` was found, however lookups were disabled from templating" % name)
def _now_datetime(self, utc=False, fmt=None):
'''jinja2 global function to return current datetime, potentially formatted via strftime'''
if utc:
now = datetime.datetime.utcnow()
else:
now = datetime.datetime.now()
if fmt:
return now.strftime(fmt)
return now
def _query_lookup(self, name, *args, **kwargs):
''' wrapper for lookup, force wantlist true'''
kwargs['wantlist'] = True
return self._lookup(name, *args, **kwargs)
def _lookup(self, name, *args, **kwargs):
instance = lookup_loader.get(name, loader=self._loader, templar=self)
if instance is None:
raise AnsibleError("lookup plugin (%s) not found" % name)
wantlist = kwargs.pop('wantlist', False)
allow_unsafe = kwargs.pop('allow_unsafe', C.DEFAULT_ALLOW_UNSAFE_LOOKUPS)
errors = kwargs.pop('errors', 'strict')
loop_terms = listify_lookup_plugin_terms(terms=args, templar=self, loader=self._loader, fail_on_undefined=True, convert_bare=False)
# safely catch run failures per #5059
try:
ran = instance.run(loop_terms, variables=self._available_variables, **kwargs)
except (AnsibleUndefinedVariable, UndefinedError) as e:
raise AnsibleUndefinedVariable(e)
except AnsibleOptionsError as e:
# invalid options given to lookup, just reraise
raise e
except AnsibleLookupError as e:
# lookup handled error but still decided to bail
msg = 'Lookup failed but the error is being ignored: %s' % to_native(e)
if errors == 'warn':
display.warning(msg)
elif errors == 'ignore':
display.display(msg, log_only=True)
else:
raise e
return [] if wantlist else None
except Exception as e:
# errors not handled by lookup
msg = u"An unhandled exception occurred while running the lookup plugin '%s'. Error was a %s, original message: %s" % \
(name, type(e), to_text(e))
if errors == 'warn':
display.warning(msg)
elif errors == 'ignore':
display.display(msg, log_only=True)
else:
display.vvv('exception during Jinja2 execution: {0}'.format(format_exc()))
raise AnsibleError(to_native(msg), orig_exc=e)
return [] if wantlist else None
if ran and allow_unsafe is False:
if self.cur_context:
self.cur_context.unsafe = True
if wantlist:
return wrap_var(ran)
try:
if isinstance(ran[0], NativeJinjaText):
ran = wrap_var(NativeJinjaText(",".join(ran)))
else:
ran = wrap_var(",".join(ran))
except TypeError:
# Lookup Plugins should always return lists. Throw an error if that's not
# the case:
if not isinstance(ran, Sequence):
raise AnsibleError("The lookup plugin '%s' did not return a list."
% name)
# The TypeError we can recover from is when the value *inside* of the list
# is not a string
if len(ran) == 1:
ran = wrap_var(ran[0])
else:
ran = wrap_var(ran)
return ran
def _make_undefined(self, hint=None):
from jinja2.runtime import Undefined
if hint is None or isinstance(hint, Undefined) or hint == '':
hint = "Mandatory variable has not been overridden"
return AnsibleUndefined(hint)
def do_template(self, data, preserve_trailing_newlines=True, escape_backslashes=True, fail_on_undefined=None, overrides=None, disable_lookups=False,
convert_data=False):
if self.jinja2_native and not isinstance(data, string_types):
return data
# For preserving the number of input newlines in the output (used
# later in this method)
data_newlines = _count_newlines_from_end(data)
if fail_on_undefined is None:
fail_on_undefined = self._fail_on_undefined_errors
has_template_overrides = data.startswith(JINJA2_OVERRIDE)
try:
# NOTE Creating an overlay that lives only inside do_template means that overrides are not applied
# when templating nested variables in AnsibleJ2Vars where Templar.environment is used, not the overlay.
# This is historic behavior that is kept for backwards compatibility.
if overrides:
myenv = self.environment.overlay(overrides)
elif has_template_overrides:
myenv = self.environment.overlay()
else:
myenv = self.environment
# Get jinja env overrides from template
if has_template_overrides:
eol = data.find('\n')
line = data[len(JINJA2_OVERRIDE):eol]
data = data[eol + 1:]
for pair in line.split(','):
(key, val) = pair.split(':')
key = key.strip()
setattr(myenv, key, ast.literal_eval(val.strip()))
if escape_backslashes:
# Allow users to specify backslashes in playbooks as "\\" instead of as "\\\\".
data = _escape_backslashes(data, myenv)
try:
t = myenv.from_string(data)
except TemplateSyntaxError as e:
raise AnsibleError("template error while templating string: %s. String: %s" % (to_native(e), to_native(data)))
except Exception as e:
if 'recursion' in to_native(e):
raise AnsibleError("recursive loop detected in template string: %s" % to_native(data))
else:
return data
if disable_lookups:
t.globals['query'] = t.globals['q'] = t.globals['lookup'] = self._fail_lookup
jvars = AnsibleJ2Vars(self, t.globals)
self.cur_context = new_context = t.new_context(jvars, shared=True)
rf = t.root_render_func(new_context)
try:
if self.jinja2_native:
res = ansible_native_concat(rf)
else:
res = ansible_concat(rf, convert_data, myenv.variable_start_string)
unsafe = getattr(new_context, 'unsafe', False)
if unsafe:
res = wrap_var(res)
except TypeError as te:
if 'AnsibleUndefined' in to_native(te):
errmsg = "Unable to look up a name or access an attribute in template string (%s).\n" % to_native(data)
errmsg += "Make sure your variable name does not contain invalid characters like '-': %s" % to_native(te)
raise AnsibleUndefinedVariable(errmsg)
else:
display.debug("failing because of a type error, template data is: %s" % to_text(data))
raise AnsibleError("Unexpected templating type error occurred on (%s): %s" % (to_native(data), to_native(te)))
if isinstance(res, string_types) and preserve_trailing_newlines:
# The low level calls above do not preserve the newline
# characters at the end of the input data, so we calculate the
# difference in newlines and append them
# to the resulting output for parity
#
# Using Environment's keep_trailing_newline instead would
# result in change in behavior when trailing newlines
# would be kept also for included templates, for example:
# "Hello {% include 'world.txt' %}!" would render as
# "Hello world\n!\n" instead of "Hello world!\n".
res_newlines = _count_newlines_from_end(res)
if data_newlines > res_newlines:
res += self.environment.newline_sequence * (data_newlines - res_newlines)
if unsafe:
res = wrap_var(res)
return res
except (UndefinedError, AnsibleUndefinedVariable) as e:
if fail_on_undefined:
raise AnsibleUndefinedVariable(e)
else:
display.debug("Ignoring undefined failure: %s" % to_text(e))
return data
# for backwards compatibility in case anyone is using old private method directly
_do_template = do_template
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,582 |
broken cowsay leads to unusable ansible
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Unbelievable, but true: `cowsay` is broken on my Ubuntu 20.04 `Encode.c: loadable library and perl binaries are mismatched (got handshake key 0xdb00080, needed 0xcd00080)`
This leads to missing lines in the output (see below). I tried hard to search on popular search engines for a fix, without luck.
The main point is that this issue is really hard to resolve for less experienced users.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
`ansible-playbook`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
config file = None
configured module search path = [u'/home/berntm/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.18 (default, Aug 4 2020, 11:16:42) [GCC 9.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
That's the odd thing. I tried to run `alias cowsay="exit 1"` and also replaced /usr/bin/cowsay with a script that just does `exit 1`. But in both cases the lines are still there.
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
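For what it's worth, here is a minimal standalone sketch (not Ansible itself, and the path is just one of the candidates Ansible checks) of why the alias cannot take effect: the probe passes an argv list to `subprocess.Popen` without a shell, so shell aliases never apply:
```python
import subprocess

# Probe cowsay the same way Ansible does at startup (argv list, no shell).
proc = subprocess.Popen(["/usr/bin/cowsay", "-l"],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()

# A cowsay that runs but fails (nonzero exit, error text on stderr) raises
# no exception here, so nothing flags it as unusable.
print(proc.returncode, err.decode(errors="replace"))
```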
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
```
PLAY [localhost] **************************************************************************
TASK [Gathering Facts] ********************************************************************
ok: [localhost]
...
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
ok: [localhost]
...
```
|
https://github.com/ansible/ansible/issues/72582
|
https://github.com/ansible/ansible/pull/76326
|
60d094db730863cbab15bb920be8986175ec73a9
|
472028c869669dc05abd110ba8a335326c13ba8c
| 2020-11-11T11:17:17Z |
python
| 2021-12-07T14:11:06Z |
changelogs/fragments/72592-catch-broken-cowsay.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,582 |
broken cowsay leads to unusable ansible
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Unbelievable, but true: `cowsay` is broken on my Ubuntu 20.04 `Encode.c: loadable library and perl binaries are mismatched (got handshake key 0xdb00080, needed 0xcd00080)`
This leads to missing lines in the output (see below). I tried hard to search on popular search engines for a fix, without luck.
The main point is that this issue is really hard to resolve for less experienced users.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
`ansible-playbook`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
config file = None
configured module search path = [u'/home/berntm/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.18 (default, Aug 4 2020, 11:16:42) [GCC 9.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
That's the odd thing. I tried to run `alias cowsay="exit 1"` and also replaced /usr/bin/cowsay with a script that just does `exit 1`. But in both cases the lines are still there.
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
```
PLAY [localhost] **************************************************************************
TASK [Gathering Facts] ********************************************************************
ok: [localhost]
...
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
ok: [localhost]
...
```
|
https://github.com/ansible/ansible/issues/72582
|
https://github.com/ansible/ansible/pull/76326
|
60d094db730863cbab15bb920be8986175ec73a9
|
472028c869669dc05abd110ba8a335326c13ba8c
| 2020-11-11T11:17:17Z |
python
| 2021-12-07T14:11:06Z |
lib/ansible/utils/display.py
|
# (c) 2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import ctypes.util
import errno
import fcntl
import getpass
import locale
import logging
import os
import random
import subprocess
import sys
import textwrap
import time
from struct import unpack, pack
from termios import TIOCGWINSZ
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleAssertionError
from ansible.module_utils._text import to_bytes, to_text
from ansible.module_utils.six import text_type
from ansible.utils.color import stringc
from ansible.utils.singleton import Singleton
from ansible.utils.unsafe_proxy import wrap_var
_LIBC = ctypes.cdll.LoadLibrary(ctypes.util.find_library('c'))
# Set argtypes, to avoid segfault if the wrong type is provided,
# restype is assumed to be c_int
_LIBC.wcwidth.argtypes = (ctypes.c_wchar,)
_LIBC.wcswidth.argtypes = (ctypes.c_wchar_p, ctypes.c_int)
# Max for c_int
_MAX_INT = 2 ** (ctypes.sizeof(ctypes.c_int) * 8 - 1) - 1
_LOCALE_INITIALIZED = False
_LOCALE_INITIALIZATION_ERR = None
def initialize_locale():
"""Set the locale to the users default setting
and set ``_LOCALE_INITIALIZED`` to indicate whether
``get_text_width`` may run into trouble
"""
global _LOCALE_INITIALIZED, _LOCALE_INITIALIZATION_ERR
if _LOCALE_INITIALIZED is False:
try:
locale.setlocale(locale.LC_ALL, '')
except locale.Error as e:
_LOCALE_INITIALIZATION_ERR = e
else:
_LOCALE_INITIALIZED = True
def get_text_width(text):
"""Function that utilizes ``wcswidth`` or ``wcwidth`` to determine the
number of columns used to display a text string.
We try first with ``wcswidth``, and fallback to iterating each
character and using wcwidth individually, falling back to a value of 0
for non-printable wide characters
On Py2, this depends on ``locale.setlocale(locale.LC_ALL, '')``,
that in the case of Ansible is done in ``bin/ansible``
"""
if not isinstance(text, text_type):
raise TypeError('get_text_width requires text, not %s' % type(text))
if _LOCALE_INITIALIZATION_ERR:
Display().warning(
'An error occurred while calling ansible.utils.display.initialize_locale '
'(%s). This may result in incorrectly calculated text widths that can '
'cause Display to print incorrect line lengths' % _LOCALE_INITIALIZATION_ERR
)
elif not _LOCALE_INITIALIZED:
Display().warning(
'ansible.utils.display.initialize_locale has not been called, '
'this may result in incorrectly calculated text widths that can '
'cause Display to print incorrect line lengths'
)
try:
width = _LIBC.wcswidth(text, _MAX_INT)
except ctypes.ArgumentError:
width = -1
if width != -1:
return width
width = 0
counter = 0
for c in text:
counter += 1
if c in (u'\x08', u'\x7f', u'\x94', u'\x1b'):
# A few characters result in a subtraction of length:
# BS, DEL, CCH, ESC
# ESC is slightly different: while it is non-printable itself, it is
# part of an escape sequence, which results in a single non-printable
# length
width -= 1
counter -= 1
continue
try:
w = _LIBC.wcwidth(c)
except ctypes.ArgumentError:
w = -1
if w == -1:
# -1 signifies a non-printable character
# use 0 here as a best effort
w = 0
width += w
if width == 0 and counter and not _LOCALE_INITIALIZED:
raise EnvironmentError(
'ansible.utils.display.initialize_locale has not been called, '
'and get_text_width could not calculate text width of %r' % text
)
# It doesn't make sense to have a negative printable width
return width if width >= 0 else 0
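# For example, get_text_width(u'abc') is 3, while a double-width CJK
# character such as u'\u3042' typically occupies 2 columns (given working
# locale/wcwidth support).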
class FilterBlackList(logging.Filter):
def __init__(self, blacklist):
self.blacklist = [logging.Filter(name) for name in blacklist]
def filter(self, record):
return not any(f.filter(record) for f in self.blacklist)
class FilterUserInjector(logging.Filter):
"""
This is a filter which injects the current user as the 'user' attribute on each record. We need to add this filter
to all logger handlers so that 3rd party libraries won't print an exception due to user not being defined.
"""
try:
username = getpass.getuser()
except KeyError:
# people like to make containers w/o actual valid passwd/shadow and use host uids
username = 'uid=%s' % os.getuid()
def filter(self, record):
record.user = FilterUserInjector.username
return True
logger = None
# TODO: make this a callback event instead
if getattr(C, 'DEFAULT_LOG_PATH'):
path = C.DEFAULT_LOG_PATH
if path and (os.path.exists(path) and os.access(path, os.W_OK)) or os.access(os.path.dirname(path), os.W_OK):
# NOTE: level is kept at INFO to avoid security disclosures caused by certain libraries when using DEBUG
logging.basicConfig(filename=path, level=logging.INFO, # DO NOT set to logging.DEBUG
format='%(asctime)s p=%(process)d u=%(user)s n=%(name)s | %(message)s')
logger = logging.getLogger('ansible')
for handler in logging.root.handlers:
handler.addFilter(FilterBlackList(getattr(C, 'DEFAULT_LOG_FILTER', [])))
handler.addFilter(FilterUserInjector())
else:
print("[WARNING]: log file at %s is not writeable and we cannot create it, aborting\n" % path, file=sys.stderr)
# map color to log levels
color_to_log_level = {C.COLOR_ERROR: logging.ERROR,
C.COLOR_WARN: logging.WARNING,
C.COLOR_OK: logging.INFO,
C.COLOR_SKIP: logging.WARNING,
C.COLOR_UNREACHABLE: logging.ERROR,
C.COLOR_DEBUG: logging.DEBUG,
C.COLOR_CHANGED: logging.INFO,
C.COLOR_DEPRECATE: logging.WARNING,
C.COLOR_VERBOSE: logging.INFO}
b_COW_PATHS = (
b"/usr/bin/cowsay",
b"/usr/games/cowsay",
b"/usr/local/bin/cowsay", # BSD path for cowsay
b"/opt/local/bin/cowsay", # MacPorts path for cowsay
)
class Display(metaclass=Singleton):
def __init__(self, verbosity=0):
self.columns = None
self.verbosity = verbosity
# list of all deprecation messages to prevent duplicate display
self._deprecations = {}
self._warns = {}
self._errors = {}
self.b_cowsay = None
self.noncow = C.ANSIBLE_COW_SELECTION
self.set_cowsay_info()
if self.b_cowsay:
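# NOTE: the probe below only treats cowsay as broken when Popen itself
# raises; a binary that runs but exits nonzero (the situation described
# in this issue) is not detected here.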
try:
cmd = subprocess.Popen([self.b_cowsay, "-l"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(out, err) = cmd.communicate()
self.cows_available = {to_text(c) for c in out.split()}
if C.ANSIBLE_COW_ACCEPTLIST and any(C.ANSIBLE_COW_ACCEPTLIST):
self.cows_available = set(C.ANSIBLE_COW_ACCEPTLIST).intersection(self.cows_available)
except Exception:
# could not execute cowsay for some reason
self.b_cowsay = False
self._set_column_width()
def set_cowsay_info(self):
if C.ANSIBLE_NOCOWS:
return
if C.ANSIBLE_COW_PATH:
self.b_cowsay = C.ANSIBLE_COW_PATH
else:
for b_cow_path in b_COW_PATHS:
if os.path.exists(b_cow_path):
self.b_cowsay = b_cow_path
def display(self, msg, color=None, stderr=False, screen_only=False, log_only=False, newline=True):
""" Display a message to the user
Note: msg *must* be a unicode string to prevent UnicodeError tracebacks.
"""
nocolor = msg
if not log_only:
has_newline = msg.endswith(u'\n')
if has_newline:
msg2 = msg[:-1]
else:
msg2 = msg
if color:
msg2 = stringc(msg2, color)
if has_newline or newline:
msg2 = msg2 + u'\n'
msg2 = to_bytes(msg2, encoding=self._output_encoding(stderr=stderr))
if sys.version_info >= (3,):
# Convert back to text string on python3
# We first convert to a byte string so that we get rid of
# characters that are invalid in the user's locale
msg2 = to_text(msg2, self._output_encoding(stderr=stderr), errors='replace')
# Note: After Display() class is refactored need to update the log capture
# code in 'bin/ansible-connection' (and other relevant places).
if not stderr:
fileobj = sys.stdout
else:
fileobj = sys.stderr
fileobj.write(msg2)
try:
fileobj.flush()
except IOError as e:
# Ignore EPIPE in case fileobj has been prematurely closed, eg.
# when piping to "head -n1"
if e.errno != errno.EPIPE:
raise
if logger and not screen_only:
# We first convert to a byte string so that we get rid of
# color and characters that are invalid in the user's locale
msg2 = to_bytes(nocolor.lstrip(u'\n'))
if sys.version_info >= (3,):
# Convert back to text string on python3
msg2 = to_text(msg2, self._output_encoding(stderr=stderr))
lvl = logging.INFO
if color:
# set logger level based on color (not great)
try:
lvl = color_to_log_level[color]
except KeyError:
# this should not happen, but JIC
raise AnsibleAssertionError('Invalid color supplied to display: %s' % color)
# actually log
logger.log(lvl, msg2)
def v(self, msg, host=None):
return self.verbose(msg, host=host, caplevel=0)
def vv(self, msg, host=None):
return self.verbose(msg, host=host, caplevel=1)
def vvv(self, msg, host=None):
return self.verbose(msg, host=host, caplevel=2)
def vvvv(self, msg, host=None):
return self.verbose(msg, host=host, caplevel=3)
def vvvvv(self, msg, host=None):
return self.verbose(msg, host=host, caplevel=4)
def vvvvvv(self, msg, host=None):
return self.verbose(msg, host=host, caplevel=5)
def debug(self, msg, host=None):
if C.DEFAULT_DEBUG:
if host is None:
self.display("%6d %0.5f: %s" % (os.getpid(), time.time(), msg), color=C.COLOR_DEBUG)
else:
self.display("%6d %0.5f [%s]: %s" % (os.getpid(), time.time(), host, msg), color=C.COLOR_DEBUG)
def verbose(self, msg, host=None, caplevel=2):
to_stderr = C.VERBOSE_TO_STDERR
if self.verbosity > caplevel:
if host is None:
self.display(msg, color=C.COLOR_VERBOSE, stderr=to_stderr)
else:
self.display("<%s> %s" % (host, msg), color=C.COLOR_VERBOSE, stderr=to_stderr)
def get_deprecation_message(self, msg, version=None, removed=False, date=None, collection_name=None):
''' used to print out a deprecation message.'''
msg = msg.strip()
if msg and msg[-1] not in ['!', '?', '.']:
msg += '.'
if collection_name == 'ansible.builtin':
collection_name = 'ansible-core'
if removed:
header = '[DEPRECATED]: {0}'.format(msg)
removal_fragment = 'This feature was removed'
help_text = 'Please update your playbooks.'
else:
header = '[DEPRECATION WARNING]: {0}'.format(msg)
removal_fragment = 'This feature will be removed'
# FUTURE: make this a standalone warning so it only shows up once?
help_text = 'Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.'
if collection_name:
from_fragment = 'from {0}'.format(collection_name)
else:
from_fragment = ''
if date:
when = 'in a release after {0}.'.format(date)
elif version:
when = 'in version {0}.'.format(version)
else:
when = 'in a future release.'
message_text = ' '.join(f for f in [header, removal_fragment, from_fragment, when, help_text] if f)
return message_text
def deprecated(self, msg, version=None, removed=False, date=None, collection_name=None):
if not removed and not C.DEPRECATION_WARNINGS:
return
message_text = self.get_deprecation_message(msg, version=version, removed=removed, date=date, collection_name=collection_name)
if removed:
raise AnsibleError(message_text)
wrapped = textwrap.wrap(message_text, self.columns, drop_whitespace=False)
message_text = "\n".join(wrapped) + "\n"
if message_text not in self._deprecations:
self.display(message_text.strip(), color=C.COLOR_DEPRECATE, stderr=True)
self._deprecations[message_text] = 1
def warning(self, msg, formatted=False):
if not formatted:
new_msg = "[WARNING]: %s" % msg
wrapped = textwrap.wrap(new_msg, self.columns)
new_msg = "\n".join(wrapped) + "\n"
else:
new_msg = "\n[WARNING]: \n%s" % msg
if new_msg not in self._warns:
self.display(new_msg, color=C.COLOR_WARN, stderr=True)
self._warns[new_msg] = 1
def system_warning(self, msg):
if C.SYSTEM_WARNINGS:
self.warning(msg)
def banner(self, msg, color=None, cows=True):
'''
Prints a header-looking line with cowsay or stars with length depending on terminal width (3 minimum)
'''
msg = to_text(msg)
if self.b_cowsay and cows:
try:
self.banner_cowsay(msg)
return
except OSError:
self.warning("somebody cleverly deleted cowsay or something during the PB run. heh.")
msg = msg.strip()
try:
star_len = self.columns - get_text_width(msg)
except EnvironmentError:
star_len = self.columns - len(msg)
if star_len <= 3:
star_len = 3
stars = u"*" * star_len
self.display(u"\n%s %s" % (msg, stars), color=color)
def banner_cowsay(self, msg, color=None):
if u": [" in msg:
msg = msg.replace(u"[", u"")
if msg.endswith(u"]"):
msg = msg[:-1]
runcmd = [self.b_cowsay, b"-W", b"60"]
if self.noncow:
thecow = self.noncow
if thecow == 'random':
thecow = random.choice(list(self.cows_available))
runcmd.append(b'-f')
runcmd.append(to_bytes(thecow))
runcmd.append(to_bytes(msg))
cmd = subprocess.Popen(runcmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(out, err) = cmd.communicate()
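# NOTE: cowsay's exit status is not checked here, so a broken cowsay
# yields an empty `out` and the banner line is silently lost.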
self.display(u"%s\n" % to_text(out), color=color)
def error(self, msg, wrap_text=True):
if wrap_text:
new_msg = u"\n[ERROR]: %s" % msg
wrapped = textwrap.wrap(new_msg, self.columns)
new_msg = u"\n".join(wrapped) + u"\n"
else:
new_msg = u"ERROR! %s" % msg
if new_msg not in self._errors:
self.display(new_msg, color=C.COLOR_ERROR, stderr=True)
self._errors[new_msg] = 1
@staticmethod
def prompt(msg, private=False):
prompt_string = to_bytes(msg, encoding=Display._output_encoding())
if sys.version_info >= (3,):
# Convert back into text on python3. We do this double conversion
# to get rid of characters that are illegal in the user's locale
prompt_string = to_text(prompt_string)
if private:
return getpass.getpass(prompt_string)
else:
return input(prompt_string)
def do_var_prompt(self, varname, private=True, prompt=None, encrypt=None, confirm=False, salt_size=None, salt=None, default=None, unsafe=None):
result = None
if sys.__stdin__.isatty():
do_prompt = self.prompt
if prompt and default is not None:
msg = "%s [%s]: " % (prompt, default)
elif prompt:
msg = "%s: " % prompt
else:
msg = 'input for %s: ' % varname
if confirm:
while True:
result = do_prompt(msg, private)
second = do_prompt("confirm " + msg, private)
if result == second:
break
self.display("***** VALUES ENTERED DO NOT MATCH ****")
else:
result = do_prompt(msg, private)
else:
result = None
self.warning("Not prompting as we are not in interactive mode")
# if result is false and default is not None
if not result and default is not None:
result = default
if encrypt:
# Circular import because encrypt needs a display class
from ansible.utils.encrypt import do_encrypt
result = do_encrypt(result, encrypt, salt_size, salt)
# handle utf-8 chars
result = to_text(result, errors='surrogate_or_strict')
if unsafe:
result = wrap_var(result)
return result
@staticmethod
def _output_encoding(stderr=False):
encoding = locale.getpreferredencoding()
# https://bugs.python.org/issue6202
# Python2 hardcodes an obsolete value on Mac. Use MacOSX defaults
# instead.
if encoding in ('mac-roman',):
encoding = 'utf-8'
return encoding
def _set_column_width(self):
if os.isatty(1):
tty_size = unpack('HHHH', fcntl.ioctl(1, TIOCGWINSZ, pack('HHHH', 0, 0, 0, 0)))[1]
else:
tty_size = 0
self.columns = max(79, tty_size - 1)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,582 |
broken cowsay leads to unusable ansible
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Unbelievable, but true: `cowsay` is broken on my Ubuntu 20.04 (`Encode.c: loadable library and perl binaries are mismatched (got handshake key 0xdb00080, needed 0xcd00080)`).
This leads to missing lines in the output (see below). I searched hard on popular search engines for a fix, without luck.
The main point is that this issue is really hard to resolve for less experienced users.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
`ansible-playbook`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
config file = None
configured module search path = [u'/home/berntm/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.18 (default, Aug 4 2020, 11:16:42) [GCC 9.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
That's the odd thing. I tried to run `alias cowsay="exit 1"` and also replaced /usr/bin/cowsay with a script that just runs `exit 1`, but in both cases the lines are still there.
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
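A hypothetical unit-test-style reproduction (editorial sketch: it assumes that mocking `subprocess.Popen` in `ansible.utils.display` is representative of the broken binary, and it forces `b_cowsay` by hand since the probe above may not find cowsay):
```python
from unittest.mock import MagicMock, patch

from ansible.utils.display import Display

# emulate the mismatched-perl cowsay: no output, non-zero exit status
broken = MagicMock()
broken.communicate.return_value = (b"", b"Encode.c: loadable library and perl binaries are mismatched")
broken.returncode = 1

with patch("ansible.utils.display.subprocess.Popen", return_value=broken):
    display = Display()
    display.b_cowsay = b"/usr/bin/cowsay"  # pretend the binary was located
    display.banner("PLAY [localhost]")  # pre-fix: only a blank line is printed
```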
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
```
PLAY [localhost] **************************************************************************
TASK [Gathering Facts] ********************************************************************
ok: [localhost]
...
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
ok: [localhost]
...
```
|
https://github.com/ansible/ansible/issues/72582
|
https://github.com/ansible/ansible/pull/76326
|
60d094db730863cbab15bb920be8986175ec73a9
|
472028c869669dc05abd110ba8a335326c13ba8c
| 2020-11-11T11:17:17Z |
python
| 2021-12-07T14:11:06Z |
test/units/utils/display/test_broken_cowsay.py
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,473 |
Ansible-test import fails - one or more Python packages bundled by this ansible-core
|
### Summary
The ansible-test import sanity check fails in our CI with a warning that one or more Python packages bundled by this ansible-core distribution were already loaded.
### Issue Type
Bug Report
### Component Name
ansible-test
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.9/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.9.8 (main, Nov 8 2021, 00:00:00) [GCC 11.2.1 20211019 (Red Hat 11.2.1-6)]
jinja version = 2.11.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
container centos:stream9
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Expected Results
Pass the test
### Actual Results
```console
https://github.com/oVirt/ovirt-ansible-collection/runs/4391847035?check_suite_focus=true
WARNING: The directory '/github/home/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you should use sudo's -H flag.
Collecting cryptography<3.4
Downloading cryptography-3.3.2-cp36-abi3-manylinux2010_x86_64.whl (2.6 MB)
Collecting six>=1.4.1
Downloading six-1.16.0-py2.py3-none-any.whl (11 kB)
Collecting cffi>=1.12
Downloading cffi-1.15.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (444 kB)
Collecting pycparser
Downloading pycparser-2.21-py2.py3-none-any.whl (118 kB)
Installing collected packages: pycparser, six, cffi, cryptography
Successfully installed cffi-1.15.0 cryptography-3.3.2 pycparser-2.21 six-1.16.0
WARNING: The directory '/github/home/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you should use sudo's -H flag.
Collecting jinja2
Downloading Jinja2-3.0.3-py3-none-any.whl (133 kB)
Collecting PyYAML
Downloading PyYAML-6.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (661 kB)
Collecting packaging
Downloading packaging-21.3-py3-none-any.whl (40 kB)
Collecting resolvelib<0.6.0,>=0.5.3
Downloading resolvelib-0.5.4-py2.py3-none-any.whl (12 kB)
Collecting MarkupSafe>=2.0
Downloading MarkupSafe-2.0.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (30 kB)
Collecting pyparsing!=3.0.5,>=2.0.2
Downloading pyparsing-3.0.6-py3-none-any.whl (97 kB)
Installing collected packages: pyparsing, MarkupSafe, resolvelib, PyYAML, packaging, jinja2
Successfully installed MarkupSafe-2.0.1 PyYAML-6.0 jinja2-3.0.3 packaging-21.3 pyparsing-3.0.6 resolvelib-0.5.4
ERROR: Command "importer.py" returned exit status 10.
>>> Standard Error
/tmp/ansible-test-ibrisv3_/ansible/_vendor/__init__.py:42: UserWarning: One or more Python packages bundled by this ansible-core distribution were already loaded (packaging). This may result in undefined behavior.
warnings.warn('One or more Python packages bundled by this ansible-core distribution were already '
/tmp/ansible-test-ibrisv3_/ansible/_vendor/__init__.py:42: UserWarning: One or more Python packages bundled by this ansible-core distribution were already loaded (packaging). This may result in undefined behavior.
warnings.warn('One or more Python packages bundled by this ansible-core distribution were already '
/tmp/ansible-test-ibrisv3_/ansible/_vendor/__init__.py:42: UserWarning: One or more Python packages bundled by this ansible-core distribution were already loaded (packaging). This may result in undefined behavior.
warnings.warn('One or more Python packages bundled by this ansible-core distribution were already '
/tmp/ansible-test-ibrisv3_/ansible/_vendor/__init__.py:42: UserWarning: One or more Python packages bundled by this ansible-core distribution were already loaded (packaging). This may result in undefined behavior.
warnings.warn('One or more Python packages bundled by this ansible-core distribution were already '
/tmp/ansible-test-ibrisv3_/ansible/_vendor/__init__.py:42: UserWarning: One or more Python packages bundled by this ansible-core distribution were already loaded (packaging). This may result in undefined behavior.
warnings.warn('One or more Python packages bundled by this ansible-core distribution were already '
>>> Standard Output
plugins/inventory/ovirt.py:85:0: traceback: UserWarning: One or more Python packages bundled by this ansible-core distribution were already loaded (packaging). This may result in undefined behavior.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
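The warning in the output above comes from ansible-core's vendoring shim. An editorial, simplified sketch of the mechanism (names approximate, modeled on `ansible/_vendor/__init__.py`; not the verbatim source):
```python
import pkgutil
import sys
import warnings

def ensure_vendored(vendor_dir):
    # packages bundled next to the _vendor __init__.py
    vendored = {info.name for info in pkgutil.iter_modules([vendor_dir])}
    # anything already imported would shadow the bundled copy
    already_loaded = vendored & set(sys.modules)
    if already_loaded:
        warnings.warn('One or more Python packages bundled by this ansible-core '
                      'distribution were already loaded (%s). This may result in '
                      'undefined behavior.' % ', '.join(sorted(already_loaded)))
    # put the bundled copies ahead of site-packages on the import path
    sys.path.insert(0, vendor_dir)
```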
|
https://github.com/ansible/ansible/issues/76473
|
https://github.com/ansible/ansible/pull/76503
|
bccfb83e00086fe7118b1f79201f5d6278cc6490
|
82f59d4843358768be98fbade9847c65970d48d5
| 2021-12-06T15:08:28Z |
python
| 2021-12-08T20:04:45Z |
changelogs/fragments/ansible-test-sanity-vendoring.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,473 |
Ansible-test import fails - one or more Python packages bundled by this ansible-core
|
### Summary
The ansible-test import sanity check fails in our CI with a warning that one or more Python packages bundled by this ansible-core distribution were already loaded.
### Issue Type
Bug Report
### Component Name
ansible-test
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.9/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.9.8 (main, Nov 8 2021, 00:00:00) [GCC 11.2.1 20211019 (Red Hat 11.2.1-6)]
jinja version = 2.11.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
container centos:stream9
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Expected Results
Pass the test
### Actual Results
```console
https://github.com/oVirt/ovirt-ansible-collection/runs/4391847035?check_suite_focus=true
WARNING: The directory '/github/home/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you should use sudo's -H flag.
Collecting cryptography<3.4
Downloading cryptography-3.3.2-cp36-abi3-manylinux2010_x86_64.whl (2.6 MB)
Collecting six>=1.4.1
Downloading six-1.16.0-py2.py3-none-any.whl (11 kB)
Collecting cffi>=1.12
Downloading cffi-1.15.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (444 kB)
Collecting pycparser
Downloading pycparser-2.21-py2.py3-none-any.whl (118 kB)
Installing collected packages: pycparser, six, cffi, cryptography
Successfully installed cffi-1.15.0 cryptography-3.3.2 pycparser-2.21 six-1.16.0
WARNING: The directory '/github/home/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you should use sudo's -H flag.
Collecting jinja2
Downloading Jinja2-3.0.3-py3-none-any.whl (133 kB)
Collecting PyYAML
Downloading PyYAML-6.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (661 kB)
Collecting packaging
Downloading packaging-21.3-py3-none-any.whl (40 kB)
Collecting resolvelib<0.6.0,>=0.5.3
Downloading resolvelib-0.5.4-py2.py3-none-any.whl (12 kB)
Collecting MarkupSafe>=2.0
Downloading MarkupSafe-2.0.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (30 kB)
Collecting pyparsing!=3.0.5,>=2.0.2
Downloading pyparsing-3.0.6-py3-none-any.whl (97 kB)
Installing collected packages: pyparsing, MarkupSafe, resolvelib, PyYAML, packaging, jinja2
Successfully installed MarkupSafe-2.0.1 PyYAML-6.0 jinja2-3.0.3 packaging-21.3 pyparsing-3.0.6 resolvelib-0.5.4
ERROR: Command "importer.py" returned exit status 10.
>>> Standard Error
/tmp/ansible-test-ibrisv3_/ansible/_vendor/__init__.py:42: UserWarning: One or more Python packages bundled by this ansible-core distribution were already loaded (packaging). This may result in undefined behavior.
warnings.warn('One or more Python packages bundled by this ansible-core distribution were already '
/tmp/ansible-test-ibrisv3_/ansible/_vendor/__init__.py:42: UserWarning: One or more Python packages bundled by this ansible-core distribution were already loaded (packaging). This may result in undefined behavior.
warnings.warn('One or more Python packages bundled by this ansible-core distribution were already '
/tmp/ansible-test-ibrisv3_/ansible/_vendor/__init__.py:42: UserWarning: One or more Python packages bundled by this ansible-core distribution were already loaded (packaging). This may result in undefined behavior.
warnings.warn('One or more Python packages bundled by this ansible-core distribution were already '
/tmp/ansible-test-ibrisv3_/ansible/_vendor/__init__.py:42: UserWarning: One or more Python packages bundled by this ansible-core distribution were already loaded (packaging). This may result in undefined behavior.
warnings.warn('One or more Python packages bundled by this ansible-core distribution were already '
/tmp/ansible-test-ibrisv3_/ansible/_vendor/__init__.py:42: UserWarning: One or more Python packages bundled by this ansible-core distribution were already loaded (packaging). This may result in undefined behavior.
warnings.warn('One or more Python packages bundled by this ansible-core distribution were already '
>>> Standard Output
plugins/inventory/ovirt.py:85:0: traceback: UserWarning: One or more Python packages bundled by this ansible-core distribution were already loaded (packaging). This may result in undefined behavior.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76473
|
https://github.com/ansible/ansible/pull/76503
|
bccfb83e00086fe7118b1f79201f5d6278cc6490
|
82f59d4843358768be98fbade9847c65970d48d5
| 2021-12-06T15:08:28Z |
python
| 2021-12-08T20:04:45Z |
test/integration/targets/ansible-test/aliases
|
shippable/posix/group1 # runs in the distro test containers
shippable/generic/group1 # runs in the default test container
context/controller
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,473 |
Ansible-test import fails - one or more Python packages bundled by this ansible-core
|
### Summary
The ansible-test import sanity check fails in our CI with a warning that one or more Python packages bundled by this ansible-core distribution were already loaded.
### Issue Type
Bug Report
### Component Name
ansible-test
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.9/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.9.8 (main, Nov 8 2021, 00:00:00) [GCC 11.2.1 20211019 (Red Hat 11.2.1-6)]
jinja version = 2.11.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
container centos:stream9
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Expected Results
Pass the test
### Actual Results
```console
https://github.com/oVirt/ovirt-ansible-collection/runs/4391847035?check_suite_focus=true
WARNING: The directory '/github/home/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you should use sudo's -H flag.
Collecting cryptography<3.4
Downloading cryptography-3.3.2-cp36-abi3-manylinux2010_x86_64.whl (2.6 MB)
Collecting six>=1.4.1
Downloading six-1.16.0-py2.py3-none-any.whl (11 kB)
Collecting cffi>=1.12
Downloading cffi-1.15.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (444 kB)
Collecting pycparser
Downloading pycparser-2.21-py2.py3-none-any.whl (118 kB)
Installing collected packages: pycparser, six, cffi, cryptography
Successfully installed cffi-1.15.0 cryptography-3.3.2 pycparser-2.21 six-1.16.0
WARNING: The directory '/github/home/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you should use sudo's -H flag.
Collecting jinja2
Downloading Jinja2-3.0.3-py3-none-any.whl (133 kB)
Collecting PyYAML
Downloading PyYAML-6.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (661 kB)
Collecting packaging
Downloading packaging-21.3-py3-none-any.whl (40 kB)
Collecting resolvelib<0.6.0,>=0.5.3
Downloading resolvelib-0.5.4-py2.py3-none-any.whl (12 kB)
Collecting MarkupSafe>=2.0
Downloading MarkupSafe-2.0.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (30 kB)
Collecting pyparsing!=3.0.5,>=2.0.2
Downloading pyparsing-3.0.6-py3-none-any.whl (97 kB)
Installing collected packages: pyparsing, MarkupSafe, resolvelib, PyYAML, packaging, jinja2
Successfully installed MarkupSafe-2.0.1 PyYAML-6.0 jinja2-3.0.3 packaging-21.3 pyparsing-3.0.6 resolvelib-0.5.4
ERROR: Command "importer.py" returned exit status 10.
>>> Standard Error
/tmp/ansible-test-ibrisv3_/ansible/_vendor/__init__.py:42: UserWarning: One or more Python packages bundled by this ansible-core distribution were already loaded (packaging). This may result in undefined behavior.
warnings.warn('One or more Python packages bundled by this ansible-core distribution were already '
/tmp/ansible-test-ibrisv3_/ansible/_vendor/__init__.py:42: UserWarning: One or more Python packages bundled by this ansible-core distribution were already loaded (packaging). This may result in undefined behavior.
warnings.warn('One or more Python packages bundled by this ansible-core distribution were already '
/tmp/ansible-test-ibrisv3_/ansible/_vendor/__init__.py:42: UserWarning: One or more Python packages bundled by this ansible-core distribution were already loaded (packaging). This may result in undefined behavior.
warnings.warn('One or more Python packages bundled by this ansible-core distribution were already '
/tmp/ansible-test-ibrisv3_/ansible/_vendor/__init__.py:42: UserWarning: One or more Python packages bundled by this ansible-core distribution were already loaded (packaging). This may result in undefined behavior.
warnings.warn('One or more Python packages bundled by this ansible-core distribution were already '
/tmp/ansible-test-ibrisv3_/ansible/_vendor/__init__.py:42: UserWarning: One or more Python packages bundled by this ansible-core distribution were already loaded (packaging). This may result in undefined behavior.
warnings.warn('One or more Python packages bundled by this ansible-core distribution were already '
>>> Standard Output
plugins/inventory/ovirt.py:85:0: traceback: UserWarning: One or more Python packages bundled by this ansible-core distribution were already loaded (packaging). This may result in undefined behavior.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76473
|
https://github.com/ansible/ansible/pull/76503
|
bccfb83e00086fe7118b1f79201f5d6278cc6490
|
82f59d4843358768be98fbade9847c65970d48d5
| 2021-12-06T15:08:28Z |
python
| 2021-12-08T20:04:45Z |
test/integration/targets/ansible-test/ansible_collections/ns/col/plugins/lookup/vendor1.py
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,473 |
Ansible-test import fails - one or more Python packages bundled by this ansible-core
|
### Summary
The ansible-test import sanity check fails in our CI with a warning that one or more Python packages bundled by this ansible-core distribution were already loaded.
### Issue Type
Bug Report
### Component Name
ansible-test
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.9/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.9.8 (main, Nov 8 2021, 00:00:00) [GCC 11.2.1 20211019 (Red Hat 11.2.1-6)]
jinja version = 2.11.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
container centos:stream9
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Expected Results
Pass the test
### Actual Results
```console
https://github.com/oVirt/ovirt-ansible-collection/runs/4391847035?check_suite_focus=true
WARNING: The directory '/github/home/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you should use sudo's -H flag.
Collecting cryptography<3.4
Downloading cryptography-3.3.2-cp36-abi3-manylinux2010_x86_64.whl (2.6 MB)
Collecting six>=1.4.1
Downloading six-1.16.0-py2.py3-none-any.whl (11 kB)
Collecting cffi>=1.12
Downloading cffi-1.15.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (444 kB)
Collecting pycparser
Downloading pycparser-2.21-py2.py3-none-any.whl (118 kB)
Installing collected packages: pycparser, six, cffi, cryptography
Successfully installed cffi-1.15.0 cryptography-3.3.2 pycparser-2.21 six-1.16.0
WARNING: The directory '/github/home/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you should use sudo's -H flag.
Collecting jinja2
Downloading Jinja2-3.0.3-py3-none-any.whl (133 kB)
Collecting PyYAML
Downloading PyYAML-6.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (661 kB)
Collecting packaging
Downloading packaging-21.3-py3-none-any.whl (40 kB)
Collecting resolvelib<0.6.0,>=0.5.3
Downloading resolvelib-0.5.4-py2.py3-none-any.whl (12 kB)
Collecting MarkupSafe>=2.0
Downloading MarkupSafe-2.0.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (30 kB)
Collecting pyparsing!=3.0.5,>=2.0.2
Downloading pyparsing-3.0.6-py3-none-any.whl (97 kB)
Installing collected packages: pyparsing, MarkupSafe, resolvelib, PyYAML, packaging, jinja2
Successfully installed MarkupSafe-2.0.1 PyYAML-6.0 jinja2-3.0.3 packaging-21.3 pyparsing-3.0.6 resolvelib-0.5.4
ERROR: Command "importer.py" returned exit status 10.
>>> Standard Error
/tmp/ansible-test-ibrisv3_/ansible/_vendor/__init__.py:42: UserWarning: One or more Python packages bundled by this ansible-core distribution were already loaded (packaging). This may result in undefined behavior.
warnings.warn('One or more Python packages bundled by this ansible-core distribution were already '
/tmp/ansible-test-ibrisv3_/ansible/_vendor/__init__.py:42: UserWarning: One or more Python packages bundled by this ansible-core distribution were already loaded (packaging). This may result in undefined behavior.
warnings.warn('One or more Python packages bundled by this ansible-core distribution were already '
/tmp/ansible-test-ibrisv3_/ansible/_vendor/__init__.py:42: UserWarning: One or more Python packages bundled by this ansible-core distribution were already loaded (packaging). This may result in undefined behavior.
warnings.warn('One or more Python packages bundled by this ansible-core distribution were already '
/tmp/ansible-test-ibrisv3_/ansible/_vendor/__init__.py:42: UserWarning: One or more Python packages bundled by this ansible-core distribution were already loaded (packaging). This may result in undefined behavior.
warnings.warn('One or more Python packages bundled by this ansible-core distribution were already '
/tmp/ansible-test-ibrisv3_/ansible/_vendor/__init__.py:42: UserWarning: One or more Python packages bundled by this ansible-core distribution were already loaded (packaging). This may result in undefined behavior.
warnings.warn('One or more Python packages bundled by this ansible-core distribution were already '
>>> Standard Output
plugins/inventory/ovirt.py:85:0: traceback: UserWarning: One or more Python packages bundled by this ansible-core distribution were already loaded (packaging). This may result in undefined behavior.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76473
|
https://github.com/ansible/ansible/pull/76503
|
bccfb83e00086fe7118b1f79201f5d6278cc6490
|
82f59d4843358768be98fbade9847c65970d48d5
| 2021-12-06T15:08:28Z |
python
| 2021-12-08T20:04:45Z |
test/integration/targets/ansible-test/ansible_collections/ns/col/plugins/lookup/vendor2.py
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,473 |
Ansible-test import fails - one or more Python packages bundled by this ansible-core
|
### Summary
The ansible-test import sanity check fails in our CI with a warning that one or more Python packages bundled by this ansible-core distribution were already loaded.
### Issue Type
Bug Report
### Component Name
ansible-test
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.9/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.9.8 (main, Nov 8 2021, 00:00:00) [GCC 11.2.1 20211019 (Red Hat 11.2.1-6)]
jinja version = 2.11.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
container centos:stream9
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Expected Results
Pass the test
### Actual Results
```console
https://github.com/oVirt/ovirt-ansible-collection/runs/4391847035?check_suite_focus=true
WARNING: The directory '/github/home/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you should use sudo's -H flag.
Collecting cryptography<3.4
Downloading cryptography-3.3.2-cp36-abi3-manylinux2010_x86_64.whl (2.6 MB)
Collecting six>=1.4.1
Downloading six-1.16.0-py2.py3-none-any.whl (11 kB)
Collecting cffi>=1.12
Downloading cffi-1.15.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (444 kB)
Collecting pycparser
Downloading pycparser-2.21-py2.py3-none-any.whl (118 kB)
Installing collected packages: pycparser, six, cffi, cryptography
Successfully installed cffi-1.15.0 cryptography-3.3.2 pycparser-2.21 six-1.16.0
WARNING: The directory '/github/home/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you should use sudo's -H flag.
Collecting jinja2
Downloading Jinja2-3.0.3-py3-none-any.whl (133 kB)
Collecting PyYAML
Downloading PyYAML-6.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (661 kB)
Collecting packaging
Downloading packaging-21.3-py3-none-any.whl (40 kB)
Collecting resolvelib<0.6.0,>=0.5.3
Downloading resolvelib-0.5.4-py2.py3-none-any.whl (12 kB)
Collecting MarkupSafe>=2.0
Downloading MarkupSafe-2.0.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (30 kB)
Collecting pyparsing!=3.0.5,>=2.0.2
Downloading pyparsing-3.0.6-py3-none-any.whl (97 kB)
Installing collected packages: pyparsing, MarkupSafe, resolvelib, PyYAML, packaging, jinja2
Successfully installed MarkupSafe-2.0.1 PyYAML-6.0 jinja2-3.0.3 packaging-21.3 pyparsing-3.0.6 resolvelib-0.5.4
ERROR: Command "importer.py" returned exit status 10.
>>> Standard Error
/tmp/ansible-test-ibrisv3_/ansible/_vendor/__init__.py:42: UserWarning: One or more Python packages bundled by this ansible-core distribution were already loaded (packaging). This may result in undefined behavior.
warnings.warn('One or more Python packages bundled by this ansible-core distribution were already '
/tmp/ansible-test-ibrisv3_/ansible/_vendor/__init__.py:42: UserWarning: One or more Python packages bundled by this ansible-core distribution were already loaded (packaging). This may result in undefined behavior.
warnings.warn('One or more Python packages bundled by this ansible-core distribution were already '
/tmp/ansible-test-ibrisv3_/ansible/_vendor/__init__.py:42: UserWarning: One or more Python packages bundled by this ansible-core distribution were already loaded (packaging). This may result in undefined behavior.
warnings.warn('One or more Python packages bundled by this ansible-core distribution were already '
/tmp/ansible-test-ibrisv3_/ansible/_vendor/__init__.py:42: UserWarning: One or more Python packages bundled by this ansible-core distribution were already loaded (packaging). This may result in undefined behavior.
warnings.warn('One or more Python packages bundled by this ansible-core distribution were already '
/tmp/ansible-test-ibrisv3_/ansible/_vendor/__init__.py:42: UserWarning: One or more Python packages bundled by this ansible-core distribution were already loaded (packaging). This may result in undefined behavior.
warnings.warn('One or more Python packages bundled by this ansible-core distribution were already '
>>> Standard Output
plugins/inventory/ovirt.py:85:0: traceback: UserWarning: One or more Python packages bundled by this ansible-core distribution were already loaded (packaging). This may result in undefined behavior.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76473
|
https://github.com/ansible/ansible/pull/76503
|
bccfb83e00086fe7118b1f79201f5d6278cc6490
|
82f59d4843358768be98fbade9847c65970d48d5
| 2021-12-06T15:08:28Z |
python
| 2021-12-08T20:04:45Z |
test/integration/targets/ansible-test/collection-tests/sanity-vendor.sh
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,473 |
Ansible-test import fails - one or more Python packages bundled by this ansible-core
|
### Summary
The ansible-test import sanity check fails in our CI with a warning that one or more Python packages bundled by this ansible-core distribution were already loaded.
### Issue Type
Bug Report
### Component Name
ansible-test
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.9/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.9.8 (main, Nov 8 2021, 00:00:00) [GCC 11.2.1 20211019 (Red Hat 11.2.1-6)]
jinja version = 2.11.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
container centos:stream9
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Expected Results
Pass the test
### Actual Results
```console
https://github.com/oVirt/ovirt-ansible-collection/runs/4391847035?check_suite_focus=true
WARNING: The directory '/github/home/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you should use sudo's -H flag.
Collecting cryptography<3.4
Downloading cryptography-3.3.2-cp36-abi3-manylinux2010_x86_64.whl (2.6 MB)
Collecting six>=1.4.1
Downloading six-1.16.0-py2.py3-none-any.whl (11 kB)
Collecting cffi>=1.12
Downloading cffi-1.15.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (444 kB)
Collecting pycparser
Downloading pycparser-2.21-py2.py3-none-any.whl (118 kB)
Installing collected packages: pycparser, six, cffi, cryptography
Successfully installed cffi-1.15.0 cryptography-3.3.2 pycparser-2.21 six-1.16.0
WARNING: The directory '/github/home/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you should use sudo's -H flag.
Collecting jinja2
Downloading Jinja2-3.0.3-py3-none-any.whl (133 kB)
Collecting PyYAML
Downloading PyYAML-6.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (661 kB)
Collecting packaging
Downloading packaging-21.3-py3-none-any.whl (40 kB)
Collecting resolvelib<0.6.0,>=0.5.3
Downloading resolvelib-0.5.4-py2.py3-none-any.whl (12 kB)
Collecting MarkupSafe>=2.0
Downloading MarkupSafe-2.0.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (30 kB)
Collecting pyparsing!=3.0.5,>=2.0.2
Downloading pyparsing-3.0.6-py3-none-any.whl (97 kB)
Installing collected packages: pyparsing, MarkupSafe, resolvelib, PyYAML, packaging, jinja2
Successfully installed MarkupSafe-2.0.1 PyYAML-6.0 jinja2-3.0.3 packaging-21.3 pyparsing-3.0.6 resolvelib-0.5.4
ERROR: Command "importer.py" returned exit status 10.
>>> Standard Error
/tmp/ansible-test-ibrisv3_/ansible/_vendor/__init__.py:42: UserWarning: One or more Python packages bundled by this ansible-core distribution were already loaded (packaging). This may result in undefined behavior.
warnings.warn('One or more Python packages bundled by this ansible-core distribution were already '
/tmp/ansible-test-ibrisv3_/ansible/_vendor/__init__.py:42: UserWarning: One or more Python packages bundled by this ansible-core distribution were already loaded (packaging). This may result in undefined behavior.
warnings.warn('One or more Python packages bundled by this ansible-core distribution were already '
/tmp/ansible-test-ibrisv3_/ansible/_vendor/__init__.py:42: UserWarning: One or more Python packages bundled by this ansible-core distribution were already loaded (packaging). This may result in undefined behavior.
warnings.warn('One or more Python packages bundled by this ansible-core distribution were already '
/tmp/ansible-test-ibrisv3_/ansible/_vendor/__init__.py:42: UserWarning: One or more Python packages bundled by this ansible-core distribution were already loaded (packaging). This may result in undefined behavior.
warnings.warn('One or more Python packages bundled by this ansible-core distribution were already '
/tmp/ansible-test-ibrisv3_/ansible/_vendor/__init__.py:42: UserWarning: One or more Python packages bundled by this ansible-core distribution were already loaded (packaging). This may result in undefined behavior.
warnings.warn('One or more Python packages bundled by this ansible-core distribution were already '
>>> Standard Output
plugins/inventory/ovirt.py:85:0: traceback: UserWarning: One or more Python packages bundled by this ansible-core distribution were already loaded (packaging). This may result in undefined behavior.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76473
|
https://github.com/ansible/ansible/pull/76503
|
bccfb83e00086fe7118b1f79201f5d6278cc6490
|
82f59d4843358768be98fbade9847c65970d48d5
| 2021-12-06T15:08:28Z |
python
| 2021-12-08T20:04:45Z |
test/lib/ansible_test/_util/target/sanity/import/importer.py
|
"""Import the given python module(s) and report error(s) encountered."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
def main():
"""
Main program function used to isolate globals from imported code.
Changes to globals in imported modules on Python 2.x will overwrite our own globals.
"""
import ansible
import contextlib
import datetime
import json
import os
import re
import runpy
import subprocess
import sys
import traceback
import types
import warnings
ansible_path = os.path.dirname(os.path.dirname(ansible.__file__))
temp_path = os.environ['SANITY_TEMP_PATH'] + os.path.sep
external_python = os.environ.get('SANITY_EXTERNAL_PYTHON')
yaml_to_json_path = os.environ.get('SANITY_YAML_TO_JSON')
collection_full_name = os.environ.get('SANITY_COLLECTION_FULL_NAME')
collection_root = os.environ.get('ANSIBLE_COLLECTIONS_PATH')
import_type = os.environ.get('SANITY_IMPORTER_TYPE')
try:
# noinspection PyCompatibility
from importlib import import_module
except ImportError:
def import_module(name):
__import__(name)
return sys.modules[name]
try:
# noinspection PyCompatibility
from StringIO import StringIO
except ImportError:
from io import StringIO
if collection_full_name:
# allow importing code from collections when testing a collection
from ansible.module_utils.common.text.converters import to_bytes, to_text, to_native, text_type
# noinspection PyProtectedMember
from ansible.utils.collection_loader._collection_finder import _AnsibleCollectionFinder
from ansible.utils.collection_loader import _collection_finder
yaml_to_dict_cache = {}
# unique ISO date marker matching the one present in yaml_to_json.py
iso_date_marker = 'isodate:f23983df-f3df-453c-9904-bcd08af468cc:'
iso_date_re = re.compile('^%s([0-9]{4})-([0-9]{2})-([0-9]{2})$' % iso_date_marker)
def parse_value(value):
"""Custom value parser for JSON deserialization that recognizes our internal ISO date format."""
if isinstance(value, text_type):
match = iso_date_re.search(value)
if match:
value = datetime.date(int(match.group(1)), int(match.group(2)), int(match.group(3)))
return value
def object_hook(data):
"""Object hook for custom ISO date deserialization from JSON."""
return dict((key, parse_value(value)) for key, value in data.items())
def yaml_to_dict(yaml, content_id):
"""
Return a Python dict version of the provided YAML.
Conversion is done in a subprocess since the current Python interpreter does not have access to PyYAML.
"""
if content_id in yaml_to_dict_cache:
return yaml_to_dict_cache[content_id]
try:
cmd = [external_python, yaml_to_json_path]
proc = subprocess.Popen([to_bytes(c) for c in cmd], # pylint: disable=consider-using-with
stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout_bytes, stderr_bytes = proc.communicate(to_bytes(yaml))
if proc.returncode != 0:
raise Exception('command %s failed with return code %d: %s' % ([to_native(c) for c in cmd], proc.returncode, to_native(stderr_bytes)))
data = yaml_to_dict_cache[content_id] = json.loads(to_text(stdout_bytes), object_hook=object_hook)
return data
except Exception as ex:
raise Exception('internal importer error - failed to parse yaml: %s' % to_native(ex))
_collection_finder._meta_yml_to_dict = yaml_to_dict # pylint: disable=protected-access
collection_loader = _AnsibleCollectionFinder(paths=[collection_root])
# noinspection PyProtectedMember
collection_loader._install() # pylint: disable=protected-access
else:
# do not support collection loading when not testing a collection
collection_loader = None
# remove all modules under the ansible package
list(map(sys.modules.pop, [m for m in sys.modules if m.partition('.')[0] == ansible.__name__]))
if import_type == 'module':
# pre-load an empty ansible package to prevent unwanted code in __init__.py from loading
# this more accurately reflects the environment that AnsiballZ runs modules under
# it also avoids issues with imports in the ansible package that are not allowed
ansible_module = types.ModuleType(ansible.__name__)
ansible_module.__file__ = ansible.__file__
ansible_module.__path__ = ansible.__path__
ansible_module.__package__ = ansible.__package__
sys.modules[ansible.__name__] = ansible_module
class ImporterAnsibleModuleException(Exception):
"""Exception thrown during initialization of ImporterAnsibleModule."""
class ImporterAnsibleModule:
"""Replacement for AnsibleModule to support import testing."""
def __init__(self, *args, **kwargs):
raise ImporterAnsibleModuleException()
class RestrictedModuleLoader:
"""Python module loader that restricts inappropriate imports."""
def __init__(self, path, name, restrict_to_module_paths):
self.path = path
self.name = name
self.loaded_modules = set()
self.restrict_to_module_paths = restrict_to_module_paths
def find_module(self, fullname, path=None):
"""Return self if the given fullname is restricted, otherwise return None.
:param fullname: str
:param path: str
:return: RestrictedModuleLoader | None
"""
if fullname in self.loaded_modules:
return None # ignore modules that are already being loaded
if is_name_in_namepace(fullname, ['ansible']):
if not self.restrict_to_module_paths:
return None # for non-modules, everything in the ansible namespace is allowed
if fullname in ('ansible.module_utils.basic',):
return self # intercept loading so we can modify the result
if is_name_in_namepace(fullname, ['ansible.module_utils', self.name]):
return None # module_utils and module under test are always allowed
if any(os.path.exists(candidate_path) for candidate_path in convert_ansible_name_to_absolute_paths(fullname)):
return self # restrict access to ansible files that exist
return None # ansible file does not exist, do not restrict access
if is_name_in_namepace(fullname, ['ansible_collections']):
if not collection_loader:
return self # restrict access to collections when we are not testing a collection
if not self.restrict_to_module_paths:
return None # for non-modules, everything in the ansible namespace is allowed
if is_name_in_namepace(fullname, ['ansible_collections...plugins.module_utils', self.name]):
return None # module_utils and module under test are always allowed
if collection_loader.find_module(fullname, path):
return self # restrict access to collection files that exist
return None # collection file does not exist, do not restrict access
# not a namespace we care about
return None
def load_module(self, fullname):
"""Raise an ImportError.
:type fullname: str
"""
if fullname == 'ansible.module_utils.basic':
module = self.__load_module(fullname)
# stop Ansible module execution during AnsibleModule instantiation
module.AnsibleModule = ImporterAnsibleModule
# no-op for _load_params since it may be called before instantiating AnsibleModule
module._load_params = lambda *args, **kwargs: {} # pylint: disable=protected-access
return module
raise ImportError('import of "%s" is not allowed in this context' % fullname)
def __load_module(self, fullname):
"""Load the requested module while avoiding infinite recursion.
:type fullname: str
:rtype: module
"""
self.loaded_modules.add(fullname)
return import_module(fullname)
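# NOTE (editorial, hedged): this loader implements only the legacy PEP 302
# find_module()/load_module() protocol; on Python 3.10+ the interpreter falls
# back to it with a warning, which is why capture_output() below filters those
# messages. A find_spec() shim delegating to the legacy hook might look like:
#   def find_spec(self, fullname, path=None, target=None):
#       import importlib.util
#       if self.find_module(fullname, path) is None:
#           return None
#       return importlib.util.spec_from_loader(fullname, self)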
def run(restrict_to_module_paths):
"""Main program function."""
base_dir = os.getcwd()
messages = set()
for path in sys.argv[1:] or sys.stdin.read().splitlines():
name = convert_relative_path_to_name(path)
test_python_module(path, name, base_dir, messages, restrict_to_module_paths)
if messages:
sys.exit(10)
def test_python_module(path, name, base_dir, messages, restrict_to_module_paths):
"""Test the given python module by importing it.
:type path: str
:type name: str
:type base_dir: str
:type messages: set[str]
:type restrict_to_module_paths: bool
"""
if name in sys.modules:
return # cannot be tested because it has already been loaded
is_ansible_module = (path.startswith('lib/ansible/modules/') or path.startswith('plugins/modules/')) and os.path.basename(path) != '__init__.py'
run_main = is_ansible_module
if path == 'lib/ansible/modules/async_wrapper.py':
# async_wrapper is a non-standard Ansible module (does not use AnsibleModule) so we cannot test the main function
run_main = False
capture_normal = Capture()
capture_main = Capture()
run_module_ok = False
try:
with monitor_sys_modules(path, messages):
with restrict_imports(path, name, messages, restrict_to_module_paths):
with capture_output(capture_normal):
import_module(name)
if run_main:
run_module_ok = is_ansible_module
with monitor_sys_modules(path, messages):
with restrict_imports(path, name, messages, restrict_to_module_paths):
with capture_output(capture_main):
runpy.run_module(name, run_name='__main__', alter_sys=True)
except ImporterAnsibleModuleException:
# module instantiated AnsibleModule without raising an exception
if not run_module_ok:
if is_ansible_module:
report_message(path, 0, 0, 'module-guard', "AnsibleModule instantiation not guarded by `if __name__ == '__main__'`", messages)
else:
report_message(path, 0, 0, 'non-module', "AnsibleModule instantiated by import of non-module", messages)
except BaseException as ex: # pylint: disable=locally-disabled, broad-except
# intentionally catch all exceptions, including calls to sys.exit
exc_type, _exc, exc_tb = sys.exc_info()
message = str(ex)
results = list(reversed(traceback.extract_tb(exc_tb)))
line = 0
offset = 0
full_path = os.path.join(base_dir, path)
base_path = base_dir + os.path.sep
source = None
# avoid line wraps in messages
message = re.sub(r'\n *', ': ', message)
for result in results:
if result[0] == full_path:
# save the line number for the file under test
line = result[1] or 0
if not source and result[0].startswith(base_path) and not result[0].startswith(temp_path):
# save the first path and line number in the traceback which is in our source tree
source = (os.path.relpath(result[0], base_path), result[1] or 0, 0)
if isinstance(ex, SyntaxError):
# SyntaxError has better information than the traceback
if ex.filename == full_path: # pylint: disable=locally-disabled, no-member
# syntax error was reported in the file under test
line = ex.lineno or 0 # pylint: disable=locally-disabled, no-member
offset = ex.offset or 0 # pylint: disable=locally-disabled, no-member
elif ex.filename.startswith(base_path) and not ex.filename.startswith(temp_path): # pylint: disable=locally-disabled, no-member
# syntax error was reported in our source tree
source = (os.path.relpath(ex.filename, base_path), ex.lineno or 0, ex.offset or 0) # pylint: disable=locally-disabled, no-member
# remove the filename and line number from the message
# either it was extracted above, or it's not really useful information
message = re.sub(r' \(.*?, line [0-9]+\)$', '', message)
if source and source[0] != path:
message += ' (at %s:%d:%d)' % (source[0], source[1], source[2])
report_message(path, line, offset, 'traceback', '%s: %s' % (exc_type.__name__, message), messages)
finally:
capture_report(path, capture_normal, messages)
capture_report(path, capture_main, messages)
def is_name_in_namepace(name, namespaces):
"""Returns True if the given name is one of the given namespaces, otherwise returns False."""
name_parts = name.split('.')
for namespace in namespaces:
namespace_parts = namespace.split('.')
length = min(len(name_parts), len(namespace_parts))
truncated_name = name_parts[0:length]
truncated_namespace = namespace_parts[0:length]
# empty parts in the namespace are treated as wildcards
# to simplify the comparison, use those empty parts to indicate the positions in the name to be empty as well
for idx, part in enumerate(truncated_namespace):
if not part:
truncated_name[idx] = part
# example: name=ansible, allowed_name=ansible.module_utils
# example: name=ansible.module_utils.system.ping, allowed_name=ansible.module_utils
if truncated_name == truncated_namespace:
return True
return False
def check_sys_modules(path, before, messages):
"""Check for unwanted changes to sys.modules.
:type path: str
:type before: dict[str, module]
:type messages: set[str]
"""
after = sys.modules
removed = set(before.keys()) - set(after.keys())
changed = set(key for key, value in before.items() if key in after and value != after[key])
# additions are checked by our custom PEP 302 loader, so we don't need to check them again here
for module in sorted(removed):
report_message(path, 0, 0, 'unload', 'unloading of "%s" in sys.modules is not supported' % module, messages)
for module in sorted(changed):
report_message(path, 0, 0, 'reload', 'reloading of "%s" in sys.modules is not supported' % module, messages)
def convert_ansible_name_to_absolute_paths(name):
"""Calculate the module path from the given name.
:type name: str
:rtype: list[str]
"""
return [
os.path.join(ansible_path, name.replace('.', os.path.sep)),
os.path.join(ansible_path, name.replace('.', os.path.sep)) + '.py',
]
def convert_relative_path_to_name(path):
"""Calculate the module name from the given path.
:type path: str
:rtype: str
"""
if path.endswith('/__init__.py'):
clean_path = os.path.dirname(path)
else:
clean_path = path
clean_path = os.path.splitext(clean_path)[0]
name = clean_path.replace(os.path.sep, '.')
if collection_loader:
# when testing collections the relative paths (and names) being tested are within the collection under test
name = 'ansible_collections.%s.%s' % (collection_full_name, name)
else:
# when testing ansible all files being imported reside under the lib directory
name = name[len('lib/'):]
return name
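# Editorial example: 'lib/ansible/modules/ping.py' maps to 'ansible.modules.ping'
# when testing ansible-core, while 'plugins/modules/foo.py' maps to
# 'ansible_collections.ns.col.plugins.modules.foo' when testing collection ns.col.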
class Capture:
"""Captured output and/or exception."""
def __init__(self):
self.stdout = StringIO()
self.stderr = StringIO()
def capture_report(path, capture, messages):
"""Report on captured output.
:type path: str
:type capture: Capture
:type messages: set[str]
"""
if capture.stdout.getvalue():
first = capture.stdout.getvalue().strip().splitlines()[0].strip()
report_message(path, 0, 0, 'stdout', first, messages)
if capture.stderr.getvalue():
first = capture.stderr.getvalue().strip().splitlines()[0].strip()
report_message(path, 0, 0, 'stderr', first, messages)
def report_message(path, line, column, code, message, messages):
"""Report message if not already reported.
:type path: str
:type line: int
:type column: int
:type code: str
:type message: str
:type messages: set[str]
"""
message = '%s:%d:%d: %s: %s' % (path, line, column, code, message)
if message not in messages:
messages.add(message)
print(message)
@contextlib.contextmanager
def restrict_imports(path, name, messages, restrict_to_module_paths):
"""Restrict available imports.
:type path: str
:type name: str
:type messages: set[str]
:type restrict_to_module_paths: bool
"""
restricted_loader = RestrictedModuleLoader(path, name, restrict_to_module_paths)
# noinspection PyTypeChecker
sys.meta_path.insert(0, restricted_loader)
sys.path_importer_cache.clear()
try:
yield
finally:
if import_type == 'plugin':
from ansible.utils.collection_loader._collection_finder import _AnsibleCollectionFinder
_AnsibleCollectionFinder._remove() # pylint: disable=protected-access
if sys.meta_path[0] != restricted_loader:
report_message(path, 0, 0, 'metapath', 'changes to sys.meta_path[0] are not permitted', messages)
while restricted_loader in sys.meta_path:
# noinspection PyTypeChecker
sys.meta_path.remove(restricted_loader)
sys.path_importer_cache.clear()
@contextlib.contextmanager
def monitor_sys_modules(path, messages):
"""Monitor sys.modules for unwanted changes, reverting any additions made to our own namespaces."""
snapshot = sys.modules.copy()
try:
yield
finally:
check_sys_modules(path, snapshot, messages)
for key in set(sys.modules.keys()) - set(snapshot.keys()):
if is_name_in_namepace(key, ('ansible', 'ansible_collections')):
del sys.modules[key] # only unload our own code since we know it's native Python
@contextlib.contextmanager
def capture_output(capture):
"""Capture sys.stdout and sys.stderr.
:type capture: Capture
"""
old_stdout = sys.stdout
old_stderr = sys.stderr
sys.stdout = capture.stdout
sys.stderr = capture.stderr
# clear all warnings registries to make all warnings available
for module in sys.modules.values():
try:
# noinspection PyUnresolvedReferences
module.__warningregistry__.clear()
except AttributeError:
pass
with warnings.catch_warnings():
warnings.simplefilter('error')
if sys.version_info[0] == 2:
warnings.filterwarnings(
"ignore",
"Python 2 is no longer supported by the Python core team. Support for it is now deprecated in cryptography,"
" and will be removed in a future release.")
warnings.filterwarnings(
"ignore",
"Python 2 is no longer supported by the Python core team. Support for it is now deprecated in cryptography,"
" and will be removed in the next release.")
if sys.version_info[:2] == (3, 5):
warnings.filterwarnings(
"ignore",
"Python 3.5 support will be dropped in the next release ofcryptography. Please upgrade your Python.")
warnings.filterwarnings(
"ignore",
"Python 3.5 support will be dropped in the next release of cryptography. Please upgrade your Python.")
if sys.version_info >= (3, 10):
# Temporary solution for Python 3.10 until find_spec is implemented in RestrictedModuleLoader.
# That implementation is dependent on find_spec being added to the controller's collection loader first.
# The warning text is: main.<locals>.RestrictedModuleLoader.find_spec() not found; falling back to find_module()
warnings.filterwarnings(
"ignore",
r"main\.<locals>\.RestrictedModuleLoader\.find_spec\(\) not found; falling back to find_module\(\)",
)
# Temporary solution for Python 3.10 until exec_module is implemented in RestrictedModuleLoader.
# That implementation is dependent on exec_module being added to the controller's collection loader first.
# The warning text is: main.<locals>.RestrictedModuleLoader.exec_module() not found; falling back to load_module()
warnings.filterwarnings(
"ignore",
r"main\.<locals>\.RestrictedModuleLoader\.exec_module\(\) not found; falling back to load_module\(\)",
)
# Temporary solution for Python 3.10 until find_spec is implemented in the controller's collection loader.
warnings.filterwarnings(
"ignore",
r"_Ansible.*Finder\.find_spec\(\) not found; falling back to find_module\(\)",
)
# Temporary solution for Python 3.10 until exec_module is implemented in the controller's collection loader.
warnings.filterwarnings(
"ignore",
r"_Ansible.*Loader\.exec_module\(\) not found; falling back to load_module\(\)",
)
# Temporary solution until there is a vendored copy of distutils.version in module_utils.
# Some of our dependencies such as packaging.tags also import distutils, which we have no control over
# The warning text is: The distutils package is deprecated and slated for removal in Python 3.12.
# Use setuptools or check PEP 632 for potential alternatives
warnings.filterwarnings(
"ignore",
r"The distutils package is deprecated and slated for removal in Python 3\.12\. .*",
)
try:
yield
finally:
sys.stdout = old_stdout
sys.stderr = old_stderr
run(import_type == 'module')
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,504 |
Ansible devel sanity tests failing with error: packaging Python module unavailable; unable to validate collection
|
### Summary
Ansible devel sanity tests are failing with the error: packaging Python module unavailable; unable to validate the collection
### Issue Type
Bug Report
### Component Name
ansible sanity tests
### Ansible Version
```console
$ ansible devel
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
mac os
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
Ansible devel sanity tests are failing with `stderr: [WARNING]: packaging Python module unavailable; unable to validate collection`. Please refer to the test failures below:
[ansible-test-sanity-docker-devel](https://dashboard.zuul.ansible.com/t/ansible/build/ffc5d7c618a740bf9c783e15c2d14806)
[network-ee-sanity-tests](https://dashboard.zuul.ansible.com/t/ansible/build/b4ab02b51d76487b97356f3c6ffe76fe)
### Expected Results
Ansible sanity tests run without failure for `devel`
### Actual Results
```console
Ansible sanity tests fail for `devel`
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
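An editorial sketch of the guard that produces this warning (hypothetical simplification; the real logic lives in `lib/ansible/plugins/loader.py`, whose optional `packaging` import is shown in the excerpt below):
```python
try:
    from packaging.specifiers import SpecifierSet
    from packaging.version import Version
except ImportError:
    SpecifierSet = None
    Version = None

def check_requires_ansible(requires_ansible, ansible_version):
    # hypothetical helper: if 'packaging' is unavailable, warn and skip
    # validation rather than fail the collection load outright
    if SpecifierSet is None:
        print('[WARNING]: packaging Python module unavailable; '
              'unable to validate collection')
        return True
    return SpecifierSet(requires_ansible).contains(Version(ansible_version).base_version)
```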
|
https://github.com/ansible/ansible/issues/76504
|
https://github.com/ansible/ansible/pull/76513
|
0df60f3b617e1cf6144a76512573c9e54ef4ce7c
|
e56e47faa7f22f1de4079ed5494daf49ba7efbf6
| 2021-12-08T12:47:38Z |
python
| 2021-12-08T23:53:38Z |
changelogs/fragments/ansible-test-import-collections.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,504 |
Ansible devel sanity tests failing with error: packaging Python module unavailable; unable to validate collection
|
### Summary
Ansible devel sanity tests failing with error: packaging Python module unavailable; unable to validate the collection
### Issue Type
Bug Report
### Component Name
ansible sanity tests
### Ansible Version
```console
$ ansible devel
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
mac os
### Steps to Reproduce
Ansible devel sanity tests are failing with `stderr: [WARNING]: packaging Python module unavailable; unable to validate collection`. Please refer to the test failures below:
[ansible-test-sanity-docker-devel](https://dashboard.zuul.ansible.com/t/ansible/build/ffc5d7c618a740bf9c783e15c2d14806)
[network-ee-sanity-tests](https://dashboard.zuul.ansible.com/t/ansible/build/b4ab02b51d76487b97356f3c6ffe76fe)
### Expected Results
Ansible sanity tests run without failure for `devel`
### Actual Results
```console
Ansible sanity tests fail for `devel`
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76504
|
https://github.com/ansible/ansible/pull/76513
|
0df60f3b617e1cf6144a76512573c9e54ef4ce7c
|
e56e47faa7f22f1de4079ed5494daf49ba7efbf6
| 2021-12-08T12:47:38Z |
python
| 2021-12-08T23:53:38Z |
lib/ansible/plugins/loader.py
|
# (c) 2012, Daniel Hokka Zakrisson <[email protected]>
# (c) 2012-2014, Michael DeHaan <[email protected]> and others
# (c) 2017, Toshio Kuratomi <[email protected]>
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import glob
import os
import os.path
import sys
import warnings
from collections import defaultdict, namedtuple
from ansible import constants as C
from ansible.errors import AnsibleError, AnsiblePluginCircularRedirect, AnsiblePluginRemovedError, AnsibleCollectionUnsupportedVersionError
from ansible.module_utils._text import to_bytes, to_text, to_native
from ansible.module_utils.compat.importlib import import_module
from ansible.module_utils.six import string_types
from ansible.parsing.utils.yaml import from_yaml
from ansible.parsing.yaml.loader import AnsibleLoader
from ansible.plugins import get_plugin_class, MODULE_CACHE, PATH_CACHE, PLUGIN_PATH_CACHE
from ansible.utils.collection_loader import AnsibleCollectionConfig, AnsibleCollectionRef
from ansible.utils.collection_loader._collection_finder import _AnsibleCollectionFinder, _get_collection_metadata
from ansible.utils.display import Display
from ansible.utils.plugin_docs import add_fragments
from ansible import __version__ as ansible_version
# TODO: take the packaging dep, or vendor SpecifierSet?
try:
from packaging.specifiers import SpecifierSet
from packaging.version import Version
except ImportError:
SpecifierSet = None
Version = None
try:
import importlib.util
imp = None
except ImportError:
import imp
display = Display()
get_with_context_result = namedtuple('get_with_context_result', ['object', 'plugin_load_context'])
def get_all_plugin_loaders():
return [(name, obj) for (name, obj) in globals().items() if isinstance(obj, PluginLoader)]
def add_all_plugin_dirs(path):
''' add any existing plugin dirs in the path provided '''
b_path = os.path.expanduser(to_bytes(path, errors='surrogate_or_strict'))
if os.path.isdir(b_path):
for name, obj in get_all_plugin_loaders():
if obj.subdir:
plugin_path = os.path.join(b_path, to_bytes(obj.subdir))
if os.path.isdir(plugin_path):
obj.add_directory(to_text(plugin_path))
else:
display.warning("Ignoring invalid path provided to plugin path: '%s' is not a directory" % to_text(path))
def get_shell_plugin(shell_type=None, executable=None):
if not shell_type:
# default to sh
shell_type = 'sh'
# mostly for backwards compat
if executable:
if isinstance(executable, string_types):
shell_filename = os.path.basename(executable)
try:
shell = shell_loader.get(shell_filename)
except Exception:
shell = None
if shell is None:
for shell in shell_loader.all():
if shell_filename in shell.COMPATIBLE_SHELLS:
shell_type = shell.SHELL_FAMILY
break
else:
raise AnsibleError("Either a shell type or a shell executable must be provided ")
shell = shell_loader.get(shell_type)
if not shell:
raise AnsibleError("Could not find the shell plugin required (%s)." % shell_type)
if executable:
setattr(shell, 'executable', executable)
return shell
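# Illustrative usage sketch (not part of the original file; the paths are hypothetical):
# both resolution modes of get_shell_plugin() above.
#
#   shell = get_shell_plugin(shell_type='csh')          # resolved directly by type
#   shell = get_shell_plugin(executable='/bin/bash')    # resolved via basename, falling
#                                                       # back to COMPATIBLE_SHELLS matching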
def add_dirs_to_loader(which_loader, paths):
loader = getattr(sys.modules[__name__], '%s_loader' % which_loader)
for path in paths:
loader.add_directory(path, with_subdir=True)
class PluginPathContext(object):
def __init__(self, path, internal):
self.path = path
self.internal = internal
class PluginLoadContext(object):
def __init__(self):
self.original_name = None
self.redirect_list = []
self.error_list = []
self.import_error_list = []
self.load_attempts = []
self.pending_redirect = None
self.exit_reason = None
self.plugin_resolved_path = None
self.plugin_resolved_name = None
self.plugin_resolved_collection = None # empty string for resolved plugins from user-supplied paths
self.deprecated = False
self.removal_date = None
self.removal_version = None
self.deprecation_warnings = []
self.resolved = False
self._resolved_fqcn = None
@property
def resolved_fqcn(self):
if not self.resolved:
return
if not self._resolved_fqcn:
final_plugin = self.redirect_list[-1]
if AnsibleCollectionRef.is_valid_fqcr(final_plugin) and final_plugin.startswith('ansible.legacy.'):
final_plugin = final_plugin.split('ansible.legacy.')[-1]
if self.plugin_resolved_collection and not AnsibleCollectionRef.is_valid_fqcr(final_plugin):
final_plugin = self.plugin_resolved_collection + '.' + final_plugin
self._resolved_fqcn = final_plugin
return self._resolved_fqcn
def record_deprecation(self, name, deprecation, collection_name):
if not deprecation:
return self
# The `or ''` instead of using `.get(..., '')` makes sure that even if the user explicitly
# sets `warning_text` to `~` (None) or `false`, we still get an empty string.
warning_text = deprecation.get('warning_text', None) or ''
removal_date = deprecation.get('removal_date', None)
removal_version = deprecation.get('removal_version', None)
# If both removal_date and removal_version are specified, use removal_date
if removal_date is not None:
removal_version = None
warning_text = '{0} has been deprecated.{1}{2}'.format(name, ' ' if warning_text else '', warning_text)
display.deprecated(warning_text, date=removal_date, version=removal_version, collection_name=collection_name)
self.deprecated = True
if removal_date:
self.removal_date = removal_date
if removal_version:
self.removal_version = removal_version
self.deprecation_warnings.append(warning_text)
return self
def resolve(self, resolved_name, resolved_path, resolved_collection, exit_reason):
self.pending_redirect = None
self.plugin_resolved_name = resolved_name
self.plugin_resolved_path = resolved_path
self.plugin_resolved_collection = resolved_collection
self.exit_reason = exit_reason
self.resolved = True
return self
def redirect(self, redirect_name):
self.pending_redirect = redirect_name
self.exit_reason = 'pending redirect resolution from {0} to {1}'.format(self.original_name, redirect_name)
self.resolved = False
return self
def nope(self, exit_reason):
self.pending_redirect = None
self.exit_reason = exit_reason
self.resolved = False
return self
class PluginLoader:
'''
PluginLoader loads plugins from the configured plugin directories.
It searches for plugins by iterating through the combined list of play basedirs, configured
paths, and the python path. The first match is used.
'''
def __init__(self, class_name, package, config, subdir, aliases=None, required_base_class=None):
aliases = {} if aliases is None else aliases
self.class_name = class_name
self.base_class = required_base_class
self.package = package
self.subdir = subdir
# FIXME: remove alias dict in favor of alias by symlink?
self.aliases = aliases
if config and not isinstance(config, list):
config = [config]
elif not config:
config = []
self.config = config
if class_name not in MODULE_CACHE:
MODULE_CACHE[class_name] = {}
if class_name not in PATH_CACHE:
PATH_CACHE[class_name] = None
if class_name not in PLUGIN_PATH_CACHE:
PLUGIN_PATH_CACHE[class_name] = defaultdict(dict)
# hold dirs added at runtime outside of config
self._extra_dirs = []
# caches
self._module_cache = MODULE_CACHE[class_name]
self._paths = PATH_CACHE[class_name]
self._plugin_path_cache = PLUGIN_PATH_CACHE[class_name]
self._searched_paths = set()
def __repr__(self):
return 'PluginLoader(type={0})'.format(AnsibleCollectionRef.legacy_plugin_dir_to_plugin_type(self.subdir))
def _clear_caches(self):
if C.OLD_PLUGIN_CACHE_CLEARING:
self._paths = None
else:
# reset global caches
MODULE_CACHE[self.class_name] = {}
PATH_CACHE[self.class_name] = None
PLUGIN_PATH_CACHE[self.class_name] = defaultdict(dict)
# reset internal caches
self._module_cache = MODULE_CACHE[self.class_name]
self._paths = PATH_CACHE[self.class_name]
self._plugin_path_cache = PLUGIN_PATH_CACHE[self.class_name]
self._searched_paths = set()
def __setstate__(self, data):
'''
Deserializer.
'''
class_name = data.get('class_name')
package = data.get('package')
config = data.get('config')
subdir = data.get('subdir')
aliases = data.get('aliases')
base_class = data.get('base_class')
PATH_CACHE[class_name] = data.get('PATH_CACHE')
PLUGIN_PATH_CACHE[class_name] = data.get('PLUGIN_PATH_CACHE')
self.__init__(class_name, package, config, subdir, aliases, base_class)
self._extra_dirs = data.get('_extra_dirs', [])
self._searched_paths = data.get('_searched_paths', set())
def __getstate__(self):
'''
Serializer.
'''
return dict(
class_name=self.class_name,
base_class=self.base_class,
package=self.package,
config=self.config,
subdir=self.subdir,
aliases=self.aliases,
_extra_dirs=self._extra_dirs,
_searched_paths=self._searched_paths,
PATH_CACHE=PATH_CACHE[self.class_name],
PLUGIN_PATH_CACHE=PLUGIN_PATH_CACHE[self.class_name],
)
def format_paths(self, paths):
''' Returns a string suitable for printing of the search path '''
# Uses a list to get the order right
ret = []
for i in paths:
if i not in ret:
ret.append(i)
return os.pathsep.join(ret)
def print_paths(self):
return self.format_paths(self._get_paths(subdirs=False))
def _all_directories(self, dir):
results = []
results.append(dir)
for root, subdirs, files in os.walk(dir, followlinks=True):
if '__init__.py' in files:
for x in subdirs:
results.append(os.path.join(root, x))
return results
def _get_package_paths(self, subdirs=True):
''' Gets the path of a Python package '''
if not self.package:
return []
if not hasattr(self, 'package_path'):
m = __import__(self.package)
parts = self.package.split('.')[1:]
for parent_mod in parts:
m = getattr(m, parent_mod)
self.package_path = to_text(os.path.dirname(m.__file__), errors='surrogate_or_strict')
if subdirs:
return self._all_directories(self.package_path)
return [self.package_path]
def _get_paths_with_context(self, subdirs=True):
''' Return a list of PluginPathContext objects to search for plugins in '''
# FIXME: This is potentially buggy if subdirs is sometimes True and sometimes False.
# In current usage, everything calls this with subdirs=True except for module_utils_loader and ansible-doc
# which always calls it with subdirs=False. So there currently isn't a problem with this caching.
if self._paths is not None:
return self._paths
ret = [PluginPathContext(p, False) for p in self._extra_dirs]
# look in any configured plugin paths, allow one level deep for subcategories
if self.config is not None:
for path in self.config:
path = os.path.abspath(os.path.expanduser(path))
if subdirs:
contents = glob.glob("%s/*" % path) + glob.glob("%s/*/*" % path)
for c in contents:
c = to_text(c, errors='surrogate_or_strict')
if os.path.isdir(c) and c not in ret:
ret.append(PluginPathContext(c, False))
path = to_text(path, errors='surrogate_or_strict')
if path not in ret:
ret.append(PluginPathContext(path, False))
# look for any plugins installed in the package subtree
# Note package path always gets added last so that every other type of
# path is searched before it.
ret.extend([PluginPathContext(p, True) for p in self._get_package_paths(subdirs=subdirs)])
# HACK: because powershell modules are in the same directory
# hierarchy as other modules we have to process them last. This is
# because powershell only works on windows but the other modules work
# anywhere (possibly including windows if the correct language
# interpreter is installed). The non-powershell modules can have any
# file extension, and thus powershell modules would otherwise be picked up by that search.
# The non-hack way to fix this is to have powershell modules be
# a different PluginLoader/ModuleLoader. But that requires changing
# other things too (known thing to change would be PATHS_CACHE,
# PLUGIN_PATHS_CACHE, and MODULE_CACHE). Since those three dicts key
# on the class_name and neither regular modules nor powershell modules
# would have class_names, they would not work as written.
#
# The expected sort order is paths in the order in 'ret' with paths ending in '/windows' at the end,
# also in the original order they were found in 'ret'.
# The .sort() method is guaranteed to be stable, so original order is preserved.
ret.sort(key=lambda p: p.path.endswith('/windows'))
# cache and return the result
self._paths = ret
return ret
def _get_paths(self, subdirs=True):
''' Return a list of paths to search for plugins in '''
paths_with_context = self._get_paths_with_context(subdirs=subdirs)
return [path_with_context.path for path_with_context in paths_with_context]
def _load_config_defs(self, name, module, path):
''' Reads plugin docs to find configuration setting definitions, to push to config manager for later use '''
# plugins w/o class name don't support config
if self.class_name:
type_name = get_plugin_class(self.class_name)
# if type_name != 'module_doc_fragment':
if type_name in C.CONFIGURABLE_PLUGINS:
dstring = AnsibleLoader(getattr(module, 'DOCUMENTATION', ''), file_name=path).get_single_data()
if dstring:
add_fragments(dstring, path, fragment_loader=fragment_loader, is_module=(type_name == 'module'))
if dstring and 'options' in dstring and isinstance(dstring['options'], dict):
C.config.initialize_plugin_configuration_definitions(type_name, name, dstring['options'])
display.debug('Loaded config def from plugin (%s/%s)' % (type_name, name))
def add_directory(self, directory, with_subdir=False):
''' Adds an additional directory to the search path '''
directory = os.path.realpath(directory)
if directory is not None:
if with_subdir:
directory = os.path.join(directory, self.subdir)
if directory not in self._extra_dirs:
# append the directory and invalidate the path cache
self._extra_dirs.append(directory)
self._clear_caches()
display.debug('Added %s to loader search path' % (directory))
def _query_collection_routing_meta(self, acr, plugin_type, extension=None):
collection_pkg = import_module(acr.n_python_collection_package_name)
if not collection_pkg:
return None
# FIXME: shouldn't need this...
try:
# force any type-specific metadata postprocessing to occur
import_module(acr.n_python_collection_package_name + '.plugins.{0}'.format(plugin_type))
except ImportError:
pass
# this will be created by the collection PEP302 loader
collection_meta = getattr(collection_pkg, '_collection_meta', None)
if not collection_meta:
return None
# TODO: add subdirs support
# check for extension-specific entry first (eg 'setup.ps1')
# TODO: str/bytes on extension/name munging
if acr.subdirs:
subdir_qualified_resource = '.'.join([acr.subdirs, acr.resource])
else:
subdir_qualified_resource = acr.resource
entry = collection_meta.get('plugin_routing', {}).get(plugin_type, {}).get(subdir_qualified_resource + extension, None)
if not entry:
# try for extension-agnostic entry
entry = collection_meta.get('plugin_routing', {}).get(plugin_type, {}).get(subdir_qualified_resource, None)
return entry
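# Illustrative sketch (an assumed example, not taken from this file): the meta/runtime.yml
# shape in a collection that produces the '_collection_meta' plugin_routing entries read above.
#
#   plugin_routing:
#     modules:
#       old_name:
#         redirect: ns.coll.new_name
#       removed_name:
#         tombstone:
#           removal_version: 3.0.0
#           warning_text: Use ns.coll.new_name instead.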
def _find_fq_plugin(self, fq_name, extension, plugin_load_context, ignore_deprecated=False):
"""Search builtin paths to find a plugin. No external paths are searched,
meaning plugins inside roles inside collections will be ignored.
"""
plugin_load_context.resolved = False
plugin_type = AnsibleCollectionRef.legacy_plugin_dir_to_plugin_type(self.subdir)
acr = AnsibleCollectionRef.from_fqcr(fq_name, plugin_type)
# check collection metadata to see if any special handling is required for this plugin
routing_metadata = self._query_collection_routing_meta(acr, plugin_type, extension=extension)
# TODO: factor this into a wrapper method
if routing_metadata:
deprecation = routing_metadata.get('deprecation', None)
# this will no-op if there's no deprecation metadata for this plugin
if not ignore_deprecated:
plugin_load_context.record_deprecation(fq_name, deprecation, acr.collection)
tombstone = routing_metadata.get('tombstone', None)
# FIXME: clean up text gen
if tombstone:
removal_date = tombstone.get('removal_date')
removal_version = tombstone.get('removal_version')
warning_text = tombstone.get('warning_text') or ''
warning_text = '{0} has been removed.{1}{2}'.format(fq_name, ' ' if warning_text else '', warning_text)
removed_msg = display.get_deprecation_message(msg=warning_text, version=removal_version,
date=removal_date, removed=True,
collection_name=acr.collection)
plugin_load_context.removal_date = removal_date
plugin_load_context.removal_version = removal_version
plugin_load_context.resolved = True
plugin_load_context.exit_reason = removed_msg
raise AnsiblePluginRemovedError(removed_msg, plugin_load_context=plugin_load_context)
redirect = routing_metadata.get('redirect', None)
if redirect:
# FIXME: remove once this is covered in debug or whatever
display.vv("redirecting (type: {0}) {1} to {2}".format(plugin_type, fq_name, redirect))
# The name doing the redirection is added at the beginning of _resolve_plugin_step,
# but if the unqualified name is used in conjunction with the collections keyword, only
# the unqualified name is in the redirect list.
if fq_name not in plugin_load_context.redirect_list:
plugin_load_context.redirect_list.append(fq_name)
return plugin_load_context.redirect(redirect)
# TODO: non-FQCN case, do we support `.` prefix for current collection, assume it with no dots, require it for subdirs in current, or ?
n_resource = to_native(acr.resource, errors='strict')
# we want this before the extension is added
full_name = '{0}.{1}'.format(acr.n_python_package_name, n_resource)
if extension:
n_resource += extension
pkg = sys.modules.get(acr.n_python_package_name)
if not pkg:
# FIXME: there must be cheaper/safer way to do this
try:
pkg = import_module(acr.n_python_package_name)
except ImportError:
return plugin_load_context.nope('Python package {0} not found'.format(acr.n_python_package_name))
pkg_path = os.path.dirname(pkg.__file__)
n_resource_path = os.path.join(pkg_path, n_resource)
# FIXME: and is file or file link or ...
if os.path.exists(n_resource_path):
return plugin_load_context.resolve(
full_name, to_text(n_resource_path), acr.collection, 'found exact match for {0} in {1}'.format(full_name, acr.collection))
if extension:
# the request was extension-specific, don't try for an extensionless match
return plugin_load_context.nope('no match for {0} in {1}'.format(to_text(n_resource), acr.collection))
# look for any matching extension in the package location (sans filter)
found_files = [f
for f in glob.iglob(os.path.join(pkg_path, n_resource) + '.*')
if os.path.isfile(f) and not f.endswith(C.MODULE_IGNORE_EXTS)]
if not found_files:
return plugin_load_context.nope('failed fuzzy extension match for {0} in {1}'.format(full_name, acr.collection))
if len(found_files) > 1:
# TODO: warn?
pass
return plugin_load_context.resolve(
full_name, to_text(found_files[0]), acr.collection, 'found fuzzy extension match for {0} in {1}'.format(full_name, acr.collection))
def find_plugin(self, name, mod_type='', ignore_deprecated=False, check_aliases=False, collection_list=None):
''' Find a plugin named name '''
result = self.find_plugin_with_context(name, mod_type, ignore_deprecated, check_aliases, collection_list)
if result.resolved and result.plugin_resolved_path:
return result.plugin_resolved_path
return None
def find_plugin_with_context(self, name, mod_type='', ignore_deprecated=False, check_aliases=False, collection_list=None):
''' Find a plugin named name, returning contextual info about the load, recursively resolving redirection '''
plugin_load_context = PluginLoadContext()
plugin_load_context.original_name = name
while True:
result = self._resolve_plugin_step(name, mod_type, ignore_deprecated, check_aliases, collection_list, plugin_load_context=plugin_load_context)
if result.pending_redirect:
if result.pending_redirect in result.redirect_list:
raise AnsiblePluginCircularRedirect('plugin redirect loop resolving {0} (path: {1})'.format(result.original_name, result.redirect_list))
name = result.pending_redirect
result.pending_redirect = None
plugin_load_context = result
else:
break
# TODO: smuggle these to the controller when we're in a worker, reduce noise from normal things like missing plugin packages during collection search
if plugin_load_context.error_list:
display.warning("errors were encountered during the plugin load for {0}:\n{1}".format(name, plugin_load_context.error_list))
# TODO: display/return import_error_list? Only useful for forensics...
# FIXME: store structured deprecation data in PluginLoadContext and use display.deprecate
# if plugin_load_context.deprecated and C.config.get_config_value('DEPRECATION_WARNINGS'):
# for dw in plugin_load_context.deprecation_warnings:
# # TODO: need to smuggle these to the controller if we're in a worker context
# display.warning('[DEPRECATION WARNING] ' + dw)
return plugin_load_context
# FIXME: name bikeshed
def _resolve_plugin_step(self, name, mod_type='', ignore_deprecated=False,
check_aliases=False, collection_list=None, plugin_load_context=PluginLoadContext()):
if not plugin_load_context:
raise ValueError('A PluginLoadContext is required')
plugin_load_context.redirect_list.append(name)
plugin_load_context.resolved = False
global _PLUGIN_FILTERS
if name in _PLUGIN_FILTERS[self.package]:
plugin_load_context.exit_reason = '{0} matched a defined plugin filter'.format(name)
return plugin_load_context
if mod_type:
suffix = mod_type
elif self.class_name:
# Ansible plugins that run in the controller process (most plugins)
suffix = '.py'
else:
# Only Ansible Modules. Ansible modules can be any executable so
# they can have any suffix
suffix = ''
# FIXME: need this right now so we can still load shipped PS module_utils- come up with a more robust solution
if (AnsibleCollectionRef.is_valid_fqcr(name) or collection_list) and not name.startswith('Ansible'):
if '.' in name or not collection_list:
candidates = [name]
else:
candidates = ['{0}.{1}'.format(c, name) for c in collection_list]
for candidate_name in candidates:
try:
plugin_load_context.load_attempts.append(candidate_name)
# HACK: refactor this properly
if candidate_name.startswith('ansible.legacy'):
# 'ansible.legacy' refers to the plugin finding behavior used before collections existed.
# They need to search 'library' and the various '*_plugins' directories in order to find the file.
plugin_load_context = self._find_plugin_legacy(name.replace('ansible.legacy.', '', 1),
plugin_load_context, ignore_deprecated, check_aliases, suffix)
else:
# 'ansible.builtin' should be handled here. This means only internal, or builtin, paths are searched.
plugin_load_context = self._find_fq_plugin(candidate_name, suffix, plugin_load_context=plugin_load_context,
ignore_deprecated=ignore_deprecated)
# Pending redirects are added to the redirect_list at the beginning of _resolve_plugin_step.
# Once redirects are resolved, ensure the final FQCN is added here.
# e.g. 'ns.coll.module' is included rather than only 'module' if a collections list is provided:
# - module:
# collections: ['ns.coll']
if plugin_load_context.resolved and candidate_name not in plugin_load_context.redirect_list:
plugin_load_context.redirect_list.append(candidate_name)
if plugin_load_context.resolved or plugin_load_context.pending_redirect: # if we got an answer or need to chase down a redirect, return
return plugin_load_context
except (AnsiblePluginRemovedError, AnsiblePluginCircularRedirect, AnsibleCollectionUnsupportedVersionError):
# these are generally fatal, let them fly
raise
except ImportError as ie:
plugin_load_context.import_error_list.append(ie)
except Exception as ex:
# FIXME: keep actual errors, not just assembled messages
plugin_load_context.error_list.append(to_native(ex))
if plugin_load_context.error_list:
display.debug(msg='plugin lookup for {0} failed; errors: {1}'.format(name, '; '.join(plugin_load_context.error_list)))
plugin_load_context.exit_reason = 'no matches found for {0}'.format(name)
return plugin_load_context
# if we got here, there's no collection list and it's not an FQ name, so do legacy lookup
return self._find_plugin_legacy(name, plugin_load_context, ignore_deprecated, check_aliases, suffix)
def _find_plugin_legacy(self, name, plugin_load_context, ignore_deprecated=False, check_aliases=False, suffix=None):
"""Search library and various *_plugins paths in order to find the file.
This was behavior prior to the existence of collections.
"""
plugin_load_context.resolved = False
if check_aliases:
name = self.aliases.get(name, name)
# The particular cache to look for modules within. This matches the
# requested mod_type
pull_cache = self._plugin_path_cache[suffix]
try:
path_with_context = pull_cache[name]
plugin_load_context.plugin_resolved_path = path_with_context.path
plugin_load_context.plugin_resolved_name = name
plugin_load_context.plugin_resolved_collection = 'ansible.builtin' if path_with_context.internal else ''
plugin_load_context.resolved = True
return plugin_load_context
except KeyError:
# Cache miss. Now let's find the plugin
pass
# TODO: Instead of using the self._paths cache (PATH_CACHE) and
# self._searched_paths we could use an iterator. Before enabling that
# we need to make sure we don't want to add additional directories
# (add_directory()) once we start using the iterator.
# We can use _get_paths_with_context() since add_directory() forces a cache refresh.
for path_with_context in (p for p in self._get_paths_with_context() if p.path not in self._searched_paths and os.path.isdir(to_bytes(p.path))):
path = path_with_context.path
b_path = to_bytes(path)
display.debug('trying %s' % path)
plugin_load_context.load_attempts.append(path)
internal = path_with_context.internal
try:
full_paths = (os.path.join(b_path, f) for f in os.listdir(b_path))
except OSError as e:
display.warning("Error accessing plugin paths: %s" % to_text(e))
for full_path in (to_native(f) for f in full_paths if os.path.isfile(f) and not f.endswith(b'__init__.py')):
full_name = os.path.basename(full_path)
# HACK: We have no way of executing python byte compiled files as ansible modules so specifically exclude them
# FIXME: I believe this is only correct for modules and module_utils.
# For all other plugins, .pyc and .pyo should be valid
if any(full_path.endswith(x) for x in C.MODULE_IGNORE_EXTS):
continue
splitname = os.path.splitext(full_name)
base_name = splitname[0]
try:
extension = splitname[1]
except IndexError:
extension = ''
# everything downstream expects unicode
full_path = to_text(full_path, errors='surrogate_or_strict')
# Module found, now enter it into the caches that match this file
if base_name not in self._plugin_path_cache['']:
self._plugin_path_cache[''][base_name] = PluginPathContext(full_path, internal)
if full_name not in self._plugin_path_cache['']:
self._plugin_path_cache[''][full_name] = PluginPathContext(full_path, internal)
if base_name not in self._plugin_path_cache[extension]:
self._plugin_path_cache[extension][base_name] = PluginPathContext(full_path, internal)
if full_name not in self._plugin_path_cache[extension]:
self._plugin_path_cache[extension][full_name] = PluginPathContext(full_path, internal)
self._searched_paths.add(path)
try:
path_with_context = pull_cache[name]
plugin_load_context.plugin_resolved_path = path_with_context.path
plugin_load_context.plugin_resolved_name = name
plugin_load_context.plugin_resolved_collection = 'ansible.builtin' if path_with_context.internal else ''
plugin_load_context.resolved = True
return plugin_load_context
except KeyError:
# Didn't find the plugin in this directory. Load modules from the next one
pass
# if nothing is found, try finding alias/deprecated
if not name.startswith('_'):
alias_name = '_' + name
# We've already cached all the paths at this point
if alias_name in pull_cache:
path_with_context = pull_cache[alias_name]
if not ignore_deprecated and not os.path.islink(path_with_context.path):
# FIXME: this is not always the case, some are just aliases
display.deprecated('%s is kept for backwards compatibility but usage is discouraged. ' # pylint: disable=ansible-deprecated-no-version
'The module documentation details page may explain more about this rationale.' % name.lstrip('_'))
plugin_load_context.plugin_resolved_path = path_with_context.path
plugin_load_context.plugin_resolved_name = alias_name
plugin_load_context.plugin_resolved_collection = 'ansible.builtin' if path_with_context.internal else ''
plugin_load_context.resolved = True
return plugin_load_context
# last ditch, if it's something that can be redirected, look for a builtin redirect before giving up
candidate_fqcr = 'ansible.builtin.{0}'.format(name)
if '.' not in name and AnsibleCollectionRef.is_valid_fqcr(candidate_fqcr):
return self._find_fq_plugin(fq_name=candidate_fqcr, extension=suffix, plugin_load_context=plugin_load_context,
ignore_deprecated=ignore_deprecated)
return plugin_load_context.nope('{0} is not eligible for last-chance resolution'.format(name))
def has_plugin(self, name, collection_list=None):
''' Checks if a plugin named name exists '''
try:
return self.find_plugin(name, collection_list=collection_list) is not None
except Exception as ex:
if isinstance(ex, AnsibleError):
raise
# log and continue, likely an innocuous type/package loading failure in collections import
display.debug('has_plugin error: {0}'.format(to_text(ex)))
__contains__ = has_plugin
def _load_module_source(self, name, path):
# avoid collisions across plugins
if name.startswith('ansible_collections.'):
full_name = name
else:
full_name = '.'.join([self.package, name])
if full_name in sys.modules:
# Avoids double loading, See https://github.com/ansible/ansible/issues/13110
return sys.modules[full_name]
with warnings.catch_warnings():
warnings.simplefilter("ignore", RuntimeWarning)
if imp is None:
spec = importlib.util.spec_from_file_location(to_native(full_name), to_native(path))
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
sys.modules[full_name] = module
else:
with open(to_bytes(path), 'rb') as module_file:
# to_native is used here because imp.load_source's path is for tracebacks and python's traceback formatting uses native strings
module = imp.load_source(to_native(full_name), to_native(path), module_file)
return module
def _update_object(self, obj, name, path, redirected_names=None):
# set extra info on the module, in case we want it later
setattr(obj, '_original_path', path)
setattr(obj, '_load_name', name)
setattr(obj, '_redirected_names', redirected_names or [])
def get(self, name, *args, **kwargs):
return self.get_with_context(name, *args, **kwargs).object
def get_with_context(self, name, *args, **kwargs):
''' instantiates a plugin of the given name using arguments '''
found_in_cache = True
class_only = kwargs.pop('class_only', False)
collection_list = kwargs.pop('collection_list', None)
if name in self.aliases:
name = self.aliases[name]
plugin_load_context = self.find_plugin_with_context(name, collection_list=collection_list)
if not plugin_load_context.resolved or not plugin_load_context.plugin_resolved_path:
# FIXME: this is probably an error (eg removed plugin)
return get_with_context_result(None, plugin_load_context)
name = plugin_load_context.plugin_resolved_name
path = plugin_load_context.plugin_resolved_path
redirected_names = plugin_load_context.redirect_list or []
if path not in self._module_cache:
self._module_cache[path] = self._load_module_source(name, path)
self._load_config_defs(name, self._module_cache[path], path)
found_in_cache = False
obj = getattr(self._module_cache[path], self.class_name)
if self.base_class:
# The import path is hardcoded and should be the right place,
# so we are not expecting an ImportError.
module = __import__(self.package, fromlist=[self.base_class])
# Check whether this obj has the required base class.
try:
plugin_class = getattr(module, self.base_class)
except AttributeError:
return get_with_context_result(None, plugin_load_context)
if not issubclass(obj, plugin_class):
return get_with_context_result(None, plugin_load_context)
# FIXME: update this to use the load context
self._display_plugin_load(self.class_name, name, self._searched_paths, path, found_in_cache=found_in_cache, class_only=class_only)
if not class_only:
try:
# A plugin may need to use its _load_name in __init__ (for example, to set
# or get options from config), so update the object before using the constructor
instance = object.__new__(obj)
self._update_object(instance, name, path, redirected_names)
obj.__init__(instance, *args, **kwargs)
obj = instance
except TypeError as e:
if "abstract" in e.args[0]:
# Abstract Base Class. The found plugin file does not
# fully implement the defined interface.
return get_with_context_result(None, plugin_load_context)
raise
self._update_object(obj, name, path, redirected_names)
return get_with_context_result(obj, plugin_load_context)
def _display_plugin_load(self, class_name, name, searched_paths, path, found_in_cache=None, class_only=None):
''' formats data to display debug info for plugin loading, also avoids processing unless really needed '''
if C.DEFAULT_DEBUG:
msg = 'Loading %s \'%s\' from %s' % (class_name, os.path.basename(name), path)
if len(searched_paths) > 1:
msg = '%s (searched paths: %s)' % (msg, self.format_paths(searched_paths))
if found_in_cache or class_only:
msg = '%s (found_in_cache=%s, class_only=%s)' % (msg, found_in_cache, class_only)
display.debug(msg)
def all(self, *args, **kwargs):
'''
Iterate through all plugins of this type
A plugin loader is initialized with a specific type. This function is an iterator returning
all of the plugins of that type to the caller.
:kwarg path_only: If this is set to True, then we return the paths to where the plugins reside
instead of an instance of the plugin. This conflicts with class_only and both should
not be set.
:kwarg class_only: If this is set to True then we return the python class which implements
a plugin rather than an instance of the plugin. This conflicts with path_only and both
should not be set.
:kwarg _dedupe: By default, we only return one plugin per plugin name. Deduplication happens
in the same way as the :meth:`get` and :meth:`find_plugin` methods resolve which plugin
should take precedence. If this is set to False, then we return all of the plugins
found, including those with duplicate names. In the case of duplicates, the order in
which they are returned is the one that would take precedence first, followed by the
others in decreasing precedence order. This should only be used by subclasses which
want to manage their own deduplication of the plugins.
:*args: Any extra arguments are passed to each plugin when it is instantiated.
:**kwargs: Any extra keyword arguments are passed to each plugin when it is instantiated.
'''
# TODO: Change the signature of this method to:
# def all(return_type='instance', args=None, kwargs=None):
# if args is None: args = []
# if kwargs is None: kwargs = {}
# return_type can be instance, class, or path.
# These changes will mean that plugin parameters won't conflict with our params and
# will also make it impossible to request both a path and a class at the same time.
#
# Move _dedupe to be a class attribute, CUSTOM_DEDUPE, with subclasses for filters and
# tests setting it to True
global _PLUGIN_FILTERS
dedupe = kwargs.pop('_dedupe', True)
path_only = kwargs.pop('path_only', False)
class_only = kwargs.pop('class_only', False)
# Having both path_only and class_only is a coding bug
if path_only and class_only:
raise AnsibleError('Do not set both path_only and class_only when calling PluginLoader.all()')
all_matches = []
found_in_cache = True
for i in self._get_paths():
all_matches.extend(glob.glob(to_native(os.path.join(i, "*.py"))))
loaded_modules = set()
for path in sorted(all_matches, key=os.path.basename):
name = os.path.splitext(path)[0]
basename = os.path.basename(name)
if basename == '__init__' or basename in _PLUGIN_FILTERS[self.package]:
# either empty or ignored by the module blocklist
continue
if basename == 'base' and self.package == 'ansible.plugins.cache':
# cache has legacy 'base.py' file, which is wrapper for __init__.py
continue
if dedupe and basename in loaded_modules:
continue
loaded_modules.add(basename)
if path_only:
yield path
continue
if path not in self._module_cache:
try:
if self.subdir in ('filter_plugins', 'test_plugins'):
# filter and test plugin files can contain multiple plugins
# they must have a unique python module name to prevent them from shadowing each other
full_name = '{0}_{1}'.format(abs(hash(path)), basename)
else:
full_name = basename
module = self._load_module_source(full_name, path)
self._load_config_defs(basename, module, path)
except Exception as e:
display.warning("Skipping plugin (%s) as it seems to be invalid: %s" % (path, to_text(e)))
continue
self._module_cache[path] = module
found_in_cache = False
try:
obj = getattr(self._module_cache[path], self.class_name)
except AttributeError as e:
display.warning("Skipping plugin (%s) as it seems to be invalid: %s" % (path, to_text(e)))
continue
if self.base_class:
# The import path is hardcoded and should be the right place,
# so we are not expecting an ImportError.
module = __import__(self.package, fromlist=[self.base_class])
# Check whether this obj has the required base class.
try:
plugin_class = getattr(module, self.base_class)
except AttributeError:
continue
if not issubclass(obj, plugin_class):
continue
self._display_plugin_load(self.class_name, basename, self._searched_paths, path, found_in_cache=found_in_cache, class_only=class_only)
if not class_only:
try:
obj = obj(*args, **kwargs)
except TypeError as e:
display.warning("Skipping plugin (%s) as it seems to be incomplete: %s" % (path, to_text(e)))
self._update_object(obj, basename, path)
yield obj
class Jinja2Loader(PluginLoader):
"""
PluginLoader optimized for Jinja2 plugins
The filter and test plugins are Jinja2 plugins encapsulated inside of our plugin format.
The way the calling code is setup, we need to do a few things differently in the all() method
We can't use the base class version because of file == plugin assumptions and dedupe logic
"""
def find_plugin(self, name, collection_list=None):
if '.' in name: # NOTE: this is wrong way, use: AnsibleCollectionRef.is_valid_fqcr(name) or collection_list
return super(Jinja2Loader, self).find_plugin(name, collection_list=collection_list)
# Nothing is currently using this method
raise AnsibleError('No code should call "find_plugin" for Jinja2Loaders (Not implemented)')
def get(self, name, *args, **kwargs):
if '.' in name: # NOTE: this is wrong way to detect collection, see note above for example
return super(Jinja2Loader, self).get(name, *args, **kwargs)
# Nothing is currently using this method
raise AnsibleError('No code should call "get" for Jinja2Loaders (Not implemented)')
def all(self, *args, **kwargs):
"""
Differences with :meth:`PluginLoader.all`:
* Unlike other plugin types, file != plugin, a file can contain multiple plugins (of same type).
This is why we do not deduplicate ansible file names at this point, we mostly care about
the names of the actual jinja2 plugins which are inside of our files.
* We reverse the order of the list of files compared to other PluginLoaders. This is
because of how calling code chooses to sync the plugins from the list. It adds all the
Jinja2 plugins from one of our Ansible files into a dict. Then it adds the Jinja2
plugins from the next Ansible file, overwriting any Jinja2 plugins that had the same
name. This is an encapsulation violation (the PluginLoader should not know about what
calling code does with the data) but we're pushing the common code here. We'll fix
this in the future by moving more of the common code into this PluginLoader.
* We return a list. We could return an iterator instead, but that would be extra work for no
gain; the API receiving this does not care, it just needs an iterable.
* This method will NOT fetch collection plugins, only those that would be expected under 'ansible.legacy'.
"""
# We don't deduplicate ansible file names.
# Instead, calling code deduplicates jinja2 plugin names when loading each file.
kwargs['_dedupe'] = False
# TODO: move this to initialization and extract/dedupe plugin names in the loader, offloading it from the
# caller. It would have to cache/refresh on add_directory to reevaluate the plugin list and dedupe.
# Another option is to always prepend 'ansible.legacy' and force the collection path to
# load/find plugins; we just need to check the compatibility of that approach.
# This would also enable get/find_plugin for these type of plugins.
# We have to instantiate a list of all files so that we can reverse the list.
# We reverse it so that calling code will deduplicate this correctly.
files = list(super(Jinja2Loader, self).all(*args, **kwargs))
files.reverse()
return files
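# Illustrative sketch (not part of loader.py): the dict-merge semantics in calling code
# that motivate reversing the file list above. With dict.update() the last write wins,
# so the highest-precedence file must be merged last.
#
#   merged = {}
#   for plugins_from_one_file in [{'to_json': 'low'}, {'to_json': 'high'}]:
#       merged.update(plugins_from_one_file)
#   assert merged['to_json'] == 'high'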
def _load_plugin_filter():
filters = defaultdict(frozenset)
user_set = False
if C.PLUGIN_FILTERS_CFG is None:
filter_cfg = '/etc/ansible/plugin_filters.yml'
else:
filter_cfg = C.PLUGIN_FILTERS_CFG
user_set = True
if os.path.exists(filter_cfg):
with open(filter_cfg, 'rb') as f:
try:
filter_data = from_yaml(f.read())
except Exception as e:
display.warning(u'The plugin filter file, {0} was not parsable.'
u' Skipping: {1}'.format(filter_cfg, to_text(e)))
return filters
try:
version = filter_data['filter_version']
except KeyError:
display.warning(u'The plugin filter file, {0} was invalid.'
u' Skipping.'.format(filter_cfg))
return filters
# Try to convert for people specifying version as a float instead of string
version = to_text(version)
version = version.strip()
if version == u'1.0':
# Modules and action plugins share the same blacklist since the difference between the
# two isn't visible to the users
try:
filters['ansible.modules'] = frozenset(filter_data['module_blacklist'])
except TypeError:
display.warning(u'Unable to parse the plugin filter file {0} as'
u' module_blacklist is not a list.'
u' Skipping.'.format(filter_cfg))
return filters
filters['ansible.plugins.action'] = filters['ansible.modules']
else:
display.warning(u'The plugin filter file, {0} was a version not recognized by this'
u' version of Ansible. Skipping.'.format(filter_cfg))
else:
if user_set:
display.warning(u'The plugin filter file, {0} does not exist.'
u' Skipping.'.format(filter_cfg))
# Special-case the stat module, as Ansible can run very few things if stat is blacklisted.
if 'stat' in filters['ansible.modules']:
raise AnsibleError('The stat module was specified in the module blacklist file, {0}, but'
' Ansible will not function without the stat module. Please remove stat'
' from the blacklist.'.format(to_native(filter_cfg)))
return filters
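# Illustrative sketch (an assumed example): the filter file shape accepted above for
# version '1.0', typically located at /etc/ansible/plugin_filters.yml.
#
#   filter_version: '1.0'
#   module_blacklist:
#     - somemodule
#     - anothermodule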
# since we don't want the actual collection loader understanding metadata, we'll do it in an event handler
def _on_collection_load_handler(collection_name, collection_path):
display.vvvv(to_text('Loading collection {0} from {1}'.format(collection_name, collection_path)))
collection_meta = _get_collection_metadata(collection_name)
try:
if not _does_collection_support_ansible_version(collection_meta.get('requires_ansible', ''), ansible_version):
mismatch_behavior = C.config.get_config_value('COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH')
message = 'Collection {0} does not support Ansible version {1}'.format(collection_name, ansible_version)
if mismatch_behavior == 'warning':
display.warning(message)
elif mismatch_behavior == 'error':
raise AnsibleCollectionUnsupportedVersionError(message)
except AnsibleError:
raise
except Exception as ex:
display.warning('Error parsing collection metadata requires_ansible value from collection {0}: {1}'.format(collection_name, ex))
def _does_collection_support_ansible_version(requirement_string, ansible_version):
if not requirement_string:
return True
if not SpecifierSet:
display.warning('packaging Python module unavailable; unable to validate collection Ansible version requirements')
return True
ss = SpecifierSet(requirement_string)
# ignore prerelease/postrelease/beta/dev flags for simplicity
base_ansible_version = Version(ansible_version).base_version
return ss.contains(base_ansible_version)
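# Illustrative sketch (not part of loader.py; the versions are hypothetical): how the
# check above evaluates a collection's requires_ansible string when 'packaging' is present.
#
#   from packaging.specifiers import SpecifierSet
#   from packaging.version import Version
#   spec = SpecifierSet('>=2.9.10,<2.13')        # a collection's requires_ansible value
#   base = Version('2.12.0b1').base_version      # prerelease flag dropped -> '2.12.0'
#   assert spec.contains(base)                   # 2.12.0 satisfies the range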
def _configure_collection_loader():
if AnsibleCollectionConfig.collection_finder:
display.warning('AnsibleCollectionFinder has already been configured')
return
finder = _AnsibleCollectionFinder(C.config.get_config_value('COLLECTIONS_PATHS'), C.config.get_config_value('COLLECTIONS_SCAN_SYS_PATH'))
finder._install()
# this should succeed now
AnsibleCollectionConfig.on_collection_load += _on_collection_load_handler
# TODO: All of the following is initialization code It should be moved inside of an initialization
# function which is called at some point early in the ansible and ansible-playbook CLI startup.
_PLUGIN_FILTERS = _load_plugin_filter()
_configure_collection_loader()
# doc fragments first
fragment_loader = PluginLoader(
'ModuleDocFragment',
'ansible.plugins.doc_fragments',
C.DOC_FRAGMENT_PLUGIN_PATH,
'doc_fragments',
)
action_loader = PluginLoader(
'ActionModule',
'ansible.plugins.action',
C.DEFAULT_ACTION_PLUGIN_PATH,
'action_plugins',
required_base_class='ActionBase',
)
cache_loader = PluginLoader(
'CacheModule',
'ansible.plugins.cache',
C.DEFAULT_CACHE_PLUGIN_PATH,
'cache_plugins',
)
callback_loader = PluginLoader(
'CallbackModule',
'ansible.plugins.callback',
C.DEFAULT_CALLBACK_PLUGIN_PATH,
'callback_plugins',
)
connection_loader = PluginLoader(
'Connection',
'ansible.plugins.connection',
C.DEFAULT_CONNECTION_PLUGIN_PATH,
'connection_plugins',
aliases={'paramiko': 'paramiko_ssh'},
required_base_class='ConnectionBase',
)
shell_loader = PluginLoader(
'ShellModule',
'ansible.plugins.shell',
'shell_plugins',
'shell_plugins',
)
module_loader = PluginLoader(
'',
'ansible.modules',
C.DEFAULT_MODULE_PATH,
'library',
)
module_utils_loader = PluginLoader(
'',
'ansible.module_utils',
C.DEFAULT_MODULE_UTILS_PATH,
'module_utils',
)
# NB: dedicated loader is currently necessary because PS module_utils expects "with subdir" lookup where
# regular module_utils doesn't. This can be revisited once we have more granular loaders.
ps_module_utils_loader = PluginLoader(
'',
'ansible.module_utils',
C.DEFAULT_MODULE_UTILS_PATH,
'module_utils',
)
lookup_loader = PluginLoader(
'LookupModule',
'ansible.plugins.lookup',
C.DEFAULT_LOOKUP_PLUGIN_PATH,
'lookup_plugins',
required_base_class='LookupBase',
)
filter_loader = Jinja2Loader(
'FilterModule',
'ansible.plugins.filter',
C.DEFAULT_FILTER_PLUGIN_PATH,
'filter_plugins',
)
test_loader = Jinja2Loader(
'TestModule',
'ansible.plugins.test',
C.DEFAULT_TEST_PLUGIN_PATH,
'test_plugins'
)
strategy_loader = PluginLoader(
'StrategyModule',
'ansible.plugins.strategy',
C.DEFAULT_STRATEGY_PLUGIN_PATH,
'strategy_plugins',
required_base_class='StrategyBase',
)
terminal_loader = PluginLoader(
'TerminalModule',
'ansible.plugins.terminal',
C.DEFAULT_TERMINAL_PLUGIN_PATH,
'terminal_plugins',
required_base_class='TerminalBase'
)
vars_loader = PluginLoader(
'VarsModule',
'ansible.plugins.vars',
C.DEFAULT_VARS_PLUGIN_PATH,
'vars_plugins',
)
cliconf_loader = PluginLoader(
'Cliconf',
'ansible.plugins.cliconf',
C.DEFAULT_CLICONF_PLUGIN_PATH,
'cliconf_plugins',
required_base_class='CliconfBase'
)
netconf_loader = PluginLoader(
'Netconf',
'ansible.plugins.netconf',
C.DEFAULT_NETCONF_PLUGIN_PATH,
'netconf_plugins',
required_base_class='NetconfBase'
)
inventory_loader = PluginLoader(
'InventoryModule',
'ansible.plugins.inventory',
C.DEFAULT_INVENTORY_PLUGIN_PATH,
'inventory_plugins'
)
httpapi_loader = PluginLoader(
'HttpApi',
'ansible.plugins.httpapi',
C.DEFAULT_HTTPAPI_PLUGIN_PATH,
'httpapi_plugins',
required_base_class='HttpApiBase',
)
become_loader = PluginLoader(
'BecomeModule',
'ansible.plugins.become',
C.BECOME_PLUGIN_PATH,
'become_plugins'
)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,504 |
Ansible devel sanity tests failing with error: packaging Python module unavailable; unable to validate collection
|
### Summary
Ansible devel sanity tests failing with error: packaging Python module unavailable; unable to validate the collection
### Issue Type
Bug Report
### Component Name
ansible sanity tests
### Ansible Version
```console
$ ansible devel
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
mac os
### Steps to Reproduce
Ansible devel sanity tests are failing with `stderr: [WARNING]: packaging Python module unavailable; unable to validate collection`. Please refer to the test failures below:
[ansible-test-sanity-docker-devel](https://dashboard.zuul.ansible.com/t/ansible/build/ffc5d7c618a740bf9c783e15c2d14806)
[network-ee-sanity-tests](https://dashboard.zuul.ansible.com/t/ansible/build/b4ab02b51d76487b97356f3c6ffe76fe)
### Expected Results
Ansible sanity tests run without failure for `devel`
### Actual Results
```console
Ansible sanity tests fail for `devel`
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76504
|
https://github.com/ansible/ansible/pull/76513
|
0df60f3b617e1cf6144a76512573c9e54ef4ce7c
|
e56e47faa7f22f1de4079ed5494daf49ba7efbf6
| 2021-12-08T12:47:38Z |
python
| 2021-12-08T23:53:38Z |
test/integration/targets/ansible-test/ansible_collections/ns/col/plugins/lookup/vendor1.py
|
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
name: vendor1
short_description: lookup
description: Lookup.
author:
- Ansible Core Team
'''
EXAMPLES = '''#'''
RETURN = '''#'''
from ansible.plugins.lookup import LookupBase
try:
import demo
except ImportError:
pass
else:
raise Exception('demo import found when it should not be')
class LookupModule(LookupBase):
def run(self, terms, variables, **kwargs):
self.set_options(var_options=variables, direct=kwargs)
return terms
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,504 |
Ansible devel sanity tests failing with error: packaging Python module unavailable; unable to validate collection
|
### Summary
Ansible devel sanity tests failing with error: packaging Python module unavailable; unable to validate the collection
### Issue Type
Bug Report
### Component Name
ansible sanity tests
### Ansible Version
```console
$ ansible devel
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
mac os
### Steps to Reproduce
Ansible devel sanity tests are failing with `stderr: [WARNING]: packaging Python module unavailable; unable to validate collection`. Please refer to the test failures below:
[ansible-test-sanity-docker-devel](https://dashboard.zuul.ansible.com/t/ansible/build/ffc5d7c618a740bf9c783e15c2d14806)
[network-ee-sanity-tests](https://dashboard.zuul.ansible.com/t/ansible/build/b4ab02b51d76487b97356f3c6ffe76fe)
### Expected Results
Ansible sanity tests run without failure for `devel`
### Actual Results
```console
Ansible sanity tests fail for `devel`
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76504
|
https://github.com/ansible/ansible/pull/76513
|
0df60f3b617e1cf6144a76512573c9e54ef4ce7c
|
e56e47faa7f22f1de4079ed5494daf49ba7efbf6
| 2021-12-08T12:47:38Z |
python
| 2021-12-08T23:53:38Z |
test/integration/targets/ansible-test/ansible_collections/ns/col/plugins/lookup/vendor2.py
|
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
name: vendor2
short_description: lookup
description: Lookup.
author:
- Ansible Core Team
'''
EXAMPLES = '''#'''
RETURN = '''#'''
from ansible.plugins.lookup import LookupBase
try:
import demo
except ImportError:
pass
else:
raise Exception('demo import found when it should not be')
class LookupModule(LookupBase):
def run(self, terms, variables, **kwargs):
self.set_options(var_options=variables, direct=kwargs)
return terms
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,504 |
Ansible devel sanity tests failing with error: packaging Python module unavailable; unable to validate collection
|
### Summary
Ansible devel sanity tests failing with error: packaging Python module unavailable; unable to validate the collection
### Issue Type
Bug Report
### Component Name
ansible sanity tests
### Ansible Version
```console
$ ansible devel
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
mac os
### Steps to Reproduce
Ansible devel sanity tests are failing with `stderr: [WARNING]: packaging Python module unavailable; unable to validate collection`. Please refer to the test failures below:
[ansible-test-sanity-docker-devel](https://dashboard.zuul.ansible.com/t/ansible/build/ffc5d7c618a740bf9c783e15c2d14806)
[network-ee-sanity-tests](https://dashboard.zuul.ansible.com/t/ansible/build/b4ab02b51d76487b97356f3c6ffe76fe)
### Expected Results
Ansible sanity tests run without failure for `devel`
### Actual Results
```console
Ansible sanity tests fail for `devel`
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76504
|
https://github.com/ansible/ansible/pull/76513
|
0df60f3b617e1cf6144a76512573c9e54ef4ce7c
|
e56e47faa7f22f1de4079ed5494daf49ba7efbf6
| 2021-12-08T12:47:38Z |
python
| 2021-12-08T23:53:38Z |
test/lib/ansible_test/_util/target/sanity/import/importer.py
|
"""Import the given python module(s) and report error(s) encountered."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
def main():
"""
Main program function used to isolate globals from imported code.
Changes to globals in imported modules on Python 2.x will overwrite our own globals.
"""
import os
import sys
import types
# preload an empty ansible._vendor module to prevent use of any embedded modules during the import test
vendor_module_name = 'ansible._vendor'
vendor_module = types.ModuleType(vendor_module_name)
vendor_module.__file__ = os.path.join(os.path.sep.join(os.path.abspath(__file__).split(os.path.sep)[:-8]), 'lib/ansible/_vendor/__init__.py')
vendor_module.__path__ = []
vendor_module.__package__ = vendor_module_name
sys.modules[vendor_module_name] = vendor_module
import ansible
import contextlib
import datetime
import json
import re
import runpy
import subprocess
import traceback
import warnings
ansible_path = os.path.dirname(os.path.dirname(ansible.__file__))
temp_path = os.environ['SANITY_TEMP_PATH'] + os.path.sep
external_python = os.environ.get('SANITY_EXTERNAL_PYTHON')
yaml_to_json_path = os.environ.get('SANITY_YAML_TO_JSON')
collection_full_name = os.environ.get('SANITY_COLLECTION_FULL_NAME')
collection_root = os.environ.get('ANSIBLE_COLLECTIONS_PATH')
import_type = os.environ.get('SANITY_IMPORTER_TYPE')
try:
# noinspection PyCompatibility
from importlib import import_module
except ImportError:
def import_module(name):
__import__(name)
return sys.modules[name]
try:
# noinspection PyCompatibility
from StringIO import StringIO
except ImportError:
from io import StringIO
if collection_full_name:
# allow importing code from collections when testing a collection
from ansible.module_utils.common.text.converters import to_bytes, to_text, to_native, text_type
# noinspection PyProtectedMember
from ansible.utils.collection_loader._collection_finder import _AnsibleCollectionFinder
from ansible.utils.collection_loader import _collection_finder
yaml_to_dict_cache = {}
# unique ISO date marker matching the one present in yaml_to_json.py
iso_date_marker = 'isodate:f23983df-f3df-453c-9904-bcd08af468cc:'
iso_date_re = re.compile('^%s([0-9]{4})-([0-9]{2})-([0-9]{2})$' % iso_date_marker)
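# example: '%s2021-12-08' % iso_date_marker parses back to datetime.date(2021, 12, 8)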
def parse_value(value):
"""Custom value parser for JSON deserialization that recognizes our internal ISO date format."""
if isinstance(value, text_type):
match = iso_date_re.search(value)
if match:
value = datetime.date(int(match.group(1)), int(match.group(2)), int(match.group(3)))
return value
def object_hook(data):
"""Object hook for custom ISO date deserialization from JSON."""
return dict((key, parse_value(value)) for key, value in data.items())
def yaml_to_dict(yaml, content_id):
"""
Return a Python dict version of the provided YAML.
Conversion is done in a subprocess since the current Python interpreter does not have access to PyYAML.
"""
if content_id in yaml_to_dict_cache:
return yaml_to_dict_cache[content_id]
try:
cmd = [external_python, yaml_to_json_path]
proc = subprocess.Popen([to_bytes(c) for c in cmd], # pylint: disable=consider-using-with
stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout_bytes, stderr_bytes = proc.communicate(to_bytes(yaml))
if proc.returncode != 0:
raise Exception('command %s failed with return code %d: %s' % ([to_native(c) for c in cmd], proc.returncode, to_native(stderr_bytes)))
data = yaml_to_dict_cache[content_id] = json.loads(to_text(stdout_bytes), object_hook=object_hook)
return data
except Exception as ex:
raise Exception('internal importer error - failed to parse yaml: %s' % to_native(ex))
_collection_finder._meta_yml_to_dict = yaml_to_dict # pylint: disable=protected-access
collection_loader = _AnsibleCollectionFinder(paths=[collection_root])
# noinspection PyProtectedMember
collection_loader._install() # pylint: disable=protected-access
else:
# do not support collection loading when not testing a collection
collection_loader = None
# remove all modules under the ansible package, except the preloaded vendor module
list(map(sys.modules.pop, [m for m in sys.modules if m.partition('.')[0] == ansible.__name__ and m != vendor_module_name]))
if import_type == 'module':
# pre-load an empty ansible package to prevent unwanted code in __init__.py from loading
# this more accurately reflects the environment that AnsiballZ runs modules under
# it also avoids issues with imports in the ansible package that are not allowed
ansible_module = types.ModuleType(ansible.__name__)
ansible_module.__file__ = ansible.__file__
ansible_module.__path__ = ansible.__path__
ansible_module.__package__ = ansible.__package__
sys.modules[ansible.__name__] = ansible_module
class ImporterAnsibleModuleException(Exception):
"""Exception thrown during initialization of ImporterAnsibleModule."""
class ImporterAnsibleModule:
"""Replacement for AnsibleModule to support import testing."""
def __init__(self, *args, **kwargs):
raise ImporterAnsibleModuleException()
class RestrictedModuleLoader:
"""Python module loader that restricts inappropriate imports."""
def __init__(self, path, name, restrict_to_module_paths):
self.path = path
self.name = name
self.loaded_modules = set()
self.restrict_to_module_paths = restrict_to_module_paths
def find_module(self, fullname, path=None):
"""Return self if the given fullname is restricted, otherwise return None.
:param fullname: str
:param path: str
:return: RestrictedModuleLoader | None
"""
if fullname in self.loaded_modules:
return None # ignore modules that are already being loaded
if is_name_in_namepace(fullname, ['ansible']):
if not self.restrict_to_module_paths:
return None # for non-modules, everything in the ansible namespace is allowed
if fullname in ('ansible.module_utils.basic',):
return self # intercept loading so we can modify the result
if is_name_in_namepace(fullname, ['ansible.module_utils', self.name]):
return None # module_utils and module under test are always allowed
if any(os.path.exists(candidate_path) for candidate_path in convert_ansible_name_to_absolute_paths(fullname)):
return self # restrict access to ansible files that exist
return None # ansible file does not exist, do not restrict access
if is_name_in_namepace(fullname, ['ansible_collections']):
if not collection_loader:
return self # restrict access to collections when we are not testing a collection
if not self.restrict_to_module_paths:
return None # for non-modules, everything in the ansible namespace is allowed
if is_name_in_namepace(fullname, ['ansible_collections...plugins.module_utils', self.name]):
return None # module_utils and module under test are always allowed
if collection_loader.find_module(fullname, path):
return self # restrict access to collection files that exist
return None # collection file does not exist, do not restrict access
# not a namespace we care about
return None
def load_module(self, fullname):
"""Raise an ImportError.
:type fullname: str
"""
if fullname == 'ansible.module_utils.basic':
module = self.__load_module(fullname)
# stop Ansible module execution during AnsibleModule instantiation
module.AnsibleModule = ImporterAnsibleModule
# no-op for _load_params since it may be called before instantiating AnsibleModule
module._load_params = lambda *args, **kwargs: {} # pylint: disable=protected-access
return module
raise ImportError('import of "%s" is not allowed in this context' % fullname)
def __load_module(self, fullname):
"""Load the requested module while avoiding infinite recursion.
:type fullname: str
:rtype: module
"""
self.loaded_modules.add(fullname)
return import_module(fullname)
def run(restrict_to_module_paths):
"""Main program function."""
base_dir = os.getcwd()
messages = set()
for path in sys.argv[1:] or sys.stdin.read().splitlines():
name = convert_relative_path_to_name(path)
test_python_module(path, name, base_dir, messages, restrict_to_module_paths)
if messages:
sys.exit(10)
def test_python_module(path, name, base_dir, messages, restrict_to_module_paths):
"""Test the given python module by importing it.
:type path: str
:type name: str
:type base_dir: str
:type messages: set[str]
:type restrict_to_module_paths: bool
"""
if name in sys.modules:
return # cannot be tested because it has already been loaded
is_ansible_module = (path.startswith('lib/ansible/modules/') or path.startswith('plugins/modules/')) and os.path.basename(path) != '__init__.py'
run_main = is_ansible_module
if path == 'lib/ansible/modules/async_wrapper.py':
# async_wrapper is a non-standard Ansible module (does not use AnsibleModule) so we cannot test the main function
run_main = False
capture_normal = Capture()
capture_main = Capture()
run_module_ok = False
try:
with monitor_sys_modules(path, messages):
with restrict_imports(path, name, messages, restrict_to_module_paths):
with capture_output(capture_normal):
import_module(name)
if run_main:
run_module_ok = is_ansible_module
with monitor_sys_modules(path, messages):
with restrict_imports(path, name, messages, restrict_to_module_paths):
with capture_output(capture_main):
runpy.run_module(name, run_name='__main__', alter_sys=True)
except ImporterAnsibleModuleException:
# module instantiated AnsibleModule without raising an exception
if not run_module_ok:
if is_ansible_module:
report_message(path, 0, 0, 'module-guard', "AnsibleModule instantiation not guarded by `if __name__ == '__main__'`", messages)
else:
report_message(path, 0, 0, 'non-module', "AnsibleModule instantiated by import of non-module", messages)
except BaseException as ex: # pylint: disable=locally-disabled, broad-except
# intentionally catch all exceptions, including calls to sys.exit
exc_type, _exc, exc_tb = sys.exc_info()
message = str(ex)
results = list(reversed(traceback.extract_tb(exc_tb)))
line = 0
offset = 0
full_path = os.path.join(base_dir, path)
base_path = base_dir + os.path.sep
source = None
# avoid line wraps in messages
message = re.sub(r'\n *', ': ', message)
for result in results:
if result[0] == full_path:
# save the line number for the file under test
line = result[1] or 0
if not source and result[0].startswith(base_path) and not result[0].startswith(temp_path):
# save the first path and line number in the traceback which is in our source tree
source = (os.path.relpath(result[0], base_path), result[1] or 0, 0)
if isinstance(ex, SyntaxError):
# SyntaxError has better information than the traceback
if ex.filename == full_path: # pylint: disable=locally-disabled, no-member
# syntax error was reported in the file under test
line = ex.lineno or 0 # pylint: disable=locally-disabled, no-member
offset = ex.offset or 0 # pylint: disable=locally-disabled, no-member
elif ex.filename.startswith(base_path) and not ex.filename.startswith(temp_path): # pylint: disable=locally-disabled, no-member
# syntax error was reported in our source tree
source = (os.path.relpath(ex.filename, base_path), ex.lineno or 0, ex.offset or 0) # pylint: disable=locally-disabled, no-member
# remove the filename and line number from the message
# either it was extracted above, or it's not really useful information
message = re.sub(r' \(.*?, line [0-9]+\)$', '', message)
if source and source[0] != path:
message += ' (at %s:%d:%d)' % (source[0], source[1], source[2])
report_message(path, line, offset, 'traceback', '%s: %s' % (exc_type.__name__, message), messages)
finally:
capture_report(path, capture_normal, messages)
capture_report(path, capture_main, messages)
def is_name_in_namepace(name, namespaces):
"""Returns True if the given name is one of the given namespaces, otherwise returns False."""
name_parts = name.split('.')
for namespace in namespaces:
namespace_parts = namespace.split('.')
length = min(len(name_parts), len(namespace_parts))
truncated_name = name_parts[0:length]
truncated_namespace = namespace_parts[0:length]
# empty parts in the namespace are treated as wildcards
# to simplify the comparison, use those empty parts to indicate the positions in the name to be empty as well
for idx, part in enumerate(truncated_namespace):
if not part:
truncated_name[idx] = part
# example: name=ansible, allowed_name=ansible.module_utils
# example: name=ansible.module_utils.system.ping, allowed_name=ansible.module_utils
if truncated_name == truncated_namespace:
return True
return False
def check_sys_modules(path, before, messages):
"""Check for unwanted changes to sys.modules.
:type path: str
:type before: dict[str, module]
:type messages: set[str]
"""
after = sys.modules
removed = set(before.keys()) - set(after.keys())
changed = set(key for key, value in before.items() if key in after and value != after[key])
# additions are checked by our custom PEP 302 loader, so we don't need to check them again here
for module in sorted(removed):
report_message(path, 0, 0, 'unload', 'unloading of "%s" in sys.modules is not supported' % module, messages)
for module in sorted(changed):
report_message(path, 0, 0, 'reload', 'reloading of "%s" in sys.modules is not supported' % module, messages)
def convert_ansible_name_to_absolute_paths(name):
"""Calculate the module path from the given name.
:type name: str
:rtype: list[str]
"""
return [
os.path.join(ansible_path, name.replace('.', os.path.sep)),
os.path.join(ansible_path, name.replace('.', os.path.sep)) + '.py',
]
def convert_relative_path_to_name(path):
"""Calculate the module name from the given path.
:type path: str
:rtype: str
"""
if path.endswith('/__init__.py'):
clean_path = os.path.dirname(path)
else:
clean_path = path
clean_path = os.path.splitext(clean_path)[0]
name = clean_path.replace(os.path.sep, '.')
if collection_loader:
# when testing collections the relative paths (and names) being tested are within the collection under test
name = 'ansible_collections.%s.%s' % (collection_full_name, name)
else:
# when testing ansible all files being imported reside under the lib directory
name = name[len('lib/'):]
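# e.g. path 'lib/ansible/modules/ping.py' becomes name 'ansible.modules.ping'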
return name
class Capture:
"""Captured output and/or exception."""
def __init__(self):
self.stdout = StringIO()
self.stderr = StringIO()
def capture_report(path, capture, messages):
"""Report on captured output.
:type path: str
:type capture: Capture
:type messages: set[str]
"""
if capture.stdout.getvalue():
first = capture.stdout.getvalue().strip().splitlines()[0].strip()
report_message(path, 0, 0, 'stdout', first, messages)
if capture.stderr.getvalue():
first = capture.stderr.getvalue().strip().splitlines()[0].strip()
report_message(path, 0, 0, 'stderr', first, messages)
def report_message(path, line, column, code, message, messages):
"""Report message if not already reported.
:type path: str
:type line: int
:type column: int
:type code: str
:type message: str
:type messages: set[str]
"""
message = '%s:%d:%d: %s: %s' % (path, line, column, code, message)
if message not in messages:
messages.add(message)
print(message)
@contextlib.contextmanager
def restrict_imports(path, name, messages, restrict_to_module_paths):
"""Restrict available imports.
:type path: str
:type name: str
:type messages: set[str]
:type restrict_to_module_paths: bool
"""
restricted_loader = RestrictedModuleLoader(path, name, restrict_to_module_paths)
# noinspection PyTypeChecker
sys.meta_path.insert(0, restricted_loader)
sys.path_importer_cache.clear()
try:
yield
finally:
if import_type == 'plugin':
from ansible.utils.collection_loader._collection_finder import _AnsibleCollectionFinder
_AnsibleCollectionFinder._remove() # pylint: disable=protected-access
if sys.meta_path[0] != restricted_loader:
report_message(path, 0, 0, 'metapath', 'changes to sys.meta_path[0] are not permitted', messages)
while restricted_loader in sys.meta_path:
# noinspection PyTypeChecker
sys.meta_path.remove(restricted_loader)
sys.path_importer_cache.clear()
@contextlib.contextmanager
def monitor_sys_modules(path, messages):
"""Monitor sys.modules for unwanted changes, reverting any additions made to our own namespaces."""
snapshot = sys.modules.copy()
try:
yield
finally:
check_sys_modules(path, snapshot, messages)
for key in set(sys.modules.keys()) - set(snapshot.keys()):
if is_name_in_namepace(key, ('ansible', 'ansible_collections')):
del sys.modules[key] # only unload our own code since we know it's native Python
@contextlib.contextmanager
def capture_output(capture):
"""Capture sys.stdout and sys.stderr.
:type capture: Capture
"""
old_stdout = sys.stdout
old_stderr = sys.stderr
sys.stdout = capture.stdout
sys.stderr = capture.stderr
# clear all warnings registries to make all warnings available
for module in sys.modules.values():
try:
# noinspection PyUnresolvedReferences
module.__warningregistry__.clear()
except AttributeError:
pass
with warnings.catch_warnings():
warnings.simplefilter('error')
if sys.version_info[0] == 2:
warnings.filterwarnings(
"ignore",
"Python 2 is no longer supported by the Python core team. Support for it is now deprecated in cryptography,"
" and will be removed in a future release.")
warnings.filterwarnings(
"ignore",
"Python 2 is no longer supported by the Python core team. Support for it is now deprecated in cryptography,"
" and will be removed in the next release.")
if sys.version_info[:2] == (3, 5):
warnings.filterwarnings(
"ignore",
"Python 3.5 support will be dropped in the next release ofcryptography. Please upgrade your Python.")
warnings.filterwarnings(
"ignore",
"Python 3.5 support will be dropped in the next release of cryptography. Please upgrade your Python.")
if sys.version_info >= (3, 10):
# Temporary solution for Python 3.10 until find_spec is implemented in RestrictedModuleLoader.
# That implementation is dependent on find_spec being added to the controller's collection loader first.
# The warning text is: main.<locals>.RestrictedModuleLoader.find_spec() not found; falling back to find_module()
warnings.filterwarnings(
"ignore",
r"main\.<locals>\.RestrictedModuleLoader\.find_spec\(\) not found; falling back to find_module\(\)",
)
# Temporary solution for Python 3.10 until exec_module is implemented in RestrictedModuleLoader.
# That implementation is dependent on exec_module being added to the controller's collection loader first.
# The warning text is: main.<locals>.RestrictedModuleLoader.exec_module() not found; falling back to load_module()
warnings.filterwarnings(
"ignore",
r"main\.<locals>\.RestrictedModuleLoader\.exec_module\(\) not found; falling back to load_module\(\)",
)
# Temporary solution for Python 3.10 until find_spec is implemented in the controller's collection loader.
warnings.filterwarnings(
"ignore",
r"_Ansible.*Finder\.find_spec\(\) not found; falling back to find_module\(\)",
)
# Temporary solution for Python 3.10 until exec_module is implemented in the controller's collection loader.
warnings.filterwarnings(
"ignore",
r"_Ansible.*Loader\.exec_module\(\) not found; falling back to load_module\(\)",
)
# Temporary solution until there is a vendored copy of distutils.version in module_utils.
# Some of our dependencies such as packaging.tags also import distutils, which we have no control over
# The warning text is: The distutils package is deprecated and slated for removal in Python 3.12.
# Use setuptools or check PEP 632 for potential alternatives
warnings.filterwarnings(
"ignore",
r"The distutils package is deprecated and slated for removal in Python 3\.12\. .*",
)
try:
yield
finally:
sys.stdout = old_stdout
sys.stderr = old_stderr
run(import_type == 'module')
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,442 |
Investigate if AnsibleJ2Template should be based on NativeTemplate
|
### Summary
`AnsibleEnvironment.from_string` returns `AnsibleJ2Template`, which is currently based on Jinja's non-native `Template`; since we have moved over to `NativeEnvironment`, it should probably be based on `NativeTemplate` instead.
If a `render` method is called on the returned template, its non-native version is called, which is unexpected given that `NativeEnvironment` was used to create the template. This [creates issues](https://dashboard.zuul.ansible.com/t/ansible/build/8e423341f7b74189a59983bd666b6212).
We do not see this issue in ansible-core because we do not call the `render` method directly but do it in a custom way instead. However, fixing this might prevent issues in the future.
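A minimal sketch of the suggested direction, assuming `jinja2.nativetypes` is available (an illustration, not the actual ansible-core change):

```python
from jinja2.nativetypes import NativeEnvironment

# NativeEnvironment.from_string returns a NativeTemplate, so render()
# preserves native Python types instead of stringifying the result
template = NativeEnvironment().from_string("{{ a }}")
assert template.render(a=1) == 1  # int, not the string '1'
```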
### Issue Type
Bug Report
### Component Name
lib/ansible/template/template.py
### Ansible Version
```console
$ ansible --version
devel (2.13.0.dev0) only
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
N/A
### Steps to Reproduce
```python
from ansible.template import Templar
environment = Templar(loader=None).environment
template = environment.from_string("{{ a }}")
result = template.render({'a': 1})
```
### Expected Results
-
### Actual Results
```console
-
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76442
|
https://github.com/ansible/ansible/pull/76471
|
37c80ea89307dc2f6c8cb047905ab85c3217a11c
|
19a58859d6ed96f9b68db94094795b532edb350c
| 2021-12-02T17:29:01Z |
python
| 2021-12-09T16:29:38Z |
lib/ansible/template/template.py
|
# (c) 2012, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import jinja2
__all__ = ['AnsibleJ2Template']
class AnsibleJ2Template(jinja2.environment.Template):
'''
A helper class, which prevents Jinja2 from running AnsibleJ2Vars through dict().
Without this, {% include %} and similar will create new contexts unlike the special
one created in Templar.template. This ensures they are all alike, except for
potential locals.
'''
def new_context(self, vars=None, shared=False, locals=None):
if vars is None:
vars = dict(self.globals or ())
if isinstance(vars, dict):
vars = vars.copy()
if locals is not None:
vars.update(locals)
else:
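# vars is an AnsibleJ2Vars instance: merge locals without flattening it into a plain dict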
vars = vars.add_locals(locals)
return self.environment.context_class(self.environment, vars, self.name, self.blocks)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,446 |
ansible-test sanity linking incorrect documentation
|
### Summary
When running ansible-test over a collection, if there are errors or failures, it links you to documentation to help resolve the issue. This functionality is currently broken: it links to `docs.ansible.com/ansible/` instead of `docs.ansible.com/ansible-core/`, which leads to 404s.
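A minimal sketch of the kind of URL selection that would avoid the 404s (hypothetical; the actual fix may differ):

```python
# hypothetical sketch: sanity-test docs live under the ansible-core tree,
# so the base URL should use /ansible-core/ rather than /ansible/
url_version = 'devel'  # or a major.minor such as '2.12' for releases
testing_docs_url = 'https://docs.ansible.com/ansible-core/%s/dev_guide/testing' % url_version
```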
### Issue Type
Bug Report
### Component Name
ansible-test
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.2]
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
Fedora 34
### Steps to Reproduce
```yaml
ansible-test sanity --docker
```
### Expected Results
Expect the docs link to navigate to the correct documentation.
### Actual Results
```console
leads to documentation that 404s. Ex: https://docs.ansible.com/ansible/2.12/dev_guide/testing/sanity/pylint.html
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76446
|
https://github.com/ansible/ansible/pull/76514
|
19a58859d6ed96f9b68db94094795b532edb350c
|
16cdac66fe01a2492e6737849d0fb81e812ee96b
| 2021-12-02T21:08:12Z |
python
| 2021-12-09T16:58:14Z |
changelogs/fragments/ansible-test-links.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,446 |
ansible-test sanity linking incorrect documentation
|
### Summary
When running ansible-test over a collection, if there are errors or failures, it links you to documentation to help resolve the issue. This functionality is currently broken: it links to `docs.ansible.com/ansible/` instead of `docs.ansible.com/ansible-core/`, which leads to 404s.
### Issue Type
Bug Report
### Component Name
ansible-test
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.2]
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
Fedora 34
### Steps to Reproduce
```yaml
ansible-test sanity --docker
```
### Expected Results
Expect the docs link to navigate to the correct documentation.
### Actual Results
```console
leads to documentation that 404s. Ex: https://docs.ansible.com/ansible/2.12/dev_guide/testing/sanity/pylint.html
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76446
|
https://github.com/ansible/ansible/pull/76514
|
19a58859d6ed96f9b68db94094795b532edb350c
|
16cdac66fe01a2492e6737849d0fb81e812ee96b
| 2021-12-02T21:08:12Z |
python
| 2021-12-09T16:58:14Z |
test/lib/ansible_test/_internal/commands/integration/cloud/aws.py
|
"""AWS plugin for integration tests."""
from __future__ import annotations
import os
import uuid
import configparser
import typing as t
from ....util import (
ApplicationError,
display,
)
from ....config import (
IntegrationConfig,
)
from ....target import (
IntegrationTarget,
)
from ....core_ci import (
AnsibleCoreCI,
)
from ....host_configs import (
OriginConfig,
)
from . import (
CloudEnvironment,
CloudEnvironmentConfig,
CloudProvider,
)
class AwsCloudProvider(CloudProvider):
"""AWS cloud provider plugin. Sets up cloud resources before delegation."""
def __init__(self, args): # type: (IntegrationConfig) -> None
super().__init__(args)
self.uses_config = True
def filter(self, targets, exclude): # type: (t.Tuple[IntegrationTarget, ...], t.List[str]) -> None
"""Filter out the cloud tests when the necessary config and resources are not available."""
aci = self._create_ansible_core_ci()
if aci.available:
return
super().filter(targets, exclude)
def setup(self): # type: () -> None
"""Setup the cloud resource before delegation and register a cleanup callback."""
super().setup()
aws_config_path = os.path.expanduser('~/.aws')
if os.path.exists(aws_config_path) and isinstance(self.args.controller, OriginConfig):
raise ApplicationError('Rename "%s" or use the --docker or --remote option to isolate tests.' % aws_config_path)
if not self._use_static_config():
self._setup_dynamic()
def _setup_dynamic(self): # type: () -> None
"""Request AWS credentials through the Ansible Core CI service."""
display.info('Provisioning %s cloud environment.' % self.platform, verbosity=1)
config = self._read_config_template()
aci = self._create_ansible_core_ci()
response = aci.start()
if not self.args.explain:
credentials = response['aws']['credentials']
values = dict(
ACCESS_KEY=credentials['access_key'],
SECRET_KEY=credentials['secret_key'],
SECURITY_TOKEN=credentials['session_token'],
REGION='us-east-1',
)
display.sensitive.add(values['SECRET_KEY'])
display.sensitive.add(values['SECURITY_TOKEN'])
config = self._populate_config_template(config, values)
self._write_config(config)
def _create_ansible_core_ci(self): # type: () -> AnsibleCoreCI
"""Return an AWS instance of AnsibleCoreCI."""
return AnsibleCoreCI(self.args, 'aws', 'aws', 'aws', persist=False)
class AwsCloudEnvironment(CloudEnvironment):
"""AWS cloud environment plugin. Updates integration test environment after delegation."""
def get_environment_config(self): # type: () -> CloudEnvironmentConfig
"""Return environment configuration for use in the test environment after delegation."""
parser = configparser.ConfigParser()
parser.read(self.config_path)
ansible_vars = dict(
resource_prefix=self.resource_prefix,
tiny_prefix=uuid.uuid4().hex[0:12]
)
# noinspection PyTypeChecker
ansible_vars.update(dict(parser.items('default')))
display.sensitive.add(ansible_vars.get('aws_secret_key'))
display.sensitive.add(ansible_vars.get('security_token'))
if 'aws_cleanup' not in ansible_vars:
ansible_vars['aws_cleanup'] = not self.managed
env_vars = {'ANSIBLE_DEBUG_BOTOCORE_LOGS': 'True'}
return CloudEnvironmentConfig(
env_vars=env_vars,
ansible_vars=ansible_vars,
callback_plugins=['aws_resource_actions'],
)
def on_failure(self, target, tries): # type: (IntegrationTarget, int) -> None
"""Callback to run when an integration target fails."""
if not tries and self.managed:
display.notice('If %s failed due to permissions, the IAM test policy may need to be updated. '
'https://docs.ansible.com/ansible/devel/dev_guide/platforms/aws_guidelines.html#aws-permissions-for-integration-tests.'
% target.name)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,446 |
ansible-test sanity linking incorrect documentation
|
### Summary
When running ansible-test over a collection, if there are errors or failures, it links you to documentation to help resolve the issue. This functionality is currently broken: it links to `docs.ansible.com/ansible/` instead of `docs.ansible.com/ansible-core/`, which leads to 404s.
### Issue Type
Bug Report
### Component Name
ansible-test
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.2]
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
Fedora 34
### Steps to Reproduce
```yaml
ansible-test sanity --docker
```
### Expected Results
Expect the docs link to navigate to the correct documentation.
### Actual Results
```console
leads to documentation that 404s. Ex: https://docs.ansible.com/ansible/2.12/dev_guide/testing/sanity/pylint.html
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76446
|
https://github.com/ansible/ansible/pull/76514
|
19a58859d6ed96f9b68db94094795b532edb350c
|
16cdac66fe01a2492e6737849d0fb81e812ee96b
| 2021-12-02T21:08:12Z |
python
| 2021-12-09T16:58:14Z |
test/lib/ansible_test/_internal/commands/sanity/integration_aliases.py
|
"""Sanity test to check integration test aliases."""
from __future__ import annotations
import json
import textwrap
import os
import typing as t
from . import (
SanitySingleVersion,
SanityMessage,
SanityFailure,
SanitySuccess,
SanityTargets,
SANITY_ROOT,
)
from ...test import (
TestResult,
)
from ...config import (
SanityConfig,
)
from ...target import (
filter_targets,
walk_posix_integration_targets,
walk_windows_integration_targets,
walk_integration_targets,
walk_module_targets,
CompletionTarget,
)
from ..integration.cloud import (
get_cloud_platforms,
)
from ...io import (
read_text_file,
)
from ...util import (
display,
raw_command,
)
from ...util_common import (
write_json_test_results,
ResultType,
)
from ...host_configs import (
PythonConfig,
)
class IntegrationAliasesTest(SanitySingleVersion):
"""Sanity test to evaluate integration test aliases."""
CI_YML = '.azure-pipelines/azure-pipelines.yml'
TEST_ALIAS_PREFIX = 'shippable' # this will be changed at some point in the future
DISABLED = 'disabled/'
UNSTABLE = 'unstable/'
UNSUPPORTED = 'unsupported/'
EXPLAIN_URL = 'https://docs.ansible.com/ansible/devel/dev_guide/testing/sanity/integration-aliases.html'
TEMPLATE_DISABLED = """
The following integration tests are **disabled** [[explain]({explain_url}#disabled)]:
{tests}
Consider fixing the integration tests before or alongside changes.
"""
TEMPLATE_UNSTABLE = """
The following integration tests are **unstable** [[explain]({explain_url}#unstable)]:
{tests}
Tests may need to be restarted due to failures unrelated to changes.
"""
TEMPLATE_UNSUPPORTED = """
The following integration tests are **unsupported** [[explain]({explain_url}#unsupported)]:
{tests}
Consider running the tests manually or extending test infrastructure to add support.
"""
TEMPLATE_UNTESTED = """
The following modules have **no integration tests** [[explain]({explain_url}#untested)]:
{tests}
Consider adding integration tests before or alongside changes.
"""
ansible_only = True
def __init__(self):
super().__init__()
self._ci_config = {} # type: t.Dict[str, t.Any]
self._ci_test_groups = {} # type: t.Dict[str, t.List[int]]
@property
def can_ignore(self): # type: () -> bool
"""True if the test supports ignore entries."""
return False
@property
def no_targets(self): # type: () -> bool
"""True if the test does not use test targets. Mutually exclusive with all_targets."""
return True
def load_ci_config(self, python): # type: (PythonConfig) -> t.Dict[str, t.Any]
"""Load and return the CI YAML configuration."""
if not self._ci_config:
self._ci_config = self.load_yaml(python, self.CI_YML)
return self._ci_config
@property
def ci_test_groups(self): # type: () -> t.Dict[str, t.List[int]]
"""Return a dictionary of CI test names and their group(s)."""
if not self._ci_test_groups:
test_groups = {}
for stage in self._ci_config['stages']:
for job in stage['jobs']:
if job.get('template') != 'templates/matrix.yml':
continue
parameters = job['parameters']
groups = parameters.get('groups', [])
test_format = parameters.get('testFormat', '{0}')
test_group_format = parameters.get('groupFormat', '{0}/{{1}}')
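# example (illustrative): testFormat 'linux/{0}' and groupFormat '{0}/{{1}}' expand target 'centos7' in group 1 to 'linux/centos7/1'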
for target in parameters['targets']:
test = target.get('test') or target.get('name')
if groups:
tests_formatted = [test_group_format.format(test_format).format(test, group) for group in groups]
else:
tests_formatted = [test_format.format(test)]
for test_formatted in tests_formatted:
parts = test_formatted.split('/')
key = parts[0]
if key in ('sanity', 'units'):
continue
try:
group = int(parts[-1])
except ValueError:
continue
if group < 1 or group > 99:
continue
group_set = test_groups.setdefault(key, set())
group_set.add(group)
self._ci_test_groups = dict((key, sorted(value)) for key, value in test_groups.items())
return self._ci_test_groups
def format_test_group_alias(self, name, fallback=''): # type: (str, str) -> str
"""Return a test group alias using the given name and fallback."""
group_numbers = self.ci_test_groups.get(name, None)
if group_numbers:
if min(group_numbers) != 1:
display.warning('Min test group "%s" in %s is %d instead of 1.' % (name, self.CI_YML, min(group_numbers)), unique=True)
if max(group_numbers) != len(group_numbers):
display.warning('Max test group "%s" in %s is %d instead of %d.' % (name, self.CI_YML, max(group_numbers), len(group_numbers)), unique=True)
if max(group_numbers) > 9:
alias = '%s/%s/group(%s)/' % (self.TEST_ALIAS_PREFIX, name, '|'.join(str(i) for i in range(min(group_numbers), max(group_numbers) + 1)))
elif len(group_numbers) > 1:
alias = '%s/%s/group[%d-%d]/' % (self.TEST_ALIAS_PREFIX, name, min(group_numbers), max(group_numbers))
else:
alias = '%s/%s/group%d/' % (self.TEST_ALIAS_PREFIX, name, min(group_numbers))
elif fallback:
alias = '%s/%s/group%d/' % (self.TEST_ALIAS_PREFIX, fallback, 1)
else:
raise Exception('cannot find test group "%s" in %s' % (name, self.CI_YML))
return alias
def load_yaml(self, python, path): # type: (PythonConfig, str) -> t.Dict[str, t.Any]
"""Load the specified YAML file and return the contents."""
yaml_to_json_path = os.path.join(SANITY_ROOT, self.name, 'yaml_to_json.py')
return json.loads(raw_command([python.path, yaml_to_json_path], data=read_text_file(path), capture=True)[0])
def test(self, args, targets, python): # type: (SanityConfig, SanityTargets, PythonConfig) -> TestResult
if args.explain:
return SanitySuccess(self.name)
if not os.path.isfile(self.CI_YML):
return SanityFailure(self.name, messages=[SanityMessage(
message='file missing',
path=self.CI_YML,
)])
results = dict(
comments=[],
labels={},
)
self.load_ci_config(python)
self.check_changes(args, results)
write_json_test_results(ResultType.BOT, 'data-sanity-ci.json', results)
messages = []
messages += self.check_posix_targets(args)
messages += self.check_windows_targets()
if messages:
return SanityFailure(self.name, messages=messages)
return SanitySuccess(self.name)
def check_posix_targets(self, args): # type: (SanityConfig) -> t.List[SanityMessage]
"""Check POSIX integration test targets and return messages with any issues found."""
posix_targets = tuple(walk_posix_integration_targets())
clouds = get_cloud_platforms(args, posix_targets)
cloud_targets = ['cloud/%s/' % cloud for cloud in clouds]
all_cloud_targets = tuple(filter_targets(posix_targets, ['cloud/'], directories=False, errors=False))
invalid_cloud_targets = tuple(filter_targets(all_cloud_targets, cloud_targets, include=False, directories=False, errors=False))
messages = []
for target in invalid_cloud_targets:
for alias in target.aliases:
if alias.startswith('cloud/') and alias != 'cloud/':
if any(alias.startswith(cloud_target) for cloud_target in cloud_targets):
continue
messages.append(SanityMessage('invalid alias `%s`' % alias, '%s/aliases' % target.path))
messages += self.check_ci_group(
targets=tuple(filter_targets(posix_targets, ['cloud/', '%s/generic/' % self.TEST_ALIAS_PREFIX], include=False, directories=False, errors=False)),
find=self.format_test_group_alias('linux').replace('linux', 'posix'),
find_incidental=['%s/posix/incidental/' % self.TEST_ALIAS_PREFIX],
)
messages += self.check_ci_group(
targets=tuple(filter_targets(posix_targets, ['%s/generic/' % self.TEST_ALIAS_PREFIX], directories=False, errors=False)),
find=self.format_test_group_alias('generic'),
)
for cloud in clouds:
if cloud == 'httptester':
find = self.format_test_group_alias('linux').replace('linux', 'posix')
find_incidental = ['%s/posix/incidental/' % self.TEST_ALIAS_PREFIX]
else:
find = self.format_test_group_alias(cloud, 'generic')
find_incidental = ['%s/%s/incidental/' % (self.TEST_ALIAS_PREFIX, cloud), '%s/cloud/incidental/' % self.TEST_ALIAS_PREFIX]
messages += self.check_ci_group(
targets=tuple(filter_targets(posix_targets, ['cloud/%s/' % cloud], directories=False, errors=False)),
find=find,
find_incidental=find_incidental,
)
return messages
def check_windows_targets(self):
"""
:rtype: list[SanityMessage]
"""
windows_targets = tuple(walk_windows_integration_targets())
messages = []
messages += self.check_ci_group(
targets=windows_targets,
find=self.format_test_group_alias('windows'),
find_incidental=['%s/windows/incidental/' % self.TEST_ALIAS_PREFIX],
)
return messages
def check_ci_group(
self,
targets, # type: t.Tuple[CompletionTarget, ...]
find, # type: str
find_incidental=None, # type: t.Optional[t.List[str]]
): # type: (...) -> t.List[SanityMessage]
"""Check the CI groups set in the provided targets and return a list of messages with any issues found."""
all_paths = set(target.path for target in targets)
supported_paths = set(target.path for target in filter_targets(targets, [find], directories=False, errors=False))
unsupported_paths = set(target.path for target in filter_targets(targets, [self.UNSUPPORTED], directories=False, errors=False))
if find_incidental:
incidental_paths = set(target.path for target in filter_targets(targets, find_incidental, directories=False, errors=False))
else:
incidental_paths = set()
unassigned_paths = all_paths - supported_paths - unsupported_paths - incidental_paths
conflicting_paths = supported_paths & unsupported_paths
unassigned_message = 'missing alias `%s` or `%s`' % (find.strip('/'), self.UNSUPPORTED.strip('/'))
conflicting_message = 'conflicting alias `%s` and `%s`' % (find.strip('/'), self.UNSUPPORTED.strip('/'))
messages = []
for path in unassigned_paths:
messages.append(SanityMessage(unassigned_message, '%s/aliases' % path))
for path in conflicting_paths:
messages.append(SanityMessage(conflicting_message, '%s/aliases' % path))
return messages
def check_changes(self, args, results): # type: (SanityConfig, t.Dict[str, t.Any]) -> None
"""Check changes and store results in the provided results dictionary."""
integration_targets = list(walk_integration_targets())
module_targets = list(walk_module_targets())
integration_targets_by_name = dict((target.name, target) for target in integration_targets)
module_names_by_path = dict((target.path, target.module) for target in module_targets)
disabled_targets = []
unstable_targets = []
unsupported_targets = []
for command in [command for command in args.metadata.change_description.focused_command_targets if 'integration' in command]:
for target in args.metadata.change_description.focused_command_targets[command]:
if self.DISABLED in integration_targets_by_name[target].aliases:
disabled_targets.append(target)
elif self.UNSTABLE in integration_targets_by_name[target].aliases:
unstable_targets.append(target)
elif self.UNSUPPORTED in integration_targets_by_name[target].aliases:
unsupported_targets.append(target)
untested_modules = []
for path in args.metadata.change_description.no_integration_paths:
module = module_names_by_path.get(path)
if module:
untested_modules.append(module)
comments = [
self.format_comment(self.TEMPLATE_DISABLED, disabled_targets),
self.format_comment(self.TEMPLATE_UNSTABLE, unstable_targets),
self.format_comment(self.TEMPLATE_UNSUPPORTED, unsupported_targets),
self.format_comment(self.TEMPLATE_UNTESTED, untested_modules),
]
comments = [comment for comment in comments if comment]
labels = dict(
needs_tests=bool(untested_modules),
disabled_tests=bool(disabled_targets),
unstable_tests=bool(unstable_targets),
unsupported_tests=bool(unsupported_targets),
)
results['comments'] += comments
results['labels'].update(labels)
def format_comment(self, template, targets): # type: (str, t.List[str]) -> t.Optional[str]
"""Format and return a comment based on the given template and targets, or None if there are no targets."""
if not targets:
return None
tests = '\n'.join('- %s' % target for target in targets)
data = dict(
explain_url=self.EXPLAIN_URL,
tests=tests,
)
message = textwrap.dedent(template).strip().format(**data)
return message
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,446 |
ansible-test sanity linking incorrect documentation
|
### Summary
When running ansible-test over a collection, if there are errors or failures, it links you to documentation to help resolve the issue. This functionality is currently broken: it links to `docs.ansible.com/ansible/` instead of `docs.ansible.com/ansible-core/`, which leads to 404s.
### Issue Type
Bug Report
### Component Name
ansible-test
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.2]
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
Fedora 34
### Steps to Reproduce
```yaml
ansible-test sanity --docker
```
### Expected Results
Expect the docs link to navigate to the correct documentation.
### Actual Results
```console
leads to documentation that 404s. Ex: https://docs.ansible.com/ansible/2.12/dev_guide/testing/sanity/pylint.html
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76446
|
https://github.com/ansible/ansible/pull/76514
|
19a58859d6ed96f9b68db94094795b532edb350c
|
16cdac66fe01a2492e6737849d0fb81e812ee96b
| 2021-12-02T21:08:12Z |
python
| 2021-12-09T16:58:14Z |
test/lib/ansible_test/_internal/test.py
|
"""Classes for storing and processing test results."""
from __future__ import annotations
import datetime
import re
import typing as t
from .util import (
display,
get_ansible_version,
)
from .util_common import (
write_text_test_results,
write_json_test_results,
ResultType,
)
from .metadata import (
Metadata,
)
from .config import (
TestConfig,
)
from . import junit_xml
def calculate_best_confidence(choices, metadata): # type: (t.Tuple[t.Tuple[str, int], ...], Metadata) -> int
"""Return the best confidence value available from the given choices and metadata."""
best_confidence = 0
for path, line in choices:
confidence = calculate_confidence(path, line, metadata)
best_confidence = max(confidence, best_confidence)
return best_confidence
def calculate_confidence(path, line, metadata): # type: (str, int, Metadata) -> int
"""Return the confidence level for a test result associated with the given file path and line number."""
ranges = metadata.changes.get(path)
# no changes were made to the file
if not ranges:
return 0
# changes were made to the same file and line
if any(r[0] <= line <= r[1] for r in ranges):
return 100
# changes were made to the same file and the line number is unknown
if line == 0:
return 75
# changes were made to the same file and the line number is different
return 50
class TestResult:
"""Base class for test results."""
def __init__(self, command, test, python_version=None): # type: (str, str, t.Optional[str]) -> None
self.command = command
self.test = test
self.python_version = python_version
self.name = self.test or self.command
if self.python_version:
self.name += '-python-%s' % self.python_version
def write(self, args): # type: (TestConfig) -> None
"""Write the test results to various locations."""
self.write_console()
self.write_bot(args)
if args.lint:
self.write_lint()
if args.junit:
self.write_junit(args)
def write_console(self): # type: () -> None
"""Write results to console."""
def write_lint(self): # type: () -> None
"""Write lint results to stdout."""
def write_bot(self, args): # type: (TestConfig) -> None
"""Write results to a file for ansibullbot to consume."""
def write_junit(self, args): # type: (TestConfig) -> None
"""Write results to a junit XML file."""
def create_result_name(self, extension): # type: (str) -> str
"""Return the name of the result file using the given extension."""
name = 'ansible-test-%s' % self.command
if self.test:
name += '-%s' % self.test
if self.python_version:
name += '-python-%s' % self.python_version
name += extension
return name
def save_junit(self, args, test_case): # type: (TestConfig, junit_xml.TestCase) -> None
"""Save the given test case results to disk as JUnit XML."""
suites = junit_xml.TestSuites(
suites=[
junit_xml.TestSuite(
name='ansible-test',
cases=[test_case],
timestamp=datetime.datetime.utcnow(),
),
],
)
report = suites.to_pretty_xml()
if args.explain:
return
write_text_test_results(ResultType.JUNIT, self.create_result_name('.xml'), report)
class TestTimeout(TestResult):
"""Test timeout."""
def __init__(self, timeout_duration): # type: (int) -> None
super().__init__(command='timeout', test='')
self.timeout_duration = timeout_duration
def write(self, args): # type: (TestConfig) -> None
"""Write the test results to various locations."""
message = 'Tests were aborted after exceeding the %d minute time limit.' % self.timeout_duration
# Include a leading newline to improve readability on Shippable "Tests" tab.
# Without this, the first line becomes indented.
output = '''
One or more of the following situations may be responsible:
- Code changes have resulted in tests that hang or run for an excessive amount of time.
- Tests have been added which exceed the time limit when combined with existing tests.
- Test infrastructure and/or external dependencies are operating slower than normal.'''
if args.coverage:
output += '\n- Additional overhead from collecting code coverage has resulted in tests exceeding the time limit.'
output += '\n\nConsult the console log for additional details on where the timeout occurred.'
timestamp = datetime.datetime.utcnow()
suites = junit_xml.TestSuites(
suites=[
junit_xml.TestSuite(
name='ansible-test',
timestamp=timestamp,
cases=[
junit_xml.TestCase(
name='timeout',
classname='timeout',
errors=[
junit_xml.TestError(
message=message,
),
],
),
],
)
],
)
report = suites.to_pretty_xml()
write_text_test_results(ResultType.JUNIT, self.create_result_name('.xml'), report)
class TestSuccess(TestResult):
"""Test success."""
def write_junit(self, args): # type: (TestConfig) -> None
"""Write results to a junit XML file."""
test_case = junit_xml.TestCase(classname=self.command, name=self.name)
self.save_junit(args, test_case)
class TestSkipped(TestResult):
"""Test skipped."""
def __init__(self, command, test, python_version=None): # type: (str, str, t.Optional[str]) -> None
super().__init__(command, test, python_version)
self.reason = None # type: t.Optional[str]
def write_console(self): # type: () -> None
"""Write results to console."""
if self.reason:
display.warning(self.reason)
else:
display.info('No tests applicable.', verbosity=1)
def write_junit(self, args): # type: (TestConfig) -> None
"""Write results to a junit XML file."""
test_case = junit_xml.TestCase(
classname=self.command,
name=self.name,
skipped=self.reason or 'No tests applicable.',
)
self.save_junit(args, test_case)
class TestFailure(TestResult):
"""Test failure."""
def __init__(
self,
command, # type: str
test, # type: str
python_version=None, # type: t.Optional[str]
messages=None, # type: t.Optional[t.List[TestMessage]]
summary=None, # type: t.Optional[str]
):
super().__init__(command, test, python_version)
if messages:
messages = sorted(messages)
else:
messages = []
self.messages = messages
self.summary = summary
def write(self, args): # type: (TestConfig) -> None
"""Write the test results to various locations."""
if args.metadata.changes:
self.populate_confidence(args.metadata)
super().write(args)
def write_console(self): # type: () -> None
"""Write results to console."""
if self.summary:
display.error(self.summary)
else:
if self.python_version:
specifier = ' on python %s' % self.python_version
else:
specifier = ''
display.error('Found %d %s issue(s)%s which need to be resolved:' % (len(self.messages), self.test or self.command, specifier))
for message in self.messages:
display.error(message.format(show_confidence=True))
doc_url = self.find_docs()
if doc_url:
display.info('See documentation for help: %s' % doc_url)
def write_lint(self): # type: () -> None
"""Write lint results to stdout."""
if self.summary:
command = self.format_command()
message = 'The test `%s` failed. See stderr output for details.' % command
path = ''
message = TestMessage(message, path)
print(message)
else:
for message in self.messages:
print(message)
def write_junit(self, args): # type: (TestConfig) -> None
"""Write results to a junit XML file."""
title = self.format_title()
output = self.format_block()
test_case = junit_xml.TestCase(
classname=self.command,
name=self.name,
failures=[
junit_xml.TestFailure(
message=title,
output=output,
),
],
)
self.save_junit(args, test_case)
def write_bot(self, args): # type: (TestConfig) -> None
"""Write results to a file for ansibullbot to consume."""
docs = self.find_docs()
message = self.format_title(help_link=docs)
output = self.format_block()
if self.messages:
verified = all((m.confidence or 0) >= 50 for m in self.messages)
else:
verified = False
bot_data = dict(
verified=verified,
docs=docs,
results=[
dict(
message=message,
output=output,
),
],
)
if args.explain:
return
write_json_test_results(ResultType.BOT, self.create_result_name('.json'), bot_data)
def populate_confidence(self, metadata): # type: (Metadata) -> None
"""Populate test result confidence using the provided metadata."""
for message in self.messages:
if message.confidence is None:
message.confidence = calculate_confidence(message.path, message.line, metadata)
def format_command(self): # type: () -> str
"""Return a string representing the CLI command associated with the test failure."""
command = 'ansible-test %s' % self.command
if self.test:
command += ' --test %s' % self.test
if self.python_version:
command += ' --python %s' % self.python_version
return command
def find_docs(self):
"""
:rtype: str
"""
if self.command != 'sanity':
return None # only sanity tests have docs links
# Use the major.minor version for the URL only if this a release that
# matches the pattern 2.4.0, otherwise, use 'devel'
ansible_version = get_ansible_version()
url_version = 'devel'
if re.search(r'^[0-9.]+$', ansible_version):
url_version = '.'.join(ansible_version.split('.')[:2])
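# note: this pre-fix code builds links under the /ansible/ docs tree, which is what issue 76446 reports as 404ing for ansible-core releases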
testing_docs_url = 'https://docs.ansible.com/ansible/%s/dev_guide/testing' % url_version
url = '%s/%s/' % (testing_docs_url, self.command)
if self.test:
url += '%s.html' % self.test
return url
def format_title(self, help_link=None): # type: (t.Optional[str]) -> str
"""Return a string containing a title/heading for this test failure, including an optional help link to explain the test."""
command = self.format_command()
if self.summary:
reason = 'the error'
else:
reason = '1 error' if len(self.messages) == 1 else '%d errors' % len(self.messages)
if help_link:
help_link_markup = ' [[explain](%s)]' % help_link
else:
help_link_markup = ''
title = 'The test `%s`%s failed with %s:' % (command, help_link_markup, reason)
return title
def format_block(self): # type: () -> str
"""Format the test summary or messages as a block of text and return the result."""
if self.summary:
block = self.summary
else:
block = '\n'.join(m.format() for m in self.messages)
message = block.strip()
# Hack to remove ANSI color reset code from SubprocessError messages.
message = message.replace(display.clear, '')
return message
class TestMessage:
"""Single test message for one file."""
def __init__(
self,
message, # type: str
path, # type: str
line=0, # type: int
column=0, # type: int
level='error', # type: str
code=None, # type: t.Optional[str]
confidence=None, # type: t.Optional[int]
):
self.__path = path
self.__line = line
self.__column = column
self.__level = level
self.__code = code
self.__message = message
self.confidence = confidence
@property
def path(self): # type: () -> str
"""Return the path."""
return self.__path
@property
def line(self): # type: () -> int
"""Return the line number, or 0 if none is available."""
return self.__line
@property
def column(self): # type: () -> int
"""Return the column number, or 0 if none is available."""
return self.__column
@property
def level(self): # type: () -> str
"""Return the level."""
return self.__level
@property
def code(self): # type: () -> t.Optional[str]
"""Return the code, if any."""
return self.__code
@property
def message(self): # type: () -> str
"""Return the message."""
return self.__message
@property
def tuple(self): # type: () -> t.Tuple[str, int, int, str, t.Optional[str], str]
"""Return a tuple with all the immutable values of this test message."""
return self.__path, self.__line, self.__column, self.__level, self.__code, self.__message
def __lt__(self, other):
return self.tuple < other.tuple
def __le__(self, other):
return self.tuple <= other.tuple
def __eq__(self, other):
return self.tuple == other.tuple
def __ne__(self, other):
return self.tuple != other.tuple
def __gt__(self, other):
return self.tuple > other.tuple
def __ge__(self, other):
return self.tuple >= other.tuple
def __hash__(self):
return hash(self.tuple)
def __str__(self):
return self.format()
def format(self, show_confidence=False): # type: (bool) -> str
"""Return a string representation of this message, optionally including the confidence level."""
if self.__code:
msg = '%s: %s' % (self.__code, self.__message)
else:
msg = self.__message
if show_confidence and self.confidence is not None:
msg += ' (%d%%)' % self.confidence
return '%s:%s:%s: %s' % (self.__path, self.__line, self.__column, msg)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,446 |
ansible-test sanity linking incorrect documentation
|
### Summary
When running ansible-test over a collection, if there are errors or failures, it links you to documentation to help resolve the issue. This functionality is currently broken: it links to `docs.ansible.com/ansible/` instead of `docs.ansible.com/ansible-core/`, which leads to 404s.
### Issue Type
Bug Report
### Component Name
ansible-test
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.2]
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
Fedora 34
### Steps to Reproduce
```yaml
ansible-test sanity --docker
```
### Expected Results
Expect the docs link to navigate to the correct documentation.
### Actual Results
```console
leads to documentation that 404s. Ex: https://docs.ansible.com/ansible/2.12/dev_guide/testing/sanity/pylint.html
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76446
|
https://github.com/ansible/ansible/pull/76514
|
19a58859d6ed96f9b68db94094795b532edb350c
|
16cdac66fe01a2492e6737849d0fb81e812ee96b
| 2021-12-02T21:08:12Z |
python
| 2021-12-09T16:58:14Z |
test/lib/ansible_test/_util/controller/sanity/code-smell/runtime-metadata.py
|
"""Schema validation of ansible-core's ansible_builtin_runtime.yml and collection's meta/runtime.yml"""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import datetime
import os
import re
import sys
from functools import partial
import yaml
from voluptuous import All, Any, MultipleInvalid, PREVENT_EXTRA
from voluptuous import Required, Schema, Invalid
from voluptuous.humanize import humanize_error
from ansible.module_utils.compat.version import StrictVersion, LooseVersion
from ansible.module_utils.six import string_types
from ansible.utils.version import SemanticVersion
def isodate(value, check_deprecation_date=False, is_tombstone=False):
"""Validate a datetime.date or ISO 8601 date string."""
# datetime.date objects come from YAML dates, these are ok
if isinstance(value, datetime.date):
removal_date = value
else:
# make sure we have a string
msg = 'Expected ISO 8601 date string (YYYY-MM-DD), or YAML date'
if not isinstance(value, string_types):
raise Invalid(msg)
# From Python 3.7 on, there is datetime.date.fromisoformat(). For older versions,
# we have to do things manually.
if not re.match('^[0-9]{4}-[0-9]{2}-[0-9]{2}$', value):
raise Invalid(msg)
try:
removal_date = datetime.datetime.strptime(value, '%Y-%m-%d').date()
except ValueError:
raise Invalid(msg)
# Make sure date is correct
today = datetime.date.today()
if is_tombstone:
# For a tombstone, the removal date must be in the past
if today < removal_date:
raise Invalid(
'The tombstone removal_date (%s) must not be after today (%s)' % (removal_date, today))
else:
# For a deprecation, the removal date must be in the future. Only test this if
# check_deprecation_date is truish, to avoid checks to suddenly start to fail.
if check_deprecation_date and today > removal_date:
raise Invalid(
'The deprecation removal_date (%s) must be after today (%s)' % (removal_date, today))
return value
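# Example usage (illustrative, not part of the original script): isodate()
# returns the value unchanged when validation passes and raises
# voluptuous.Invalid otherwise, e.g.
#   isodate('2023-01-01')                 # ok: ISO 8601 date string
#   isodate(datetime.date(2023, 1, 1))    # ok: YAML date object
#   isodate('01/01/2023')                 # raises Invalid (wrong format)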
def removal_version(value, is_ansible, current_version=None, is_tombstone=False):
"""Validate a removal version string."""
msg = (
'Removal version must be a string' if is_ansible else
'Removal version must be a semantic version (https://semver.org/)'
)
if not isinstance(value, string_types):
raise Invalid(msg)
try:
if is_ansible:
version = StrictVersion()
version.parse(value)
version = LooseVersion(value) # We're storing Ansible's version as a LooseVersion
else:
version = SemanticVersion()
version.parse(value)
if version.major != 0 and (version.minor != 0 or version.patch != 0):
raise Invalid('removal_version (%r) must be a major release, not a minor or patch release '
'(see specification at https://semver.org/)' % (value, ))
if current_version is not None:
if is_tombstone:
# For a tombstone, the removal version must not be in the future
if version > current_version:
raise Invalid('The tombstone removal_version (%r) must not be after the '
'current version (%s)' % (value, current_version))
else:
# For a deprecation, the removal version must be in the future
if version <= current_version:
raise Invalid('The deprecation removal_version (%r) must be after the '
'current version (%s)' % (value, current_version))
except ValueError:
raise Invalid(msg)
return value
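# Example behaviour (illustrative, not part of the original script):
#   removal_version('4.0.0', is_ansible=False)  # ok: major semver release
#   removal_version('4.1.0', is_ansible=False)  # Invalid: minor/patch release
#   removal_version('2.14', is_ansible=True)    # ok: parsed as an Ansible version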
def any_value(value):
"""Accepts anything."""
return value
def get_ansible_version():
"""Return current ansible-core version"""
from ansible.release import __version__
return LooseVersion('.'.join(__version__.split('.')[:3]))
def get_collection_version():
"""Return current collection version, or None if it is not available"""
import importlib.util
collection_detail_path = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(__file__))), 'tools', 'collection_detail.py')
collection_detail_spec = importlib.util.spec_from_file_location('collection_detail', collection_detail_path)
collection_detail = importlib.util.module_from_spec(collection_detail_spec)
sys.modules['collection_detail'] = collection_detail
collection_detail_spec.loader.exec_module(collection_detail)
# noinspection PyBroadException
try:
result = collection_detail.read_manifest_json('.') or collection_detail.read_galaxy_yml('.')
return SemanticVersion(result['version'])
except Exception: # pylint: disable=broad-except
# We do not care why it fails; if we cannot get the version,
# just return None to indicate "we don't know".
return None
def validate_metadata_file(path, is_ansible, check_deprecation_dates=False):
"""Validate explicit runtime metadata file"""
try:
with open(path, 'r') as f_path:
routing = yaml.safe_load(f_path)
except yaml.error.MarkedYAMLError as ex:
print('%s:%d:%d: YAML load failed: %s' % (path, ex.context_mark.line +
1, ex.context_mark.column + 1, re.sub(r'\s+', ' ', str(ex))))
return
except Exception as ex: # pylint: disable=broad-except
print('%s:%d:%d: YAML load failed: %s' %
(path, 0, 0, re.sub(r'\s+', ' ', str(ex))))
return
if is_ansible:
current_version = get_ansible_version()
else:
current_version = get_collection_version()
# Updates to schema MUST also be reflected in the documentation
# https://docs.ansible.com/ansible/devel/dev_guide/developing_collections.html
# plugin_routing schema
avoid_additional_data = Schema(
Any(
{
Required('removal_version'): any_value,
'warning_text': any_value,
},
{
Required('removal_date'): any_value,
'warning_text': any_value,
}
),
extra=PREVENT_EXTRA
)
deprecation_schema = All(
# The first schema validates the input, and the second makes sure no extra keys are specified
Schema(
{
'removal_version': partial(removal_version, is_ansible=is_ansible,
current_version=current_version),
'removal_date': partial(isodate, check_deprecation_date=check_deprecation_dates),
'warning_text': Any(*string_types),
}
),
avoid_additional_data
)
tombstoning_schema = All(
# The first schema validates the input, and the second makes sure no extra keys are specified
Schema(
{
'removal_version': partial(removal_version, is_ansible=is_ansible,
current_version=current_version, is_tombstone=True),
'removal_date': partial(isodate, is_tombstone=True),
'warning_text': Any(*string_types),
}
),
avoid_additional_data
)
plugin_routing_schema = Any(
Schema({
('deprecation'): Any(deprecation_schema),
('tombstone'): Any(tombstoning_schema),
('redirect'): Any(*string_types),
}, extra=PREVENT_EXTRA),
)
list_dict_plugin_routing_schema = [{str_type: plugin_routing_schema}
for str_type in string_types]
plugin_schema = Schema({
('action'): Any(None, *list_dict_plugin_routing_schema),
('become'): Any(None, *list_dict_plugin_routing_schema),
('cache'): Any(None, *list_dict_plugin_routing_schema),
('callback'): Any(None, *list_dict_plugin_routing_schema),
('cliconf'): Any(None, *list_dict_plugin_routing_schema),
('connection'): Any(None, *list_dict_plugin_routing_schema),
('doc_fragments'): Any(None, *list_dict_plugin_routing_schema),
('filter'): Any(None, *list_dict_plugin_routing_schema),
('httpapi'): Any(None, *list_dict_plugin_routing_schema),
('inventory'): Any(None, *list_dict_plugin_routing_schema),
('lookup'): Any(None, *list_dict_plugin_routing_schema),
('module_utils'): Any(None, *list_dict_plugin_routing_schema),
('modules'): Any(None, *list_dict_plugin_routing_schema),
('netconf'): Any(None, *list_dict_plugin_routing_schema),
('shell'): Any(None, *list_dict_plugin_routing_schema),
('strategy'): Any(None, *list_dict_plugin_routing_schema),
('terminal'): Any(None, *list_dict_plugin_routing_schema),
('test'): Any(None, *list_dict_plugin_routing_schema),
('vars'): Any(None, *list_dict_plugin_routing_schema),
}, extra=PREVENT_EXTRA)
# import_redirection schema
import_redirection_schema = Any(
Schema({
('redirect'): Any(*string_types),
# import_redirect doesn't currently support deprecation
}, extra=PREVENT_EXTRA)
)
list_dict_import_redirection_schema = [{str_type: import_redirection_schema}
for str_type in string_types]
# top level schema
schema = Schema({
# All of these are optional
('plugin_routing'): Any(plugin_schema),
('import_redirection'): Any(None, *list_dict_import_redirection_schema),
# requires_ansible: In the future we should validate this with SpecifierSet
('requires_ansible'): Any(*string_types),
('action_groups'): dict,
}, extra=PREVENT_EXTRA)
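# For orientation (illustrative, not part of the original script): a
# collection meta/runtime.yml accepted by this schema could look like
#   requires_ansible: '>=2.11'
#   plugin_routing:
#     modules:
#       old_module:
#         deprecation:
#           removal_version: 3.0.0
#           warning_text: Use new_module instead.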
# Ensure schema is valid
try:
schema(routing)
except MultipleInvalid as ex:
for error in ex.errors:
# No way to get line/column numbers
print('%s:%d:%d: %s' % (path, 0, 0, humanize_error(routing, error)))
def main():
"""Validate runtime metadata"""
paths = sys.argv[1:] or sys.stdin.read().splitlines()
collection_legacy_file = 'meta/routing.yml'
collection_runtime_file = 'meta/runtime.yml'
# This is currently disabled, because if it is enabled this test can start failing
# at a random date. For this to be properly activated, we (a) need to be able to return
# codes for this test, and (b) make this error optional.
check_deprecation_dates = False
for path in paths:
if path == collection_legacy_file:
print('%s:%d:%d: %s' % (path, 0, 0, ("Should be called '%s'" % collection_runtime_file)))
continue
validate_metadata_file(
path,
is_ansible=path not in (collection_legacy_file, collection_runtime_file),
check_deprecation_dates=check_deprecation_dates)
if __name__ == '__main__':
main()
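# Usage (illustrative): file paths come from the CLI arguments or stdin, e.g.
#   python runtime-metadata.py meta/runtime.yml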
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,446 |
ansible-test sanity linking incorrect documentation
|
### Summary
When running ansible-test over a collection, any errors or failures are reported together with a link to documentation that helps resolve the issue. This functionality is currently broken: the link points to `docs.ansible.com/ansible/` instead of `docs.ansible.com/ansible-core/`, which leads to 404s.
### Issue Type
Bug Report
### Component Name
ansible-test
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.2]
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
Fedora 34
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
ansible-test sanity --docker
```
### Expected Results
Expect the docs link to navigate to the correct documentation.
### Actual Results
```console
leads to documentation that 404s. Ex: https://docs.ansible.com/ansible/2.12/dev_guide/testing/sanity/pylint.html
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76446
|
https://github.com/ansible/ansible/pull/76514
|
19a58859d6ed96f9b68db94094795b532edb350c
|
16cdac66fe01a2492e6737849d0fb81e812ee96b
| 2021-12-02T21:08:12Z |
python
| 2021-12-09T16:58:14Z |
test/lib/ansible_test/_util/controller/sanity/validate-modules/validate_modules/main.py
|
# -*- coding: utf-8 -*-
#
# Copyright (C) 2015 Matt Martz <[email protected]>
# Copyright (C) 2015 Rackspace US, Inc.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import abc
import argparse
import ast
import datetime
import json
import errno
import os
import re
import subprocess
import sys
import tempfile
import traceback
from collections import OrderedDict
from contextlib import contextmanager
from ansible.module_utils.compat.version import StrictVersion, LooseVersion
from fnmatch import fnmatch
import yaml
from ansible import __version__ as ansible_version
from ansible.executor.module_common import REPLACER_WINDOWS
from ansible.module_utils.common._collections_compat import Mapping
from ansible.module_utils.common.parameters import DEFAULT_TYPE_VALIDATORS
from ansible.plugins.loader import fragment_loader
from ansible.utils.collection_loader._collection_finder import _AnsibleCollectionFinder
from ansible.utils.plugin_docs import REJECTLIST, add_collection_to_versions_and_dates, add_fragments, get_docstring
from ansible.utils.version import SemanticVersion
from .module_args import AnsibleModuleImportError, AnsibleModuleNotInitialized, get_argument_spec
from .schema import ansible_module_kwargs_schema, doc_schema, return_schema
from .utils import CaptureStd, NoArgsAnsibleModule, compare_unordered_lists, is_empty, parse_yaml, parse_isodate
from voluptuous.humanize import humanize_error
from ansible.module_utils.six import PY3, with_metaclass, string_types
if PY3:
# Because there is no ast.TryExcept in Python 3 ast module
TRY_EXCEPT = ast.Try
# REPLACER_WINDOWS from ansible.executor.module_common is byte
# string but we need unicode for Python 3
REPLACER_WINDOWS = REPLACER_WINDOWS.decode('utf-8')
else:
TRY_EXCEPT = ast.TryExcept
REJECTLIST_DIRS = frozenset(('.git', 'test', '.github', '.idea'))
INDENT_REGEX = re.compile(r'([\t]*)')
TYPE_REGEX = re.compile(r'.*(if|or)(\s+[^"\']*|\s+)(?<!_)(?<!str\()type\([^)].*')
SYS_EXIT_REGEX = re.compile(r'[^#]*sys.exit\s*\(.*')
NO_LOG_REGEX = re.compile(r'(?:pass(?!ive)|secret|token|key)', re.I)
REJECTLIST_IMPORTS = {
'requests': {
'new_only': True,
'error': {
'code': 'use-module-utils-urls',
'msg': ('requests import found, should use '
'ansible.module_utils.urls instead')
}
},
r'boto(?:\.|$)': {
'new_only': True,
'error': {
'code': 'use-boto3',
'msg': 'boto import found, new modules should use boto3'
}
},
}
SUBPROCESS_REGEX = re.compile(r'subprocess\.Po.*')
OS_CALL_REGEX = re.compile(r'os\.call.*')
LOOSE_ANSIBLE_VERSION = LooseVersion('.'.join(ansible_version.split('.')[:3]))
def is_potential_secret_option(option_name):
if not NO_LOG_REGEX.search(option_name):
return False
# If this is a count, type, algorithm, timeout, filename, or name, it is probably not a secret
if option_name.endswith((
'_count', '_type', '_alg', '_algorithm', '_timeout', '_name', '_comment',
'_bits', '_id', '_identifier', '_period', '_file', '_filename',
)):
return False
# 'key' also matches 'publickey', which is generally not secret
if any(part in option_name for part in (
'publickey', 'public_key', 'keyusage', 'key_usage', 'keyserver', 'key_server',
'keysize', 'key_size', 'keyservice', 'key_service', 'pub_key', 'pubkey',
'keyboard', 'secretary',
)):
return False
return True
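# Examples (illustrative, not part of the original script):
#   is_potential_secret_option('api_token')    # True  -> should set no_log
#   is_potential_secret_option('token_count')  # False -> '_count' suffix
#   is_potential_secret_option('ssh_pubkey')   # False -> public key material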
def compare_dates(d1, d2):
try:
date1 = parse_isodate(d1, allow_date=True)
date2 = parse_isodate(d2, allow_date=True)
return date1 == date2
except ValueError:
# At least one of d1 and d2 cannot be parsed. Simply compare values.
return d1 == d2
class ReporterEncoder(json.JSONEncoder):
def default(self, o):
if isinstance(o, Exception):
return str(o)
return json.JSONEncoder.default(self, o)
class Reporter:
def __init__(self):
self.files = OrderedDict()
def _ensure_default_entry(self, path):
try:
self.files[path]
except KeyError:
self.files[path] = {
'errors': [],
'warnings': [],
'traces': [],
'warning_traces': []
}
def _log(self, path, code, msg, level='error', line=0, column=0):
self._ensure_default_entry(path)
lvl_dct = self.files[path]['%ss' % level]
lvl_dct.append({
'code': code,
'msg': msg,
'line': line,
'column': column
})
def error(self, *args, **kwargs):
self._log(*args, level='error', **kwargs)
def warning(self, *args, **kwargs):
self._log(*args, level='warning', **kwargs)
def trace(self, path, tracebk):
self._ensure_default_entry(path)
self.files[path]['traces'].append(tracebk)
def warning_trace(self, path, tracebk):
self._ensure_default_entry(path)
self.files[path]['warning_traces'].append(tracebk)
@staticmethod
@contextmanager
def _output_handle(output):
if output != '-':
handle = open(output, 'w+')
else:
handle = sys.stdout
yield handle
handle.flush()
handle.close()
@staticmethod
def _filter_out_ok(reports):
temp_reports = OrderedDict()
for path, report in reports.items():
if report['errors'] or report['warnings']:
temp_reports[path] = report
return temp_reports
def plain(self, warnings=False, output='-'):
"""Print out the test results in plain format
output is ignored here for now
"""
ret = []
for path, report in Reporter._filter_out_ok(self.files).items():
traces = report['traces'][:]
if warnings and report['warnings']:
traces.extend(report['warning_traces'])
for trace in traces:
print('TRACE:')
print('\n '.join((' %s' % trace).splitlines()))
for error in report['errors']:
error['path'] = path
print('%(path)s:%(line)d:%(column)d: E%(code)s %(msg)s' % error)
ret.append(1)
if warnings:
for warning in report['warnings']:
warning['path'] = path
print('%(path)s:%(line)d:%(column)d: W%(code)s %(msg)s' % warning)
return 3 if ret else 0
def json(self, warnings=False, output='-'):
"""Print out the test results in json format
warnings is not respected in this output
"""
ret = [len(r['errors']) for r in self.files.values()]
with Reporter._output_handle(output) as handle:
print(json.dumps(Reporter._filter_out_ok(self.files), indent=4, cls=ReporterEncoder), file=handle)
return 3 if sum(ret) else 0
class Validator(with_metaclass(abc.ABCMeta, object)):
"""Validator instances are intended to be run on a single object. if you
are scanning multiple objects for problems, you'll want to have a separate
Validator for each one."""
def __init__(self, reporter=None):
self.reporter = reporter
@property
@abc.abstractmethod
def object_name(self):
"""Name of the object we validated"""
pass
@property
@abc.abstractmethod
def object_path(self):
"""Path of the object we validated"""
pass
@abc.abstractmethod
def validate(self):
"""Run this method to generate the test results"""
pass
class ModuleValidator(Validator):
REJECTLIST_PATTERNS = ('.git*', '*.pyc', '*.pyo', '.*', '*.md', '*.rst', '*.txt')
REJECTLIST_FILES = frozenset(('.git', '.gitignore', '.travis.yml',
'.gitattributes', '.gitmodules', 'COPYING',
'__init__.py', 'VERSION', 'test-docs.sh'))
REJECTLIST = REJECTLIST_FILES.union(REJECTLIST['MODULE'])
PS_DOC_REJECTLIST = frozenset((
'async_status.ps1',
'slurp.ps1',
'setup.ps1'
))
# win_dsc is a dynamic arg spec, the docs won't ever match
PS_ARG_VALIDATE_REJECTLIST = frozenset(('win_dsc.ps1', ))
ACCEPTLIST_FUTURE_IMPORTS = frozenset(('absolute_import', 'division', 'print_function'))
def __init__(self, path, analyze_arg_spec=False, collection=None, collection_version=None,
base_branch=None, git_cache=None, reporter=None, routing=None):
super(ModuleValidator, self).__init__(reporter=reporter or Reporter())
self.path = path
self.basename = os.path.basename(self.path)
self.name = os.path.splitext(self.basename)[0]
self.analyze_arg_spec = analyze_arg_spec
self._Version = LooseVersion
self._StrictVersion = StrictVersion
self.collection = collection
self.collection_name = 'ansible.builtin'
if self.collection:
self._Version = SemanticVersion
self._StrictVersion = SemanticVersion
collection_namespace_path, collection_name = os.path.split(self.collection)
self.collection_name = '%s.%s' % (os.path.basename(collection_namespace_path), collection_name)
self.routing = routing
self.collection_version = None
if collection_version is not None:
self.collection_version_str = collection_version
self.collection_version = SemanticVersion(collection_version)
self.base_branch = base_branch
self.git_cache = git_cache or GitCache()
self._python_module_override = False
with open(path) as f:
self.text = f.read()
self.length = len(self.text.splitlines())
try:
self.ast = ast.parse(self.text)
except Exception:
self.ast = None
if base_branch:
self.base_module = self._get_base_file()
else:
self.base_module = None
def _create_version(self, v, collection_name=None):
if not v:
raise ValueError('Empty string is not a valid version')
if collection_name == 'ansible.builtin':
return LooseVersion(v)
if collection_name is not None:
return SemanticVersion(v)
return self._Version(v)
def _create_strict_version(self, v, collection_name=None):
if not v:
raise ValueError('Empty string is not a valid version')
if collection_name == 'ansible.builtin':
return StrictVersion(v)
if collection_name is not None:
return SemanticVersion(v)
return self._StrictVersion(v)
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
if not self.base_module:
return
try:
os.remove(self.base_module)
except Exception:
pass
@property
def object_name(self):
return self.basename
@property
def object_path(self):
return self.path
def _get_collection_meta(self):
"""Implement if we need this for version_added comparisons
"""
pass
def _python_module(self):
if self.path.endswith('.py') or self._python_module_override:
return True
return False
def _powershell_module(self):
if self.path.endswith('.ps1'):
return True
return False
def _just_docs(self):
"""Module can contain just docs and from __future__ boilerplate
"""
try:
for child in self.ast.body:
if not isinstance(child, ast.Assign):
# allowed from __future__ imports
if isinstance(child, ast.ImportFrom) and child.module == '__future__':
for future_import in child.names:
if future_import.name not in self.ACCEPTLIST_FUTURE_IMPORTS:
break
else:
continue
return False
return True
except AttributeError:
return False
def _get_base_branch_module_path(self):
"""List all paths within lib/ansible/modules to try and match a moved module"""
return self.git_cache.base_module_paths.get(self.object_name)
def _has_alias(self):
"""Return true if the module has any aliases."""
return self.object_name in self.git_cache.head_aliased_modules
def _get_base_file(self):
# In case of module moves, look for the original location
base_path = self._get_base_branch_module_path()
command = ['git', 'show', '%s:%s' % (self.base_branch, base_path or self.path)]
p = subprocess.Popen(command, stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
stdout, stderr = p.communicate()
if int(p.returncode) != 0:
return None
t = tempfile.NamedTemporaryFile(delete=False)
t.write(stdout)
t.close()
return t.name
def _is_new_module(self):
if self._has_alias():
return False
return not self.object_name.startswith('_') and bool(self.base_branch) and not bool(self.base_module)
def _check_interpreter(self, powershell=False):
if powershell:
if not self.text.startswith('#!powershell\n'):
self.reporter.error(
path=self.object_path,
code='missing-powershell-interpreter',
msg='Interpreter line is not "#!powershell"'
)
return
if not self.text.startswith('#!/usr/bin/python'):
self.reporter.error(
path=self.object_path,
code='missing-python-interpreter',
msg='Interpreter line is not "#!/usr/bin/python"',
)
def _check_type_instead_of_isinstance(self, powershell=False):
if powershell:
return
for line_no, line in enumerate(self.text.splitlines()):
typekeyword = TYPE_REGEX.match(line)
if typekeyword:
# TODO: add column
self.reporter.error(
path=self.object_path,
code='unidiomatic-typecheck',
msg=('Type comparison using type() found. '
'Use isinstance() instead'),
line=line_no + 1
)
def _check_for_sys_exit(self):
# Optimize out the happy path
if 'sys.exit' not in self.text:
return
for line_no, line in enumerate(self.text.splitlines()):
sys_exit_usage = SYS_EXIT_REGEX.match(line)
if sys_exit_usage:
# TODO: add column
self.reporter.error(
path=self.object_path,
code='use-fail-json-not-sys-exit',
msg='sys.exit() call found. Should be exit_json/fail_json',
line=line_no + 1
)
def _check_gpl3_header(self):
header = '\n'.join(self.text.split('\n')[:20])
if ('GNU General Public License' not in header or
('version 3' not in header and 'v3.0' not in header)):
self.reporter.error(
path=self.object_path,
code='missing-gplv3-license',
msg='GPLv3 license header not found in the first 20 lines of the module'
)
elif self._is_new_module():
if len([line for line in header
if 'GNU General Public License' in line]) > 1:
self.reporter.error(
path=self.object_path,
code='use-short-gplv3-license',
msg='Found old style GPLv3 license header: '
'https://docs.ansible.com/ansible/devel/dev_guide/developing_modules_documenting.html#copyright'
)
def _check_for_subprocess(self):
for child in self.ast.body:
if isinstance(child, ast.Import):
if child.names[0].name == 'subprocess':
for line_no, line in enumerate(self.text.splitlines()):
sp_match = SUBPROCESS_REGEX.search(line)
if sp_match:
self.reporter.error(
path=self.object_path,
code='use-run-command-not-popen',
msg=('subprocess.Popen call found. Should be module.run_command'),
line=(line_no + 1),
column=(sp_match.span()[0] + 1)
)
def _check_for_os_call(self):
if 'os.call' in self.text:
for line_no, line in enumerate(self.text.splitlines()):
os_call_match = OS_CALL_REGEX.search(line)
if os_call_match:
self.reporter.error(
path=self.object_path,
code='use-run-command-not-os-call',
msg=('os.call() call found. Should be module.run_command'),
line=(line_no + 1),
column=(os_call_match.span()[0] + 1)
)
def _find_rejectlist_imports(self):
for child in self.ast.body:
names = []
if isinstance(child, ast.Import):
names.extend(child.names)
elif isinstance(child, TRY_EXCEPT):
bodies = child.body
for handler in child.handlers:
bodies.extend(handler.body)
for grandchild in bodies:
if isinstance(grandchild, ast.Import):
names.extend(grandchild.names)
for name in names:
# TODO: Add line/col
for rejectlist_import, options in REJECTLIST_IMPORTS.items():
if re.search(rejectlist_import, name.name):
new_only = options['new_only']
if self._is_new_module() and new_only:
self.reporter.error(
path=self.object_path,
**options['error']
)
elif not new_only:
self.reporter.error(
path=self.object_path,
**options['error']
)
def _find_module_utils(self):
linenos = []
found_basic = False
for child in self.ast.body:
if isinstance(child, (ast.Import, ast.ImportFrom)):
names = []
try:
names.append(child.module)
if child.module.endswith('.basic'):
found_basic = True
except AttributeError:
pass
names.extend([n.name for n in child.names])
if [n for n in names if n.startswith('ansible.module_utils')]:
linenos.append(child.lineno)
for name in child.names:
if ('module_utils' in getattr(child, 'module', '') and
isinstance(name, ast.alias) and
name.name == '*'):
msg = (
'module-utils-specific-import',
('module_utils imports should import specific '
'components, not "*"')
)
if self._is_new_module():
self.reporter.error(
path=self.object_path,
code=msg[0],
msg=msg[1],
line=child.lineno
)
else:
self.reporter.warning(
path=self.object_path,
code=msg[0],
msg=msg[1],
line=child.lineno
)
if (isinstance(name, ast.alias) and
name.name == 'basic'):
found_basic = True
if not found_basic:
self.reporter.warning(
path=self.object_path,
code='missing-module-utils-basic-import',
msg='Did not find "ansible.module_utils.basic" import'
)
return linenos
def _get_first_callable(self):
linenos = []
for child in self.ast.body:
if isinstance(child, (ast.FunctionDef, ast.ClassDef)):
linenos.append(child.lineno)
return min(linenos) if linenos else None
def _find_has_import(self):
for child in self.ast.body:
found_try_except_import = False
found_has = False
if isinstance(child, TRY_EXCEPT):
bodies = child.body
for handler in child.handlers:
bodies.extend(handler.body)
for grandchild in bodies:
if isinstance(grandchild, ast.Import):
found_try_except_import = True
if isinstance(grandchild, ast.Assign):
for target in grandchild.targets:
if not isinstance(target, ast.Name):
continue
if target.id.lower().startswith('has_'):
found_has = True
if found_try_except_import and not found_has:
# TODO: Add line/col
self.reporter.warning(
path=self.object_path,
code='try-except-missing-has',
msg='Found Try/Except block without HAS_ assignment'
)
def _ensure_imports_below_docs(self, doc_info, first_callable):
try:
min_doc_line = min(
[doc_info[key]['lineno'] for key in doc_info if doc_info[key]['lineno']]
)
except ValueError:
# We can't perform this validation, as there are no DOCs provided at all
return
max_doc_line = max(
[doc_info[key]['end_lineno'] for key in doc_info if doc_info[key]['end_lineno']]
)
import_lines = []
for child in self.ast.body:
if isinstance(child, (ast.Import, ast.ImportFrom)):
if isinstance(child, ast.ImportFrom) and child.module == '__future__':
# allowed from __future__ imports
for future_import in child.names:
if future_import.name not in self.ACCEPTLIST_FUTURE_IMPORTS:
self.reporter.error(
path=self.object_path,
code='illegal-future-imports',
msg=('Only the following from __future__ imports are allowed: %s'
% ', '.join(self.ACCEPTLIST_FUTURE_IMPORTS)),
line=child.lineno
)
break
else: # for-else. If we didn't find a problem and break out of the loop, then this is a legal import
continue
import_lines.append(child.lineno)
if child.lineno < min_doc_line:
self.reporter.error(
path=self.object_path,
code='import-before-documentation',
msg=('Import found before documentation variables. '
'All imports must appear below '
'DOCUMENTATION/EXAMPLES/RETURN.'),
line=child.lineno
)
break
elif isinstance(child, TRY_EXCEPT):
bodies = child.body
for handler in child.handlers:
bodies.extend(handler.body)
for grandchild in bodies:
if isinstance(grandchild, (ast.Import, ast.ImportFrom)):
import_lines.append(grandchild.lineno)
if grandchild.lineno < min_doc_line:
self.reporter.error(
path=self.object_path,
code='import-before-documentation',
msg=('Import found before documentation '
'variables. All imports must appear below '
'DOCUMENTATION/EXAMPLES/RETURN.'),
line=child.lineno
)
break
for import_line in import_lines:
if not (max_doc_line < import_line < first_callable):
msg = (
'import-placement',
('Imports should be directly below DOCUMENTATION/EXAMPLES/'
'RETURN.')
)
if self._is_new_module():
self.reporter.error(
path=self.object_path,
code=msg[0],
msg=msg[1],
line=import_line
)
else:
self.reporter.warning(
path=self.object_path,
code=msg[0],
msg=msg[1],
line=import_line
)
def _validate_ps_replacers(self):
# loop all (for/else + error)
# get module list for each
# check "shape" of each module name
module_requires = r'(?im)^#\s*requires\s+\-module(?:s?)\s*(Ansible\.ModuleUtils\..+)'
csharp_requires = r'(?im)^#\s*ansiblerequires\s+\-csharputil\s*(Ansible\..+)'
found_requires = False
for req_stmt in re.finditer(module_requires, self.text):
found_requires = True
# this will bomb on dictionary format - "don't do that"
module_list = [x.strip() for x in req_stmt.group(1).split(',')]
if len(module_list) > 1:
self.reporter.error(
path=self.object_path,
code='multiple-utils-per-requires',
msg='Ansible.ModuleUtils requirements do not support multiple modules per statement: "%s"' % req_stmt.group(0)
)
continue
module_name = module_list[0]
if module_name.lower().endswith('.psm1'):
self.reporter.error(
path=self.object_path,
code='invalid-requires-extension',
msg='Module #Requires should not end in .psm1: "%s"' % module_name
)
for req_stmt in re.finditer(csharp_requires, self.text):
found_requires = True
# this will bomb on dictionary format - "don't do that"
module_list = [x.strip() for x in req_stmt.group(1).split(',')]
if len(module_list) > 1:
self.reporter.error(
path=self.object_path,
code='multiple-csharp-utils-per-requires',
msg='Ansible C# util requirements do not support multiple utils per statement: "%s"' % req_stmt.group(0)
)
continue
module_name = module_list[0]
if module_name.lower().endswith('.cs'):
self.reporter.error(
path=self.object_path,
code='illegal-extension-cs',
msg='Module #AnsibleRequires -CSharpUtil should not end in .cs: "%s"' % module_name
)
# also accept the legacy #POWERSHELL_COMMON replacer signal
if not found_requires and REPLACER_WINDOWS not in self.text:
self.reporter.error(
path=self.object_path,
code='missing-module-utils-import-csharp-requirements',
msg='No Ansible.ModuleUtils or C# Ansible util requirements/imports found'
)
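# Statements matched by the regexes above (illustrative examples):
#   #Requires -Module Ansible.ModuleUtils.Legacy
#   #AnsibleRequires -CSharpUtil Ansible.Basic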
def _find_ps_docs_py_file(self):
if self.object_name in self.PS_DOC_REJECTLIST:
return
py_path = self.path.replace('.ps1', '.py')
if not os.path.isfile(py_path):
self.reporter.error(
path=self.object_path,
code='missing-python-doc',
msg='Missing python documentation file'
)
return py_path
def _get_docs(self):
docs = {
'DOCUMENTATION': {
'value': None,
'lineno': 0,
'end_lineno': 0,
},
'EXAMPLES': {
'value': None,
'lineno': 0,
'end_lineno': 0,
},
'RETURN': {
'value': None,
'lineno': 0,
'end_lineno': 0,
},
}
for child in self.ast.body:
if isinstance(child, ast.Assign):
for grandchild in child.targets:
if not isinstance(grandchild, ast.Name):
continue
if grandchild.id == 'DOCUMENTATION':
docs['DOCUMENTATION']['value'] = child.value.s
docs['DOCUMENTATION']['lineno'] = child.lineno
docs['DOCUMENTATION']['end_lineno'] = (
child.lineno + len(child.value.s.splitlines())
)
elif grandchild.id == 'EXAMPLES':
docs['EXAMPLES']['value'] = child.value.s
docs['EXAMPLES']['lineno'] = child.lineno
docs['EXAMPLES']['end_lineno'] = (
child.lineno + len(child.value.s.splitlines())
)
elif grandchild.id == 'RETURN':
docs['RETURN']['value'] = child.value.s
docs['RETURN']['lineno'] = child.lineno
docs['RETURN']['end_lineno'] = (
child.lineno + len(child.value.s.splitlines())
)
return docs
def _validate_docs_schema(self, doc, schema, name, error_code):
# TODO: Add line/col
errors = []
try:
schema(doc)
except Exception as e:
for error in e.errors:
error.data = doc
errors.extend(e.errors)
for error in errors:
path = [str(p) for p in error.path]
local_error_code = getattr(error, 'ansible_error_code', error_code)
if isinstance(error.data, dict):
error_message = humanize_error(error.data, error)
else:
error_message = error
if path:
combined_path = '%s.%s' % (name, '.'.join(path))
else:
combined_path = name
self.reporter.error(
path=self.object_path,
code=local_error_code,
msg='%s: %s' % (combined_path, error_message)
)
def _validate_docs(self):
doc_info = self._get_docs()
doc = None
documentation_exists = False
examples_exist = False
returns_exist = False
# We have three ways of marking deprecated/removed files. Have to check each one
# individually and then make sure they all agree
filename_deprecated_or_removed = False
deprecated = False
removed = False
doc_deprecated = None # doc legally might not exist
routing_says_deprecated = False
if self.object_name.startswith('_') and not os.path.islink(self.object_path):
filename_deprecated_or_removed = True
# We are testing a collection
if self.routing:
routing_deprecation = self.routing.get('plugin_routing', {}).get('modules', {}).get(self.name, {}).get('deprecation', {})
if routing_deprecation:
# meta/runtime.yml says this is deprecated
routing_says_deprecated = True
deprecated = True
if not removed:
if not bool(doc_info['DOCUMENTATION']['value']):
self.reporter.error(
path=self.object_path,
code='missing-documentation',
msg='No DOCUMENTATION provided'
)
else:
documentation_exists = True
doc, errors, traces = parse_yaml(
doc_info['DOCUMENTATION']['value'],
doc_info['DOCUMENTATION']['lineno'],
self.name, 'DOCUMENTATION'
)
if doc:
add_collection_to_versions_and_dates(doc, self.collection_name, is_module=True)
for error in errors:
self.reporter.error(
path=self.object_path,
code='documentation-syntax-error',
**error
)
for trace in traces:
self.reporter.trace(
path=self.object_path,
tracebk=trace
)
if not errors and not traces:
missing_fragment = False
with CaptureStd():
try:
get_docstring(self.path, fragment_loader, verbose=True,
collection_name=self.collection_name, is_module=True)
except AssertionError:
fragment = doc['extends_documentation_fragment']
self.reporter.error(
path=self.object_path,
code='missing-doc-fragment',
msg='DOCUMENTATION fragment missing: %s' % fragment
)
missing_fragment = True
except Exception as e:
self.reporter.trace(
path=self.object_path,
tracebk=traceback.format_exc()
)
self.reporter.error(
path=self.object_path,
code='documentation-error',
msg='Unknown DOCUMENTATION error, see TRACE: %s' % e
)
if not missing_fragment:
add_fragments(doc, self.object_path, fragment_loader=fragment_loader, is_module=True)
if 'options' in doc and doc['options'] is None:
self.reporter.error(
path=self.object_path,
code='invalid-documentation-options',
msg='DOCUMENTATION.options must be a dictionary/hash when used',
)
if 'deprecated' in doc and doc.get('deprecated'):
doc_deprecated = True
doc_deprecation = doc['deprecated']
documentation_collection = doc_deprecation.get('removed_from_collection')
if documentation_collection != self.collection_name:
self.reporter.error(
path=self.object_path,
code='deprecation-wrong-collection',
msg='"DOCUMENTATION.deprecation.removed_from_collection must be the current collection name: %r vs. %r' % (
documentation_collection, self.collection_name)
)
else:
doc_deprecated = False
if os.path.islink(self.object_path):
# This module has an alias, which we can tell as it's a symlink
# Rather than checking for `module: $filename` we need to check against the true filename
self._validate_docs_schema(
doc,
doc_schema(
os.readlink(self.object_path).split('.')[0],
for_collection=bool(self.collection),
deprecated_module=deprecated,
),
'DOCUMENTATION',
'invalid-documentation',
)
else:
# This is the normal case
self._validate_docs_schema(
doc,
doc_schema(
self.object_name.split('.')[0],
for_collection=bool(self.collection),
deprecated_module=deprecated,
),
'DOCUMENTATION',
'invalid-documentation',
)
if not self.collection:
existing_doc = self._check_for_new_args(doc)
self._check_version_added(doc, existing_doc)
if not bool(doc_info['EXAMPLES']['value']):
self.reporter.error(
path=self.object_path,
code='missing-examples',
msg='No EXAMPLES provided'
)
else:
_doc, errors, traces = parse_yaml(doc_info['EXAMPLES']['value'],
doc_info['EXAMPLES']['lineno'],
self.name, 'EXAMPLES', load_all=True,
ansible_loader=True)
for error in errors:
self.reporter.error(
path=self.object_path,
code='invalid-examples',
**error
)
for trace in traces:
self.reporter.trace(
path=self.object_path,
tracebk=trace
)
if not bool(doc_info['RETURN']['value']):
if self._is_new_module():
self.reporter.error(
path=self.object_path,
code='missing-return',
msg='No RETURN provided'
)
else:
self.reporter.warning(
path=self.object_path,
code='missing-return-legacy',
msg='No RETURN provided'
)
else:
data, errors, traces = parse_yaml(doc_info['RETURN']['value'],
doc_info['RETURN']['lineno'],
self.name, 'RETURN')
if data:
add_collection_to_versions_and_dates(data, self.collection_name, is_module=True, return_docs=True)
self._validate_docs_schema(data, return_schema(for_collection=bool(self.collection)),
'RETURN', 'return-syntax-error')
for error in errors:
self.reporter.error(
path=self.object_path,
code='return-syntax-error',
**error
)
for trace in traces:
self.reporter.trace(
path=self.object_path,
tracebk=trace
)
# Check for mismatched deprecation
if not self.collection:
mismatched_deprecation = True
if not (filename_deprecated_or_removed or removed or deprecated or doc_deprecated):
mismatched_deprecation = False
else:
if (filename_deprecated_or_removed and doc_deprecated):
mismatched_deprecation = False
if (filename_deprecated_or_removed and removed and not (documentation_exists or examples_exist or returns_exist)):
mismatched_deprecation = False
if mismatched_deprecation:
self.reporter.error(
path=self.object_path,
code='deprecation-mismatch',
msg='Module deprecation/removed must agree in documentation, by prepending filename with'
' "_", and setting DOCUMENTATION.deprecated for deprecation or by removing all'
' documentation for removed'
)
else:
# We are testing a collection
if self.object_name.startswith('_'):
self.reporter.error(
path=self.object_path,
code='collections-no-underscore-on-deprecation',
msg='Deprecated content in collections MUST NOT start with "_", update meta/runtime.yml instead',
)
if not (doc_deprecated == routing_says_deprecated):
# DOCUMENTATION.deprecated and meta/runtime.yml disagree
self.reporter.error(
path=self.object_path,
code='deprecation-mismatch',
msg='"meta/runtime.yml" and DOCUMENTATION.deprecation do not agree.'
)
elif routing_says_deprecated:
# Both DOCUMENTATION.deprecated and meta/runtime.yml agree that the module is deprecated.
# Make sure they give the same version or date.
routing_date = routing_deprecation.get('removal_date')
routing_version = routing_deprecation.get('removal_version')
# The versions and dates in the module documentation are auto-tagged, so remove the tag
# to make comparison possible and to avoid confusing the user.
documentation_date = doc_deprecation.get('removed_at_date')
documentation_version = doc_deprecation.get('removed_in')
if not compare_dates(routing_date, documentation_date):
self.reporter.error(
path=self.object_path,
code='deprecation-mismatch',
msg='"meta/runtime.yml" and DOCUMENTATION.deprecation do not agree on removal date: %r vs. %r' % (
routing_date, documentation_date)
)
if routing_version != documentation_version:
self.reporter.error(
path=self.object_path,
code='deprecation-mismatch',
msg='"meta/runtime.yml" and DOCUMENTATION.deprecation do not agree on removal version: %r vs. %r' % (
routing_version, documentation_version)
)
# In the future we should error if ANSIBLE_METADATA exists in a collection
return doc_info, doc
def _check_version_added(self, doc, existing_doc):
version_added_raw = doc.get('version_added')
try:
collection_name = doc.get('version_added_collection')
version_added = self._create_strict_version(
str(version_added_raw or '0.0'),
collection_name=collection_name)
except ValueError as e:
version_added = version_added_raw or '0.0'
if self._is_new_module() or version_added != 'historical':
# already reported during schema validation, except:
if version_added == 'historical':
self.reporter.error(
path=self.object_path,
code='module-invalid-version-added',
msg='version_added is not a valid version number: %r. Error: %s' % (version_added, e)
)
return
if existing_doc and str(version_added_raw) != str(existing_doc.get('version_added')):
self.reporter.error(
path=self.object_path,
code='module-incorrect-version-added',
msg='version_added should be %r. Currently %r' % (existing_doc.get('version_added'), version_added_raw)
)
if not self._is_new_module():
return
should_be = '.'.join(ansible_version.split('.')[:2])
strict_ansible_version = self._create_strict_version(should_be, collection_name='ansible.builtin')
if (version_added < strict_ansible_version or
strict_ansible_version < version_added):
self.reporter.error(
path=self.object_path,
code='module-incorrect-version-added',
msg='version_added should be %r. Currently %r' % (should_be, version_added_raw)
)
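# Illustrative note (not part of the original script): for a brand-new module
# in ansible-core 2.13.x, only version_added: '2.13' passes; '2.12' or '2.14'
# are reported as module-incorrect-version-added, since should_be is the
# current major.minor.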
def _validate_ansible_module_call(self, docs):
try:
spec, kwargs = get_argument_spec(self.path, self.collection)
except AnsibleModuleNotInitialized:
self.reporter.error(
path=self.object_path,
code='ansible-module-not-initialized',
msg="Execution of the module did not result in initialization of AnsibleModule",
)
return
except AnsibleModuleImportError as e:
self.reporter.error(
path=self.object_path,
code='import-error',
msg="Exception attempting to import module for argument_spec introspection, '%s'" % e
)
self.reporter.trace(
path=self.object_path,
tracebk=traceback.format_exc()
)
return
schema = ansible_module_kwargs_schema(self.object_name.split('.')[0], for_collection=bool(self.collection))
self._validate_docs_schema(kwargs, schema, 'AnsibleModule', 'invalid-ansiblemodule-schema')
self._validate_argument_spec(docs, spec, kwargs)
def _validate_list_of_module_args(self, name, terms, spec, context):
if terms is None:
return
if not isinstance(terms, (list, tuple)):
# This is already reported by schema checking
return
for check in terms:
if not isinstance(check, (list, tuple)):
# This is already reported by schema checking
continue
bad_term = False
for term in check:
if not isinstance(term, string_types):
msg = name
if context:
msg += " found in %s" % " -> ".join(context)
msg += " must contain strings in the lists or tuples; found value %r" % (term, )
self.reporter.error(
path=self.object_path,
code=name + '-type',
msg=msg,
)
bad_term = True
if bad_term:
continue
if len(set(check)) != len(check):
msg = name
if context:
msg += " found in %s" % " -> ".join(context)
msg += " has repeated terms"
self.reporter.error(
path=self.object_path,
code=name + '-collision',
msg=msg,
)
if not set(check) <= set(spec):
msg = name
if context:
msg += " found in %s" % " -> ".join(context)
msg += " contains terms which are not part of argument_spec: %s" % ", ".join(sorted(set(check).difference(set(spec))))
self.reporter.error(
path=self.object_path,
code=name + '-unknown',
msg=msg,
)
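# Illustrative inputs for the checks above, with spec = {'path': {}, 'content': {}}:
#   [['path', 'content']]  # ok: distinct names, all present in argument_spec
#   [['path', 'path']]     # reported as <name>-collision (repeated terms)
#   [['path', 'src']]      # reported as <name>-unknown ('src' not in spec)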
def _validate_required_if(self, terms, spec, context, module):
if terms is None:
return
if not isinstance(terms, (list, tuple)):
# This is already reported by schema checking
return
for check in terms:
if not isinstance(check, (list, tuple)) or len(check) not in [3, 4]:
# This is already reported by schema checking
continue
if len(check) == 4 and not isinstance(check[3], bool):
msg = "required_if"
if context:
msg += " found in %s" % " -> ".join(context)
msg += " must have forth value omitted or of type bool; got %r" % (check[3], )
self.reporter.error(
path=self.object_path,
code='required_if-is_one_of-type',
msg=msg,
)
requirements = check[2]
if not isinstance(requirements, (list, tuple)):
msg = "required_if"
if context:
msg += " found in %s" % " -> ".join(context)
msg += " must have third value (requirements) being a list or tuple; got type %r" % (requirements, )
self.reporter.error(
path=self.object_path,
code='required_if-requirements-type',
msg=msg,
)
continue
bad_term = False
for term in requirements:
if not isinstance(term, string_types):
msg = "required_if"
if context:
msg += " found in %s" % " -> ".join(context)
msg += " must have only strings in third value (requirements); got %r" % (term, )
self.reporter.error(
path=self.object_path,
code='required_if-requirements-type',
msg=msg,
)
bad_term = True
if bad_term:
continue
if len(set(requirements)) != len(requirements):
msg = "required_if"
if context:
msg += " found in %s" % " -> ".join(context)
msg += " has repeated terms in requirements"
self.reporter.error(
path=self.object_path,
code='required_if-requirements-collision',
msg=msg,
)
if not set(requirements) <= set(spec):
msg = "required_if"
if context:
msg += " found in %s" % " -> ".join(context)
msg += " contains terms in requirements which are not part of argument_spec: %s" % ", ".join(sorted(set(requirements).difference(set(spec))))
self.reporter.error(
path=self.object_path,
code='required_if-requirements-unknown',
msg=msg,
)
key = check[0]
if key not in spec:
msg = "required_if"
if context:
msg += " found in %s" % " -> ".join(context)
msg += " must have its key %s in argument_spec" % key
self.reporter.error(
path=self.object_path,
code='required_if-unknown-key',
msg=msg,
)
continue
if key in requirements:
msg = "required_if"
if context:
msg += " found in %s" % " -> ".join(context)
msg += " contains its key %s in requirements" % key
self.reporter.error(
path=self.object_path,
code='required_if-key-in-requirements',
msg=msg,
)
value = check[1]
if value is not None:
_type = spec[key].get('type', 'str')
if callable(_type):
_type_checker = _type
else:
_type_checker = DEFAULT_TYPE_VALIDATORS.get(_type)
try:
with CaptureStd():
dummy = _type_checker(value)
except (Exception, SystemExit):
msg = "required_if"
if context:
msg += " found in %s" % " -> ".join(context)
msg += " has value %r which does not fit to %s's parameter type %r" % (value, key, _type)
self.reporter.error(
path=self.object_path,
code='required_if-value-type',
msg=msg,
)
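# Illustrative required_if entries handled above:
#   ['state', 'present', ['path']]        # ok when 'state' and 'path' are in spec
#   ['state', 'present', ['path'], True]  # fourth element must be a bool
#   ['state', 'present', 'path']          # requirements must be a list/tuple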
def _validate_required_by(self, terms, spec, context):
if terms is None:
return
if not isinstance(terms, Mapping):
# This is already reported by schema checking
return
for key, value in terms.items():
if isinstance(value, string_types):
value = [value]
if not isinstance(value, (list, tuple)):
# This is already reported by schema checking
continue
for term in value:
if not isinstance(term, string_types):
# This is already reported by schema checking
continue
if len(set(value)) != len(value) or key in value:
msg = "required_by"
if context:
msg += " found in %s" % " -> ".join(context)
msg += " has repeated terms"
self.reporter.error(
path=self.object_path,
code='required_by-collision',
msg=msg,
)
if not set(value) <= set(spec) or key not in spec:
msg = "required_by"
if context:
msg += " found in %s" % " -> ".join(context)
msg += " contains terms which are not part of argument_spec: %s" % ", ".join(sorted(set(value).difference(set(spec))))
self.reporter.error(
path=self.object_path,
code='required_by-unknown',
msg=msg,
)
def _validate_argument_spec(self, docs, spec, kwargs, context=None, last_context_spec=None):
if not self.analyze_arg_spec:
return
if docs is None:
docs = {}
if context is None:
context = []
if last_context_spec is None:
last_context_spec = kwargs
try:
if not context:
add_fragments(docs, self.object_path, fragment_loader=fragment_loader, is_module=True)
except Exception:
# Cannot merge fragments
return
# Use this to access type checkers later
module = NoArgsAnsibleModule({})
self._validate_list_of_module_args('mutually_exclusive', last_context_spec.get('mutually_exclusive'), spec, context)
self._validate_list_of_module_args('required_together', last_context_spec.get('required_together'), spec, context)
self._validate_list_of_module_args('required_one_of', last_context_spec.get('required_one_of'), spec, context)
self._validate_required_if(last_context_spec.get('required_if'), spec, context, module)
self._validate_required_by(last_context_spec.get('required_by'), spec, context)
provider_args = set()
args_from_argspec = set()
deprecated_args_from_argspec = set()
doc_options = docs.get('options', {})
if doc_options is None:
doc_options = {}
for arg, data in spec.items():
restricted_argument_names = ('message', 'syslog_facility')
if arg.lower() in restricted_argument_names:
msg = "Argument '%s' in argument_spec " % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += "must not be one of %s as it is used " \
"internally by Ansible Core Engine" % (",".join(restricted_argument_names))
self.reporter.error(
path=self.object_path,
code='invalid-argument-name',
msg=msg,
)
continue
if 'aliases' in data:
for al in data['aliases']:
if al.lower() in restricted_argument_names:
msg = "Argument alias '%s' in argument_spec " % al
if context:
msg += " found in %s" % " -> ".join(context)
msg += "must not be one of %s as it is used " \
"internally by Ansible Core Engine" % (",".join(restricted_argument_names))
self.reporter.error(
path=self.object_path,
code='invalid-argument-name',
msg=msg,
)
continue
# Could this be a place where secrets are leaked?
# If it is type: path we know it's not a secret key as it's a file path.
# If it is type: bool it is more likely a flag indicating that something is secret, than an actual secret.
if all((
data.get('no_log') is None, is_potential_secret_option(arg),
data.get('type') not in ("path", "bool"), data.get('choices') is None,
)):
msg = "Argument '%s' in argument_spec could be a secret, though doesn't have `no_log` set" % arg
if context:
msg += " found in %s" % " -> ".join(context)
self.reporter.error(
path=self.object_path,
code='no-log-needed',
msg=msg,
)
if not isinstance(data, dict):
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " must be a dictionary/hash when used"
self.reporter.error(
path=self.object_path,
code='invalid-argument-spec',
msg=msg,
)
continue
removed_at_date = data.get('removed_at_date', None)
if removed_at_date is not None:
try:
if parse_isodate(removed_at_date, allow_date=False) < datetime.date.today():
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " has a removed_at_date '%s' before today" % removed_at_date
self.reporter.error(
path=self.object_path,
code='deprecated-date',
msg=msg,
)
except ValueError:
# This should only happen when removed_at_date is not in ISO format. Since schema
# validation already reported this as an error, don't report it a second time.
pass
deprecated_aliases = data.get('deprecated_aliases', None)
if deprecated_aliases is not None:
for deprecated_alias in deprecated_aliases:
if 'name' in deprecated_alias and 'date' in deprecated_alias:
try:
date = deprecated_alias['date']
if parse_isodate(date, allow_date=False) < datetime.date.today():
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " has deprecated aliases '%s' with removal date '%s' before today" % (
deprecated_alias['name'], deprecated_alias['date'])
self.reporter.error(
path=self.object_path,
code='deprecated-date',
msg=msg,
)
except ValueError:
# This should only happen when deprecated_alias['date'] is not in ISO format. Since
# schema validation already reported this as an error, don't report it a second
# time.
pass
has_version = False
if self.collection and self.collection_version is not None:
compare_version = self.collection_version
version_of_what = "this collection (%s)" % self.collection_version_str
code_prefix = 'collection'
has_version = True
elif not self.collection:
compare_version = LOOSE_ANSIBLE_VERSION
version_of_what = "Ansible (%s)" % ansible_version
code_prefix = 'ansible'
has_version = True
removed_in_version = data.get('removed_in_version', None)
if removed_in_version is not None:
try:
collection_name = data.get('removed_from_collection')
removed_in = self._create_version(str(removed_in_version), collection_name=collection_name)
if has_version and collection_name == self.collection_name and compare_version >= removed_in:
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " has a deprecated removed_in_version %r," % removed_in_version
msg += " i.e. the version is less than or equal to the current version of %s" % version_of_what
self.reporter.error(
path=self.object_path,
code=code_prefix + '-deprecated-version',
msg=msg,
)
except ValueError as e:
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " has an invalid removed_in_version number %r: %s" % (removed_in_version, e)
self.reporter.error(
path=self.object_path,
code='invalid-deprecated-version',
msg=msg,
)
except TypeError:
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " has an invalid removed_in_version number %r: " % (removed_in_version, )
msg += " error while comparing to version of %s" % version_of_what
self.reporter.error(
path=self.object_path,
code='invalid-deprecated-version',
msg=msg,
)
if deprecated_aliases is not None:
for deprecated_alias in deprecated_aliases:
if 'name' in deprecated_alias and 'version' in deprecated_alias:
try:
collection_name = deprecated_alias.get('collection_name')
version = self._create_version(str(deprecated_alias['version']), collection_name=collection_name)
if has_version and collection_name == self.collection_name and compare_version >= version:
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " has deprecated aliases '%s' with removal in version %r," % (
deprecated_alias['name'], deprecated_alias['version'])
msg += " i.e. the version is less than or equal to the current version of %s" % version_of_what
self.reporter.error(
path=self.object_path,
code=code_prefix + '-deprecated-version',
msg=msg,
)
except ValueError as e:
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " has deprecated aliases '%s' with invalid removal version %r: %s" % (
deprecated_alias['name'], deprecated_alias['version'], e)
self.reporter.error(
path=self.object_path,
code='invalid-deprecated-version',
msg=msg,
)
except TypeError:
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " has deprecated aliases '%s' with invalid removal version %r:" % (
deprecated_alias['name'], deprecated_alias['version'])
msg += " error while comparing to version of %s" % version_of_what
self.reporter.error(
path=self.object_path,
code='invalid-deprecated-version',
msg=msg,
)
aliases = data.get('aliases', [])
if arg in aliases:
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " is specified as its own alias"
self.reporter.error(
path=self.object_path,
code='parameter-alias-self',
msg=msg
)
if len(aliases) > len(set(aliases)):
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " has at least one alias specified multiple times in aliases"
self.reporter.error(
path=self.object_path,
code='parameter-alias-repeated',
msg=msg
)
if not context and arg == 'state':
bad_states = set(['list', 'info', 'get']) & set(data.get('choices', set()))
for bad_state in bad_states:
self.reporter.error(
path=self.object_path,
code='parameter-state-invalid-choice',
msg="Argument 'state' includes the value '%s' as a choice" % bad_state)
if not data.get('removed_in_version', None) and not data.get('removed_at_date', None):
args_from_argspec.add(arg)
args_from_argspec.update(aliases)
else:
deprecated_args_from_argspec.add(arg)
deprecated_args_from_argspec.update(aliases)
if arg == 'provider' and self.object_path.startswith('lib/ansible/modules/network/'):
if data.get('options') is not None and not isinstance(data.get('options'), Mapping):
self.reporter.error(
path=self.object_path,
code='invalid-argument-spec-options',
msg="Argument 'options' in argument_spec['provider'] must be a dictionary/hash when used",
)
elif data.get('options'):
# Record provider options from network modules, for later comparison
for provider_arg, provider_data in data.get('options', {}).items():
provider_args.add(provider_arg)
provider_args.update(provider_data.get('aliases', []))
if data.get('required') and data.get('default', object) != object:
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " is marked as required but specifies a default. Arguments with a" \
" default should not be marked as required"
self.reporter.error(
path=self.object_path,
code='no-default-for-required-parameter',
msg=msg
)
if arg in provider_args:
# Provider args are being removed from network module top level
# don't validate docs<->arg_spec checks below
continue
_type = data.get('type', 'str')
if callable(_type):
_type_checker = _type
else:
_type_checker = DEFAULT_TYPE_VALIDATORS.get(_type)
_elements = data.get('elements')
if (_type == 'list') and not _elements:
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " defines type as list but elements is not defined"
self.reporter.error(
path=self.object_path,
code='parameter-list-no-elements',
msg=msg
)
if _elements:
if not callable(_elements):
DEFAULT_TYPE_VALIDATORS.get(_elements)
if _type != 'list':
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " defines elements as %s but it is valid only when value of parameter type is list" % _elements
self.reporter.error(
path=self.object_path,
code='parameter-invalid-elements',
msg=msg
)
arg_default = None
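# Coerce the declared default through the type checker; an exception here means
# the default is incompatible with the declared type.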
if 'default' in data and not is_empty(data['default']):
try:
with CaptureStd():
arg_default = _type_checker(data['default'])
except (Exception, SystemExit):
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " defines default as (%r) but this is incompatible with parameter type %r" % (data['default'], _type)
self.reporter.error(
path=self.object_path,
code='incompatible-default-type',
msg=msg
)
continue
doc_options_args = []
for alias in sorted(set([arg] + list(aliases))):
if alias in doc_options:
doc_options_args.append(alias)
if len(doc_options_args) == 0:
# Undocumented arguments will be handled later (search for undocumented-parameter)
doc_options_arg = {}
else:
doc_options_arg = doc_options[doc_options_args[0]]
if len(doc_options_args) > 1:
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " with aliases %s is documented multiple times, namely as %s" % (
", ".join([("'%s'" % alias) for alias in aliases]),
", ".join([("'%s'" % alias) for alias in doc_options_args])
)
self.reporter.error(
path=self.object_path,
code='parameter-documented-multiple-times',
msg=msg
)
try:
doc_default = None
if 'default' in doc_options_arg and not is_empty(doc_options_arg['default']):
with CaptureStd():
doc_default = _type_checker(doc_options_arg['default'])
except (Exception, SystemExit):
msg = "Argument '%s' in documentation" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " defines default as (%r) but this is incompatible with parameter type %r" % (doc_options_arg.get('default'), _type)
self.reporter.error(
path=self.object_path,
code='doc-default-incompatible-type',
msg=msg
)
continue
if arg_default != doc_default:
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " defines default as (%r) but documentation defines default as (%r)" % (arg_default, doc_default)
self.reporter.error(
path=self.object_path,
code='doc-default-does-not-match-spec',
msg=msg
)
doc_type = doc_options_arg.get('type')
if 'type' in data and data['type'] is not None:
if doc_type is None:
if not arg.startswith('_'): # hidden parameter, for example _raw_params
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " defines type as %r but documentation doesn't define type" % (data['type'])
self.reporter.error(
path=self.object_path,
code='parameter-type-not-in-doc',
msg=msg
)
elif data['type'] != doc_type:
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " defines type as %r but documentation defines type as %r" % (data['type'], doc_type)
self.reporter.error(
path=self.object_path,
code='doc-type-does-not-match-spec',
msg=msg
)
else:
if doc_type is None:
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " uses default type ('str') but documentation doesn't define type"
self.reporter.error(
path=self.object_path,
code='doc-missing-type',
msg=msg
)
elif doc_type != 'str':
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " implies type as 'str' but documentation defines as %r" % doc_type
self.reporter.error(
path=self.object_path,
code='implied-parameter-type-mismatch',
msg=msg
)
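# Coerce both the documented and the spec'd choices through the type checker so
# the two lists can be compared like for like below.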
doc_choices = []
try:
for choice in doc_options_arg.get('choices', []):
try:
with CaptureStd():
doc_choices.append(_type_checker(choice))
except (Exception, SystemExit):
msg = "Argument '%s' in documentation" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " defines choices as (%r) but this is incompatible with argument type %r" % (choice, _type)
self.reporter.error(
path=self.object_path,
code='doc-choices-incompatible-type',
msg=msg
)
raise StopIteration()
except StopIteration:
continue
arg_choices = []
try:
for choice in data.get('choices', []):
try:
with CaptureStd():
arg_choices.append(_type_checker(choice))
except (Exception, SystemExit):
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " defines choices as (%r) but this is incompatible with argument type %r" % (choice, _type)
self.reporter.error(
path=self.object_path,
code='incompatible-choices',
msg=msg
)
raise StopIteration()
except StopIteration:
continue
if not compare_unordered_lists(arg_choices, doc_choices):
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " defines choices as (%r) but documentation defines choices as (%r)" % (arg_choices, doc_choices)
self.reporter.error(
path=self.object_path,
code='doc-choices-do-not-match-spec',
msg=msg
)
doc_required = doc_options_arg.get('required', False)
data_required = data.get('required', False)
if (doc_required or data_required) and not (doc_required and data_required):
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
if doc_required:
msg += " is not required, but is documented as being required"
else:
msg += " is required, but is not documented as being required"
self.reporter.error(
path=self.object_path,
code='doc-required-mismatch',
msg=msg
)
doc_elements = doc_options_arg.get('elements', None)
doc_type = doc_options_arg.get('type', 'str')
data_elements = data.get('elements', None)
if (doc_elements and not doc_type == 'list'):
msg = "Argument '%s' " % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " defines parameter elements as %s but it is valid only when value of parameter type is list" % doc_elements
self.reporter.error(
path=self.object_path,
code='doc-elements-invalid',
msg=msg
)
if (doc_elements or data_elements) and not (doc_elements == data_elements):
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
if data_elements:
msg += " specifies elements as %s," % data_elements
else:
msg += " does not specify elements,"
if doc_elements:
msg += " but elements is documented as being %s" % doc_elements
else:
msg += " but elements is not documented"
self.reporter.error(
path=self.object_path,
code='doc-elements-mismatch',
msg=msg
)
spec_suboptions = data.get('options')
doc_suboptions = doc_options_arg.get('suboptions', {})
if spec_suboptions:
if not doc_suboptions:
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " has sub-options but documentation does not define it"
self.reporter.error(
path=self.object_path,
code='missing-suboption-docs',
msg=msg
)
self._validate_argument_spec({'options': doc_suboptions}, spec_suboptions, kwargs,
context=context + [arg], last_context_spec=data)
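# Reject accepted argument names (including aliases) that are not valid python identifiers.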
for arg in args_from_argspec:
if not str(arg).isidentifier():
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " is not a valid python identifier"
self.reporter.error(
path=self.object_path,
code='parameter-invalid',
msg=msg
)
if docs:
args_from_docs = set()
for arg, data in doc_options.items():
args_from_docs.add(arg)
args_from_docs.update(data.get('aliases', []))
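# Cross-check spec and documentation in both directions; deprecated arguments
# are exempt from the "documented but not accepted" check.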
args_missing_from_docs = args_from_argspec.difference(args_from_docs)
docs_missing_from_args = args_from_docs.difference(args_from_argspec | deprecated_args_from_argspec)
for arg in args_missing_from_docs:
if arg in provider_args:
# Provider args are being removed from network module top level
# So they are likely not documented on purpose
continue
msg = "Argument '%s'" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " is listed in the argument_spec, but not documented in the module documentation"
self.reporter.error(
path=self.object_path,
code='undocumented-parameter',
msg=msg
)
for arg in docs_missing_from_args:
msg = "Argument '%s'" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " is listed in DOCUMENTATION.options, but not accepted by the module argument_spec"
self.reporter.error(
path=self.object_path,
code='nonexistent-parameter-documented',
msg=msg
)
def _check_for_new_args(self, doc):
if not self.base_branch or self._is_new_module():
return
with CaptureStd():
try:
existing_doc, dummy_examples, dummy_return, existing_metadata = get_docstring(
self.base_module, fragment_loader, verbose=True, collection_name=self.collection_name, is_module=True)
existing_options = existing_doc.get('options', {}) or {}
except AssertionError:
fragment = doc['extends_documentation_fragment']
self.reporter.warning(
path=self.object_path,
code='missing-existing-doc-fragment',
msg='Pre-existing DOCUMENTATION fragment missing: %s' % fragment
)
return
except Exception as e:
self.reporter.warning_trace(
path=self.object_path,
tracebk=e
)
self.reporter.warning(
path=self.object_path,
code='unknown-doc-fragment',
msg=('Unknown pre-existing DOCUMENTATION error, see TRACE. Submodule refs may need to be updated')
)
return
try:
mod_collection_name = existing_doc.get('version_added_collection')
mod_version_added = self._create_strict_version(
str(existing_doc.get('version_added', '0.0')),
collection_name=mod_collection_name)
except ValueError:
mod_collection_name = self.collection_name
mod_version_added = self._create_strict_version('0.0')
options = doc.get('options', {}) or {}
should_be = '.'.join(ansible_version.split('.')[:2])
strict_ansible_version = self._create_strict_version(should_be, collection_name='ansible.builtin')
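# Pre-existing options must keep their original version_added (and collection);
# options new in this release must use the current major.minor version.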
for option, details in options.items():
try:
names = [option] + details.get('aliases', [])
except (TypeError, AttributeError):
# Reporting of this syntax error will be handled by schema validation.
continue
if any(name in existing_options for name in names):
# The option already existed. Make sure version_added didn't change.
for name in names:
existing_collection_name = existing_options.get(name, {}).get('version_added_collection')
existing_version = existing_options.get(name, {}).get('version_added')
if existing_version:
break
current_collection_name = details.get('version_added_collection')
current_version = details.get('version_added')
if current_collection_name != existing_collection_name:
self.reporter.error(
path=self.object_path,
code='option-incorrect-version-added-collection',
msg=('version_added for existing option (%s) should '
'belong to collection %r. Currently belongs to %r' %
(option, current_collection_name, existing_collection_name))
)
elif str(current_version) != str(existing_version):
self.reporter.error(
path=self.object_path,
code='option-incorrect-version-added',
msg=('version_added for existing option (%s) should '
'be %r. Currently %r' %
(option, existing_version, current_version))
)
continue
try:
collection_name = details.get('version_added_collection')
version_added = self._create_strict_version(
str(details.get('version_added', '0.0')),
collection_name=collection_name)
except ValueError as e:
# already reported during schema validation
continue
if collection_name != self.collection_name:
continue
if (strict_ansible_version != mod_version_added and
(version_added < strict_ansible_version or
strict_ansible_version < version_added)):
self.reporter.error(
path=self.object_path,
code='option-incorrect-version-added',
msg=('version_added for new option (%s) should '
'be %r. Currently %r' %
(option, should_be, version_added))
)
return existing_doc
@staticmethod
def is_on_rejectlist(path):
base_name = os.path.basename(path)
file_name = os.path.splitext(base_name)[0]
if file_name.startswith('_') and os.path.islink(path):
return True
if not frozenset((base_name, file_name)).isdisjoint(ModuleValidator.REJECTLIST):
return True
for pat in ModuleValidator.REJECTLIST_PATTERNS:
if fnmatch(base_name, pat):
return True
return False
def validate(self):
super(ModuleValidator, self).validate()
if not self._python_module() and not self._powershell_module():
self.reporter.error(
path=self.object_path,
code='invalid-extension',
msg=('Official Ansible modules must have a .py '
'extension for python modules or a .ps1 '
'for powershell modules')
)
self._python_module_override = True
if self._python_module() and self.ast is None:
self.reporter.error(
path=self.object_path,
code='python-syntax-error',
msg='Python SyntaxError while parsing module'
)
try:
compile(self.text, self.path, 'exec')
except Exception:
self.reporter.trace(
path=self.object_path,
tracebk=traceback.format_exc()
)
return
end_of_deprecation_should_be_removed_only = False
if self._python_module():
doc_info, docs = self._validate_docs()
# See if current version => deprecated.removed_in, ie, should be docs only
if docs and docs.get('deprecated', False):
if 'removed_in' in docs['deprecated']:
removed_in = None
collection_name = docs['deprecated'].get('removed_from_collection')
version = docs['deprecated']['removed_in']
if collection_name != self.collection_name:
self.reporter.error(
path=self.object_path,
code='invalid-module-deprecation-source',
msg=('The deprecation version for a module must be added in this collection')
)
else:
try:
removed_in = self._create_strict_version(str(version), collection_name=collection_name)
except ValueError as e:
self.reporter.error(
path=self.object_path,
code='invalid-module-deprecation-version',
msg=('The deprecation version %r cannot be parsed: %s' % (version, e))
)
if removed_in:
if not self.collection:
strict_ansible_version = self._create_strict_version(
'.'.join(ansible_version.split('.')[:2]), self.collection_name)
end_of_deprecation_should_be_removed_only = strict_ansible_version >= removed_in
if end_of_deprecation_should_be_removed_only:
self.reporter.error(
path=self.object_path,
code='ansible-deprecated-module',
msg='Module is marked for removal in version %s of Ansible when the current version is %s' % (
version, ansible_version),
)
elif self.collection_version:
strict_ansible_version = self.collection_version
end_of_deprecation_should_be_removed_only = strict_ansible_version >= removed_in
if end_of_deprecation_should_be_removed_only:
self.reporter.error(
path=self.object_path,
code='collection-deprecated-module',
msg='Module is marked for removal in version %s of this collection when the current version is %s' % (
version, self.collection_version_str),
)
# handle deprecation by date
if 'removed_at_date' in docs['deprecated']:
try:
removed_at_date = docs['deprecated']['removed_at_date']
if parse_isodate(removed_at_date, allow_date=True) < datetime.date.today():
msg = "Module's deprecated.removed_at_date date '%s' is before today" % removed_at_date
self.reporter.error(path=self.object_path, code='deprecated-date', msg=msg)
except ValueError:
# This happens if the date cannot be parsed. This is already checked by the schema.
pass
if self._python_module() and not self._just_docs() and not end_of_deprecation_should_be_removed_only:
self._validate_ansible_module_call(docs)
self._check_for_sys_exit()
self._find_rejectlist_imports()
self._find_module_utils()
self._find_has_import()
first_callable = self._get_first_callable() or 1000000 # use a bogus "high" line number if no callable exists
self._ensure_imports_below_docs(doc_info, first_callable)
self._check_for_subprocess()
self._check_for_os_call()
if self._powershell_module():
if self.basename in self.PS_DOC_REJECTLIST:
return
self._validate_ps_replacers()
docs_path = self._find_ps_docs_py_file()
# We can only validate PowerShell arg spec if it is using the new Ansible.Basic.AnsibleModule util
pattern = r'(?im)^#\s*ansiblerequires\s+\-csharputil\s*Ansible\.Basic'
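# e.g. matches a line like: #AnsibleRequires -CSharpUtil Ansible.Basic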
if re.search(pattern, self.text) and self.object_name not in self.PS_ARG_VALIDATE_REJECTLIST:
with ModuleValidator(docs_path, base_branch=self.base_branch, git_cache=self.git_cache) as docs_mv:
docs = docs_mv._validate_docs()[1]
self._validate_ansible_module_call(docs)
self._check_gpl3_header()
if not self._just_docs() and not end_of_deprecation_should_be_removed_only:
self._check_interpreter(powershell=self._powershell_module())
self._check_type_instead_of_isinstance(
powershell=self._powershell_module()
)
class PythonPackageValidator(Validator):
REJECTLIST_FILES = frozenset(('__pycache__',))
def __init__(self, path, reporter=None):
super(PythonPackageValidator, self).__init__(reporter=reporter or Reporter())
self.path = path
self.basename = os.path.basename(path)
@property
def object_name(self):
return self.basename
@property
def object_path(self):
return self.path
def validate(self):
super(PythonPackageValidator, self).validate()
if self.basename in self.REJECTLIST_FILES:
return
init_file = os.path.join(self.path, '__init__.py')
if not os.path.exists(init_file):
self.reporter.error(
path=self.object_path,
code='subdirectory-missing-init',
msg='Ansible module subdirectories must contain an __init__.py'
)
def setup_collection_loader():
collections_paths = os.environ.get('ANSIBLE_COLLECTIONS_PATH', '').split(os.pathsep)
_AnsibleCollectionFinder(collections_paths)
def re_compile(value):
"""
Argparse expects things to raise TypeError, re.compile raises an re.error
exception
This function is a shorthand to convert the re.error exception to a
TypeError
"""
try:
return re.compile(value)
except re.error as e:
raise TypeError(e)
def run():
parser = argparse.ArgumentParser(prog="validate-modules")
parser.add_argument('modules', nargs='+',
help='Path to module or module directory')
parser.add_argument('-w', '--warnings', help='Show warnings',
action='store_true')
parser.add_argument('--exclude', help='RegEx exclusion pattern',
type=re_compile)
parser.add_argument('--arg-spec', help='Analyze module argument spec',
action='store_true', default=False)
parser.add_argument('--base-branch', default=None,
help='Used in determining if new options were added')
parser.add_argument('--format', choices=['json', 'plain'], default='plain',
help='Output format. Default: "%(default)s"')
parser.add_argument('--output', default='-',
help='Output location, use "-" for stdout. '
'Default "%(default)s"')
parser.add_argument('--collection',
help='Specifies the path to the collection, when '
'validating files within a collection. Ensure '
'that ANSIBLE_COLLECTIONS_PATH is set so the '
'contents of the collection can be located')
parser.add_argument('--collection-version',
help='The collection\'s version number used to check '
'deprecations')
args = parser.parse_args()
args.modules = [m.rstrip('/') for m in args.modules]
reporter = Reporter()
git_cache = GitCache(args.base_branch)
check_dirs = set()
routing = None
if args.collection:
setup_collection_loader()
routing_file = 'meta/runtime.yml'
# Load meta/runtime.yml if it exists, as it may contain deprecation information
if os.path.isfile(routing_file):
try:
with open(routing_file) as f:
routing = yaml.safe_load(f)
except yaml.error.MarkedYAMLError as ex:
print('%s:%d:%d: YAML load failed: %s' % (routing_file, ex.context_mark.line + 1, ex.context_mark.column + 1, re.sub(r'\s+', ' ', str(ex))))
except Exception as ex: # pylint: disable=broad-except
print('%s:%d:%d: YAML load failed: %s' % (routing_file, 0, 0, re.sub(r'\s+', ' ', str(ex))))
for module in args.modules:
if os.path.isfile(module):
path = module
if args.exclude and args.exclude.search(path):
continue
if ModuleValidator.is_on_rejectlist(path):
continue
with ModuleValidator(path, collection=args.collection, collection_version=args.collection_version,
analyze_arg_spec=args.arg_spec, base_branch=args.base_branch,
git_cache=git_cache, reporter=reporter, routing=routing) as mv1:
mv1.validate()
check_dirs.add(os.path.dirname(path))
for root, dirs, files in os.walk(module):
basedir = root[len(module) + 1:].split('/', 1)[0]
if basedir in REJECTLIST_DIRS:
continue
for dirname in dirs:
if root == module and dirname in REJECTLIST_DIRS:
continue
path = os.path.join(root, dirname)
if args.exclude and args.exclude.search(path):
continue
check_dirs.add(path)
for filename in files:
path = os.path.join(root, filename)
if args.exclude and args.exclude.search(path):
continue
if ModuleValidator.is_on_rejectlist(path):
continue
with ModuleValidator(path, collection=args.collection, collection_version=args.collection_version,
analyze_arg_spec=args.arg_spec, base_branch=args.base_branch,
git_cache=git_cache, reporter=reporter, routing=routing) as mv2:
mv2.validate()
if not args.collection:
for path in sorted(check_dirs):
pv = PythonPackageValidator(path, reporter=reporter)
pv.validate()
if args.format == 'plain':
sys.exit(reporter.plain(warnings=args.warnings, output=args.output))
else:
sys.exit(reporter.json(warnings=args.warnings, output=args.output))
class GitCache:
def __init__(self, base_branch):
self.base_branch = base_branch
if self.base_branch:
self.base_tree = self._git(['ls-tree', '-r', '--name-only', self.base_branch, 'lib/ansible/modules/'])
else:
self.base_tree = []
try:
self.head_tree = self._git(['ls-tree', '-r', '--name-only', 'HEAD', 'lib/ansible/modules/'])
except GitError as ex:
if ex.status == 128:
# fallback when there is no .git directory
self.head_tree = self._get_module_files()
else:
raise
except OSError as ex:
if ex.errno == errno.ENOENT:
# fallback when git is not installed
self.head_tree = self._get_module_files()
else:
raise
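# Map module basenames on the base branch to their paths so the base copy of a
# module can be located for comparison.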
self.base_module_paths = dict((os.path.basename(p), p) for p in self.base_tree if os.path.splitext(p)[1] in ('.py', '.ps1'))
self.base_module_paths.pop('__init__.py', None)
self.head_aliased_modules = set()
for path in self.head_tree:
filename = os.path.basename(path)
if filename.startswith('_') and filename != '__init__.py':
if os.path.islink(path):
self.head_aliased_modules.add(os.path.basename(os.path.realpath(path)))
@staticmethod
def _get_module_files():
module_files = []
for (dir_path, dir_names, file_names) in os.walk('lib/ansible/modules/'):
for file_name in file_names:
module_files.append(os.path.join(dir_path, file_name))
return module_files
@staticmethod
def _git(args):
cmd = ['git'] + args
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p.communicate()
if p.returncode != 0:
raise GitError(stderr, p.returncode)
return stdout.decode('utf-8').splitlines()
class GitError(Exception):
def __init__(self, message, status):
super(GitError, self).__init__(message)
self.status = status
def main():
try:
run()
except KeyboardInterrupt:
pass
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,336 |
yum module on RHEL7 fills up disk with large yum cache after a repo with repo_gpgcheck=1 is added
|
### Summary
See #74962 (feel free to re-open that one and close this one as a duplicate).
### Issue Type
Bug Report
### Component Name
yum
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.6]
config file = None
configured module search path = ['/home/username/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/username/.local/lib/python3.8/site-packages/ansible
ansible collection location = /home/username/.ansible/collections:/usr/share/ansible/collections
executable location = /home/username/.local/bin/ansible
python version = 3.8.10 (default, Sep 28 2021, 16:10:42) [GCC 9.3.0]
jinja version = 2.11.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
[nothing]
```
### OS / Environment
RHEL 7.9
### Steps to Reproduce
As part of the playbook, there are some tasks that install repos and download RPMs:
```yaml
- name: Register ansible Yum repository
become: yes
ansible.builtin.yum_repository:
name: ansible-epel7
description: Ansible Red Hat 7 EPEL
baseurl: https://releases.ansible.com/ansible/rpm/release/epel-7-x86_64/
gpgcheck: yes
gpgkey: https://releases.ansible.com/keys/RPM-GPG-KEY-ansible-release.pub
- name: Download Ansible RPMs to local controller Ansible RPM repository
include_tasks: yum-download.yml
vars:
rpm_name: ansible
```
and:
```yaml
- name: Register ansible-runner Yum repository
become: yes
ansible.builtin.yum_repository:
name: ansible-runner
description: Ansible target dependencies
baseurl: https://releases.ansible.com/ansible-runner/rpm/epel-7-x86_64/
gpgcheck: yes
gpgkey: https://releases.ansible.com/keys/RPM-GPG-KEY-ansible-release.pub
- name: Download pexpect RPMs
include_tasks: yum-download.yml
vars:
rpm_name: python2-pexpect
```
with `yum-download.yml` as:
```yaml
- name: 'Download {{ rpm_name }} RPMs using yum'
become: yes
ansible.builtin.yum:
name: '{{ rpm_name }}'
download_only: yes
download_dir: '{{local_rpm_repo_folder}}'
environment: '{{ env_http_proxy }}'
```
### Expected Results
When running this playbook against a Red Hat 7.9 VM on Azure, I would expect just the RPMs to be downloaded (with maybe some small caching by yum).
### Actual Results
When running this playbook against a Red Hat 7.9 VM on Azure, yum fills the entire `/var` with cached data. (In fact, because `/var` is 8G by default, I had to extend it first; I ended up with some 12G!)
Only after changing:
```yaml
gpgcheck: yes
```
to:
```yaml
gpgcheck: no
```
does this work as expected (no huge amounts of data in `/var`, RPMs downloaded).
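For context, the module source included further below already handles this kind of signature failure in `what_provides()`: when a repo with `repo_gpgcheck=1` fails with "repomd.xml signature could not be verified", it runs `yum makecache` to accept the repo key and retries the query. A minimal sketch of that retry pattern (not the module's exact code; `query`, `run_command` and `yum_basecmd` are stand-ins for the module's helpers):

```python
def query_with_makecache_retry(query, run_command, yum_basecmd, releasever=None):
    """Run a yum lookup, retrying once after 'yum makecache' on a GPG-key failure."""
    try:
        return query()  # 'query' is a zero-argument callable performing the lookup
    except Exception as e:
        # Only handle the unaccepted-repo-GPG-key case; re-raise anything else.
        if 'repomd.xml signature could not be verified' not in str(e):
            raise
        cmd = list(yum_basecmd) + ['makecache']
        if releasever:
            cmd.append('--releasever=%s' % releasever)
        run_command(cmd)  # accepting the repo key is a side effect of makecache
        return query()
```

Note that a full `makecache` downloads metadata for every enabled repo, which is consistent with the large cache growth observed here.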
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76336
|
https://github.com/ansible/ansible/pull/76345
|
4e70156d7ece970a41f8d0e7245002ea9a7df0ab
|
461f30c16012b7c6f2421047a44ef0e2d842617e
| 2021-11-22T08:39:47Z |
python
| 2021-12-13T16:38:20Z |
changelogs/fragments/76336-yum-makecache-fast.yml
| |
lib/ansible/modules/yum.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Red Hat, Inc
# Written by Seth Vidal <skvidal at fedoraproject.org>
# Copyright: (c) 2014, Epic Games, Inc.
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: yum
version_added: historical
short_description: Manages packages with the I(yum) package manager
description:
- Installs, upgrades, downgrades, removes, and lists packages and groups with the I(yum) package manager.
- This module only works on Python 2. If you require Python 3 support see the M(ansible.builtin.dnf) module.
options:
use_backend:
description:
- This module supports C(yum) (as it always has); this is known as C(yum3)/C(YUM3)/C(yum-deprecated) by
upstream yum developers. As of Ansible 2.7+, this module also supports C(YUM4), which is the
"new yum", and it has a C(dnf) backend.
- By default, this module will select the backend based on the C(ansible_pkg_mgr) fact.
default: "auto"
choices: [ auto, yum, yum4, dnf ]
type: str
version_added: "2.7"
name:
description:
- A package name or package specifier with version, like C(name-1.0).
- Comparison operators for package version are valid here: C(>), C(<), C(>=), C(<=). Example - C(name>=1.0)
- If a previous version is specified, the task also needs to turn C(allow_downgrade) on.
See the C(allow_downgrade) documentation for caveats with downgrading packages.
- When using state=latest, this can be C('*') which means run C(yum -y update).
- You can also pass a url or a local path to a rpm file (using state=present).
To operate on several packages this can accept a comma separated string of packages or (as of 2.0) a list of packages.
aliases: [ pkg ]
type: list
elements: str
exclude:
description:
- Package name(s) to exclude when state=present, or latest
type: list
elements: str
version_added: "2.0"
list:
description:
- "Package name to run the equivalent of yum list --show-duplicates <package> against. In addition to listing packages,
you can also list the following: C(installed), C(updates), C(available) and C(repos)."
- This parameter is mutually exclusive with C(name).
type: str
state:
description:
- Whether to install (C(present) or C(installed), C(latest)), or remove (C(absent) or C(removed)) a package.
- C(present) and C(installed) will simply ensure that a desired package is installed.
- C(latest) will update the specified package if it's not of the latest available version.
- C(absent) and C(removed) will remove the specified package.
- Default is C(None), however in effect the default action is C(present) unless the C(autoremove) option is
enabled for this module, then C(absent) is inferred.
type: str
choices: [ absent, installed, latest, present, removed ]
enablerepo:
description:
- I(Repoid) of repositories to enable for the install/update operation.
These repos will not persist beyond the transaction.
When specifying multiple repos, separate them with a C(",").
- As of Ansible 2.7, this can alternatively be a list instead of C(",")
separated string
type: list
elements: str
version_added: "0.9"
disablerepo:
description:
- I(Repoid) of repositories to disable for the install/update operation.
These repos will not persist beyond the transaction.
When specifying multiple repos, separate them with a C(",").
- As of Ansible 2.7, this can alternatively be a list instead of C(",")
separated string
type: list
elements: str
version_added: "0.9"
conf_file:
description:
- The remote yum configuration file to use for the transaction.
type: str
version_added: "0.6"
disable_gpg_check:
description:
- Whether to disable the GPG checking of signatures of packages being
installed. Has an effect only if state is I(present) or I(latest).
type: bool
default: "no"
version_added: "1.2"
skip_broken:
description:
- Skip all unavailable packages or packages with broken dependencies
without raising an error. Equivalent to passing the --skip-broken option.
type: bool
default: "no"
version_added: "2.3"
update_cache:
description:
- Force yum to check if cache is out of date and redownload if needed.
Has an effect only if state is I(present) or I(latest).
type: bool
default: "no"
aliases: [ expire-cache ]
version_added: "1.9"
validate_certs:
description:
- This only applies if using an https url as the source of the rpm, e.g. for localinstall. If set to C(no), the SSL certificates will not be validated.
- This should only be set to C(no) when used on personally controlled sites using self-signed certificates, as it avoids verifying the source site.
- Prior to 2.1 the code worked as if this was set to C(yes).
type: bool
default: "yes"
version_added: "2.1"
sslverify:
description:
- Disables SSL validation of the repository server for this transaction.
- This should be set to C(no) if one of the configured repositories is using an untrusted or self-signed certificate.
type: bool
default: "yes"
version_added: "2.13"
update_only:
description:
- When using latest, only update installed packages. Do not install packages.
- Has an effect only if state is I(latest)
default: "no"
type: bool
version_added: "2.5"
installroot:
description:
- Specifies an alternative installroot, relative to which all packages
will be installed.
default: "/"
type: str
version_added: "2.3"
security:
description:
- If set to C(yes), and C(state=latest) then only installs updates that have been marked security related.
type: bool
default: "no"
version_added: "2.4"
bugfix:
description:
- If set to C(yes), and C(state=latest) then only installs updates that have been marked bugfix related.
default: "no"
type: bool
version_added: "2.6"
allow_downgrade:
description:
- Specify if the named package and version is allowed to downgrade
a possibly already-installed higher version of that package.
Note that setting allow_downgrade=True can make this module
behave in a non-idempotent way. The task could end up with a set
of packages that does not match the complete list of specified
packages to install (because dependencies between the downgraded
package and others can cause changes to the packages which were
in the earlier transaction).
type: bool
default: "no"
version_added: "2.4"
enable_plugin:
description:
- I(Plugin) name to enable for the install/update operation.
The enabled plugin will not persist beyond the transaction.
type: list
elements: str
version_added: "2.5"
disable_plugin:
description:
- I(Plugin) name to disable for the install/update operation.
The disabled plugins will not persist beyond the transaction.
type: list
elements: str
version_added: "2.5"
releasever:
description:
- Specifies an alternative release from which all packages will be
installed.
type: str
version_added: "2.7"
autoremove:
description:
- If C(yes), removes all "leaf" packages from the system that were originally
installed as dependencies of user-installed packages but which are no longer
required by any such package. Should be used alone or when state is I(absent).
- "NOTE: This feature requires yum >= 3.4.3 (RHEL/CentOS 7+)"
type: bool
default: "no"
version_added: "2.7"
disable_excludes:
description:
- Disable the excludes defined in YUM config files.
- If set to C(all), disables all excludes.
- If set to C(main), disable excludes defined in [main] in yum.conf.
- If set to C(repoid), disable excludes defined for given repo id.
type: str
version_added: "2.7"
download_only:
description:
- Only download the packages, do not install them.
default: "no"
type: bool
version_added: "2.7"
lock_timeout:
description:
- Amount of time to wait for the yum lockfile to be freed.
required: false
default: 30
type: int
version_added: "2.8"
install_weak_deps:
description:
- Will also install all packages linked by a weak dependency relation.
- "NOTE: This feature requires yum >= 4 (RHEL/CentOS 8+)"
type: bool
default: "yes"
version_added: "2.8"
download_dir:
description:
- Specifies an alternate directory to store packages.
- Has an effect only if I(download_only) is specified.
type: str
version_added: "2.8"
install_repoquery:
description:
- If repoquery is not available, install yum-utils. If the system is
registered to RHN or an RHN Satellite, repoquery allows for querying
all channels assigned to the system. It is also required to use the
'list' parameter.
- "NOTE: This will run and be logged as a separate yum transation which
takes place before any other installation or removal."
- "NOTE: This will use the system's default enabled repositories without
regard for disablerepo/enablerepo given to the module."
required: false
version_added: "1.5"
default: "yes"
type: bool
cacheonly:
description:
- Tells yum to run entirely from system cache; does not download or update metadata.
default: "no"
type: bool
version_added: "2.12"
extends_documentation_fragment:
- action_common_attributes
- action_common_attributes.flow
attributes:
action:
details: In the case of yum, it has 2 action plugins that use it under the hood, M(ansible.builtin.yum) and M(ansible.builtin.package).
support: partial
async:
support: none
bypass_host_loop:
support: none
check_mode:
support: full
diff_mode:
support: full
platform:
platforms: rhel
notes:
- When used with a `loop:` each package will be processed individually;
it is much more efficient to pass the list directly to the `name` option.
- In versions prior to 1.9.2 this module installed and removed each package
given to the yum module separately. This caused problems when packages
specified by filename or url had to be installed or removed together. In
1.9.2 this was fixed so that packages are installed in one yum
transaction. However, if one of the packages adds a new yum repository
that the other packages come from (such as epel-release) then that package
needs to be installed in a separate task. This mimics yum's command line
behaviour.
- 'Yum itself has two types of groups. "Package groups" are specified in the
rpm itself while "environment groups" are specified in a separate file
(usually by the distribution). Unfortunately, this division becomes
apparent to ansible users because ansible needs to operate on the group
of packages in a single transaction and yum requires groups to be specified
in different ways when used in that way. Package groups are specified as
"@development-tools" and environment groups are "@^gnome-desktop-environment".
Use the "yum group list hidden ids" command to see which category of group the group
you want to install falls into.'
- 'The yum module does not support clearing the yum cache in an idempotent way, so it
was decided not to implement it; the only method is to use the command module and call
yum directly, namely "command: yum clean all"
https://github.com/ansible/ansible/pull/31450#issuecomment-352889579'
# informational: requirements for nodes
requirements:
- yum
author:
- Ansible Core Team
- Seth Vidal (@skvidal)
- Eduard Snesarev (@verm666)
- Berend De Schouwer (@berenddeschouwer)
- Abhijeet Kasurde (@Akasurde)
- Adam Miller (@maxamillion)
'''
EXAMPLES = '''
- name: Install the latest version of Apache
yum:
name: httpd
state: latest
- name: Install Apache >= 2.4
yum:
name: httpd>=2.4
state: present
- name: Install a list of packages (suitable replacement for 2.11 loop deprecation warning)
yum:
name:
- nginx
- postgresql
- postgresql-server
state: present
- name: Install a list of packages with a list variable
yum:
name: "{{ packages }}"
vars:
packages:
- httpd
- httpd-tools
- name: Remove the Apache package
yum:
name: httpd
state: absent
- name: Install the latest version of Apache from the testing repo
yum:
name: httpd
enablerepo: testing
state: present
- name: Install one specific version of Apache
yum:
name: httpd-2.2.29-1.4.amzn1
state: present
- name: Upgrade all packages
yum:
name: '*'
state: latest
- name: Upgrade all packages, excluding kernel & foo related packages
yum:
name: '*'
state: latest
exclude: kernel*,foo*
- name: Install the nginx rpm from a remote repo
yum:
name: http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm
state: present
- name: Install nginx rpm from a local file
yum:
name: /usr/local/src/nginx-release-centos-6-0.el6.ngx.noarch.rpm
state: present
- name: Install the 'Development tools' package group
yum:
name: "@Development tools"
state: present
- name: Install the 'Gnome desktop' environment group
yum:
name: "@^gnome-desktop-environment"
state: present
- name: List ansible packages and register result to print with debug later
yum:
list: ansible
register: result
- name: Install package with multiple repos enabled
yum:
name: sos
enablerepo: "epel,ol7_latest"
- name: Install package with multiple repos disabled
yum:
name: sos
disablerepo: "epel,ol7_latest"
- name: Download the nginx package but do not install it
yum:
name:
- nginx
state: latest
download_only: true
'''
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.common.respawn import has_respawned, respawn_module
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.urls import fetch_url
from ansible.module_utils.yumdnf import YumDnf, yumdnf_argument_spec
import errno
import os
import re
import sys
import tempfile
try:
import rpm
HAS_RPM_PYTHON = True
except ImportError:
HAS_RPM_PYTHON = False
try:
import yum
HAS_YUM_PYTHON = True
except ImportError:
HAS_YUM_PYTHON = False
try:
from yum.misc import find_unfinished_transactions, find_ts_remaining
from rpmUtils.miscutils import splitFilename, compareEVR
transaction_helpers = True
except ImportError:
transaction_helpers = False
from contextlib import contextmanager
from ansible.module_utils.urls import fetch_file
def_qf = "%{epoch}:%{name}-%{version}-%{release}.%{arch}"
rpmbin = None
class YumModule(YumDnf):
"""
Yum Ansible module back-end implementation
"""
def __init__(self, module):
# state=installed name=pkgspec
# state=removed name=pkgspec
# state=latest name=pkgspec
#
# informational commands:
# list=installed
# list=updates
# list=available
# list=repos
# list=pkgspec
# This populates instance vars for all argument spec params
super(YumModule, self).__init__(module)
self.pkg_mgr_name = "yum"
self.lockfile = '/var/run/yum.pid'
self._yum_base = None
def _enablerepos_with_error_checking(self):
# NOTE: This seems unintuitive, but it mirrors yum's CLI behavior
if len(self.enablerepo) == 1:
try:
self.yum_base.repos.enableRepo(self.enablerepo[0])
except yum.Errors.YumBaseError as e:
if u'repository not found' in to_text(e):
self.module.fail_json(msg="Repository %s not found." % self.enablerepo[0])
else:
raise e
else:
for rid in self.enablerepo:
try:
self.yum_base.repos.enableRepo(rid)
except yum.Errors.YumBaseError as e:
if u'repository not found' in to_text(e):
self.module.warn("Repository %s not found." % rid)
else:
raise e
def is_lockfile_pid_valid(self):
try:
try:
with open(self.lockfile, 'r') as f:
oldpid = int(f.readline())
except ValueError:
# invalid data
os.unlink(self.lockfile)
return False
if oldpid == os.getpid():
# that's us?
os.unlink(self.lockfile)
return False
try:
with open("/proc/%d/stat" % oldpid, 'r') as f:
stat = f.readline()
if stat.split()[2] == 'Z':
# Zombie
os.unlink(self.lockfile)
return False
except IOError:
# either /proc is not mounted or the process is already dead
try:
# check the state of the process
os.kill(oldpid, 0)
except OSError as e:
if e.errno == errno.ESRCH:
# No such process
os.unlink(self.lockfile)
return False
self.module.fail_json(msg="Unable to check PID %s in %s: %s" % (oldpid, self.lockfile, to_native(e)))
except (IOError, OSError) as e:
# lockfile disappeared?
return False
# another copy seems to be running
return True
@property
def yum_base(self):
if self._yum_base:
return self._yum_base
else:
# Only init once
self._yum_base = yum.YumBase()
self._yum_base.preconf.debuglevel = 0
self._yum_base.preconf.errorlevel = 0
self._yum_base.preconf.plugins = True
self._yum_base.preconf.enabled_plugins = self.enable_plugin
self._yum_base.preconf.disabled_plugins = self.disable_plugin
if self.releasever:
self._yum_base.preconf.releasever = self.releasever
if self.installroot != '/':
# do not setup installroot by default, because of error
# CRITICAL:yum.cli:Config Error: Error accessing file for config file:////etc/yum.conf
# in old yum version (like in CentOS 6.6)
self._yum_base.preconf.root = self.installroot
self._yum_base.conf.installroot = self.installroot
if self.conf_file and os.path.exists(self.conf_file):
self._yum_base.preconf.fn = self.conf_file
if os.geteuid() != 0:
if hasattr(self._yum_base, 'setCacheDir'):
self._yum_base.setCacheDir()
else:
cachedir = yum.misc.getCacheDir()
self._yum_base.repos.setCacheDir(cachedir)
self._yum_base.conf.cache = 0
if self.disable_excludes:
self._yum_base.conf.disable_excludes = self.disable_excludes
# setting conf.sslverify allows retrieving the repo's metadata
# without validating the certificate, but that does not allow
# package installation from a bad-ssl repo.
self._yum_base.conf.sslverify = self.sslverify
# A side effect of accessing conf is that the configuration is
# loaded and plugins are discovered
self._yum_base.conf
try:
for rid in self.disablerepo:
self.yum_base.repos.disableRepo(rid)
self._enablerepos_with_error_checking()
except Exception as e:
self.module.fail_json(msg="Failure talking to yum: %s" % to_native(e))
return self._yum_base
def po_to_envra(self, po):
if hasattr(po, 'ui_envra'):
return po.ui_envra
return '%s:%s-%s-%s.%s' % (po.epoch, po.name, po.version, po.release, po.arch)
def is_group_env_installed(self, name):
name_lower = name.lower()
if yum.__version_info__ >= (3, 4):
groups_list = self.yum_base.doGroupLists(return_evgrps=True)
else:
groups_list = self.yum_base.doGroupLists()
# list of the installed groups on the first index
groups = groups_list[0]
for group in groups:
if name_lower.endswith(group.name.lower()) or name_lower.endswith(group.groupid.lower()):
return True
if yum.__version_info__ >= (3, 4):
# list of the installed env_groups on the third index
envs = groups_list[2]
for env in envs:
if name_lower.endswith(env.name.lower()) or name_lower.endswith(env.environmentid.lower()):
return True
return False
def is_installed(self, repoq, pkgspec, qf=None, is_pkg=False):
if qf is None:
qf = "%{epoch}:%{name}-%{version}-%{release}.%{arch}\n"
if not repoq:
pkgs = []
try:
e, m, _ = self.yum_base.rpmdb.matchPackageNames([pkgspec])
pkgs = e + m
if not pkgs and not is_pkg:
pkgs.extend(self.yum_base.returnInstalledPackagesByDep(pkgspec))
except Exception as e:
self.module.fail_json(msg="Failure talking to yum: %s" % to_native(e))
return [self.po_to_envra(p) for p in pkgs]
else:
global rpmbin
if not rpmbin:
rpmbin = self.module.get_bin_path('rpm', required=True)
cmd = [rpmbin, '-q', '--qf', qf, pkgspec]
if '*' in pkgspec:
cmd.append('-a')
if self.installroot != '/':
cmd.extend(['--root', self.installroot])
# rpm localizes messages and we're screen scraping so make sure we use
# an appropriate locale
locale = get_best_parsable_locale(self.module)
lang_env = dict(LANG=locale, LC_ALL=locale, LC_MESSAGES=locale)
rc, out, err = self.module.run_command(cmd, environ_update=lang_env)
if rc != 0 and 'is not installed' not in out:
self.module.fail_json(msg='Error from rpm: %s: %s' % (cmd, err))
if 'is not installed' in out:
out = ''
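# rpm prints '(none)' for a missing epoch; normalize it to '0' so the ENVRA
# strings compare consistently.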
pkgs = [p for p in out.replace('(none)', '0').split('\n') if p.strip()]
if not pkgs and not is_pkg:
cmd = [rpmbin, '-q', '--qf', qf, '--whatprovides', pkgspec]
if self.installroot != '/':
cmd.extend(['--root', self.installroot])
rc2, out2, err2 = self.module.run_command(cmd, environ_update=lang_env)
else:
rc2, out2, err2 = (0, '', '')
if rc2 != 0 and 'no package provides' not in out2:
self.module.fail_json(msg='Error from rpm: %s: %s' % (cmd, err + err2))
if 'no package provides' in out2:
out2 = ''
pkgs += [p for p in out2.replace('(none)', '0').split('\n') if p.strip()]
return pkgs
return []
def is_available(self, repoq, pkgspec, qf=def_qf):
if not repoq:
pkgs = []
try:
e, m, _ = self.yum_base.pkgSack.matchPackageNames([pkgspec])
pkgs = e + m
if not pkgs:
pkgs.extend(self.yum_base.returnPackagesByDep(pkgspec))
except Exception as e:
self.module.fail_json(msg="Failure talking to yum: %s" % to_native(e))
return [self.po_to_envra(p) for p in pkgs]
else:
myrepoq = list(repoq)
r_cmd = ['--disablerepo', ','.join(self.disablerepo)]
myrepoq.extend(r_cmd)
r_cmd = ['--enablerepo', ','.join(self.enablerepo)]
myrepoq.extend(r_cmd)
if self.releasever:
myrepoq.append('--releasever=%s' % self.releasever)
cmd = myrepoq + ["--qf", qf, pkgspec]
rc, out, err = self.module.run_command(cmd)
if rc == 0:
return [p for p in out.split('\n') if p.strip()]
else:
self.module.fail_json(msg='Error from repoquery: %s: %s' % (cmd, err))
return []
def is_update(self, repoq, pkgspec, qf=def_qf):
if not repoq:
pkgs = []
updates = []
try:
pkgs = self.yum_base.returnPackagesByDep(pkgspec) + \
self.yum_base.returnInstalledPackagesByDep(pkgspec)
if not pkgs:
e, m, _ = self.yum_base.pkgSack.matchPackageNames([pkgspec])
pkgs = e + m
updates = self.yum_base.doPackageLists(pkgnarrow='updates').updates
except Exception as e:
self.module.fail_json(msg="Failure talking to yum: %s" % to_native(e))
retpkgs = (pkg for pkg in pkgs if pkg in updates)
return set(self.po_to_envra(p) for p in retpkgs)
else:
myrepoq = list(repoq)
r_cmd = ['--disablerepo', ','.join(self.disablerepo)]
myrepoq.extend(r_cmd)
r_cmd = ['--enablerepo', ','.join(self.enablerepo)]
myrepoq.extend(r_cmd)
if self.releasever:
myrepoq.append('--releasever=%s' % self.releasever)
cmd = myrepoq + ["--pkgnarrow=updates", "--qf", qf, pkgspec]
rc, out, err = self.module.run_command(cmd)
if rc == 0:
return set(p for p in out.split('\n') if p.strip())
else:
self.module.fail_json(msg='Error from repoquery: %s: %s' % (cmd, err))
return set()
def what_provides(self, repoq, req_spec, qf=def_qf):
if not repoq:
pkgs = []
try:
try:
pkgs = self.yum_base.returnPackagesByDep(req_spec) + \
self.yum_base.returnInstalledPackagesByDep(req_spec)
except Exception as e:
# If a repo with `repo_gpgcheck=1` is added and the repo GPG
# key was never accepted, querying this repo will throw an
# error: 'repomd.xml signature could not be verified'. In that
# situation we need to run `yum -y makecache` which will accept
# the key and try again.
if 'repomd.xml signature could not be verified' in to_native(e):
if self.releasever:
self.module.run_command(self.yum_basecmd + ['makecache'] + ['--releasever=%s' % self.releasever])
else:
self.module.run_command(self.yum_basecmd + ['makecache'])
pkgs = self.yum_base.returnPackagesByDep(req_spec) + \
self.yum_base.returnInstalledPackagesByDep(req_spec)
else:
raise
if not pkgs:
e, m, _ = self.yum_base.pkgSack.matchPackageNames([req_spec])
pkgs.extend(e)
pkgs.extend(m)
e, m, _ = self.yum_base.rpmdb.matchPackageNames([req_spec])
pkgs.extend(e)
pkgs.extend(m)
except Exception as e:
self.module.fail_json(msg="Failure talking to yum: %s" % to_native(e))
return set(self.po_to_envra(p) for p in pkgs)
else:
myrepoq = list(repoq)
r_cmd = ['--disablerepo', ','.join(self.disablerepo)]
myrepoq.extend(r_cmd)
r_cmd = ['--enablerepo', ','.join(self.enablerepo)]
myrepoq.extend(r_cmd)
if self.releasever:
myrepoq.append('--releasever=%s' % self.releasever)
cmd = myrepoq + ["--qf", qf, "--whatprovides", req_spec]
rc, out, err = self.module.run_command(cmd)
cmd = myrepoq + ["--qf", qf, req_spec]
rc2, out2, err2 = self.module.run_command(cmd)
if rc == 0 and rc2 == 0:
out += out2
pkgs = {p for p in out.split('\n') if p.strip()}
if not pkgs:
pkgs = self.is_installed(repoq, req_spec, qf=qf)
return pkgs
else:
self.module.fail_json(msg='Error from repoquery: %s: %s' % (cmd, err + err2))
return set()
def transaction_exists(self, pkglist):
"""
checks the package list to see if any packages are
involved in an incomplete transaction
"""
conflicts = []
if not transaction_helpers:
return conflicts
# first, we create a list of the package 'nvreas'
# so we can compare the pieces later more easily
pkglist_nvreas = (splitFilename(pkg) for pkg in pkglist)
# next, we build the list of packages that are
# contained within an unfinished transaction
unfinished_transactions = find_unfinished_transactions()
for trans in unfinished_transactions:
steps = find_ts_remaining(trans)
for step in steps:
# the action is install/erase/etc., but we only
# care about the package spec contained in the step
(action, step_spec) = step
(n, v, r, e, a) = splitFilename(step_spec)
# and see if that spec is in the list of packages
# requested for installation/updating
for pkg in pkglist_nvreas:
# if the name and arch match, we're going to assume
# this package is part of a pending transaction
# the label is just for display purposes
label = "%s-%s" % (n, a)
if n == pkg[0] and a == pkg[4]:
if label not in conflicts:
conflicts.append("%s-%s" % (n, a))
break
return conflicts
def local_envra(self, path):
"""return envra of a local rpm passed in"""
ts = rpm.TransactionSet()
ts.setVSFlags(rpm._RPMVSF_NOSIGNATURES)
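# signature checks are skipped; only the header fields are needed here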
fd = os.open(path, os.O_RDONLY)
try:
header = ts.hdrFromFdno(fd)
except rpm.error as e:
return None
finally:
os.close(fd)
return '%s:%s-%s-%s.%s' % (
header[rpm.RPMTAG_EPOCH] or '0',
header[rpm.RPMTAG_NAME],
header[rpm.RPMTAG_VERSION],
header[rpm.RPMTAG_RELEASE],
header[rpm.RPMTAG_ARCH]
)
@contextmanager
def set_env_proxy(self):
# setting system proxy environment and saving old, if exists
namepass = ""
scheme = ["http", "https"]
old_proxy_env = [os.getenv("http_proxy"), os.getenv("https_proxy")]
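# Proxy credentials may come from conf.proxy_username/proxy_password or be
# embedded in the proxy URL itself; rebuild the URL accordingly before exporting.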
try:
# "_none_" is a special value to disable proxy in yum.conf/*.repo
if self.yum_base.conf.proxy and self.yum_base.conf.proxy not in ("_none_",):
if self.yum_base.conf.proxy_username:
namepass = namepass + self.yum_base.conf.proxy_username
proxy_url = self.yum_base.conf.proxy
if self.yum_base.conf.proxy_password:
namepass = namepass + ":" + self.yum_base.conf.proxy_password
elif '@' in self.yum_base.conf.proxy:
namepass = self.yum_base.conf.proxy.split('@')[0].split('//')[-1]
proxy_url = self.yum_base.conf.proxy.replace("{0}@".format(namepass), "")
if namepass:
namepass = namepass + '@'
for item in scheme:
os.environ[item + "_proxy"] = re.sub(
r"(http://)",
r"\g<1>" + namepass, proxy_url
)
else:
for item in scheme:
os.environ[item + "_proxy"] = self.yum_base.conf.proxy
yield
except yum.Errors.YumBaseError:
raise
finally:
# revert to the previous system configuration
for item in scheme:
if os.getenv("{0}_proxy".format(item)):
del os.environ["{0}_proxy".format(item)]
if old_proxy_env[0]:
os.environ["http_proxy"] = old_proxy_env[0]
if old_proxy_env[1]:
os.environ["https_proxy"] = old_proxy_env[1]
def pkg_to_dict(self, pkgstr):
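# Expected input: 'name|epoch|version|release|arch|repoid' as produced by the
# query formats in list_stuff().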
if pkgstr.strip() and pkgstr.count('|') == 5:
n, e, v, r, a, repo = pkgstr.split('|')
else:
return {'error_parsing': pkgstr}
d = {
'name': n,
'arch': a,
'epoch': e,
'release': r,
'version': v,
'repo': repo,
'envra': '%s:%s-%s-%s.%s' % (e, n, v, r, a)
}
if repo == 'installed':
d['yumstate'] = 'installed'
else:
d['yumstate'] = 'available'
return d
def repolist(self, repoq, qf="%{repoid}"):
cmd = repoq + ["--qf", qf, "-a"]
if self.releasever:
cmd.extend(['--releasever=%s' % self.releasever])
rc, out, _ = self.module.run_command(cmd)
if rc == 0:
return set(p for p in out.split('\n') if p.strip())
else:
return []
def list_stuff(self, repoquerybin, stuff):
qf = "%{name}|%{epoch}|%{version}|%{release}|%{arch}|%{repoid}"
# is_installed goes through rpm instead of repoquery so it needs a slightly different format
is_installed_qf = "%{name}|%{epoch}|%{version}|%{release}|%{arch}|installed\n"
repoq = [repoquerybin, '--show-duplicates', '--plugins', '--quiet']
if self.disablerepo:
repoq.extend(['--disablerepo', ','.join(self.disablerepo)])
if self.enablerepo:
repoq.extend(['--enablerepo', ','.join(self.enablerepo)])
if self.installroot != '/':
repoq.extend(['--installroot', self.installroot])
if self.conf_file and os.path.exists(self.conf_file):
repoq += ['-c', self.conf_file]
if stuff == 'installed':
return [self.pkg_to_dict(p) for p in sorted(self.is_installed(repoq, '-a', qf=is_installed_qf)) if p.strip()]
if stuff == 'updates':
return [self.pkg_to_dict(p) for p in sorted(self.is_update(repoq, '-a', qf=qf)) if p.strip()]
if stuff == 'available':
return [self.pkg_to_dict(p) for p in sorted(self.is_available(repoq, '-a', qf=qf)) if p.strip()]
if stuff == 'repos':
return [dict(repoid=name, state='enabled') for name in sorted(self.repolist(repoq)) if name.strip()]
return [
self.pkg_to_dict(p) for p in
sorted(self.is_installed(repoq, stuff, qf=is_installed_qf) + self.is_available(repoq, stuff, qf=qf))
if p.strip()
]
def exec_install(self, items, action, pkgs, res):
cmd = self.yum_basecmd + [action] + pkgs
if self.releasever:
cmd.extend(['--releasever=%s' % self.releasever])
# setting sslverify using --setopt is required as conf.sslverify only
# affects the metadata retrieval.
if not self.sslverify:
cmd.extend(['--setopt', 'sslverify=0'])
if self.module.check_mode:
self.module.exit_json(changed=True, results=res['results'], changes=dict(installed=pkgs))
else:
res['changes'] = dict(installed=pkgs)
locale = get_best_parsable_locale(self.module)
lang_env = dict(LANG=locale, LC_ALL=locale, LC_MESSAGES=locale)
rc, out, err = self.module.run_command(cmd, environ_update=lang_env)
if rc == 1:
for spec in items:
# Fail on invalid urls:
if ('://' in spec and ('No package %s available.' % spec in out or 'Cannot open: %s. Skipping.' % spec in err)):
err = 'Package at %s could not be installed' % spec
self.module.fail_json(changed=False, msg=err, rc=rc)
res['rc'] = rc
res['results'].append(out)
res['msg'] += err
res['changed'] = True
if ('Nothing to do' in out and rc == 0) or ('does not have any packages' in err):
res['changed'] = False
if rc != 0:
res['changed'] = False
self.module.fail_json(**res)
# Fail if yum prints 'No space left on device' because that means some
# packages failed executing their post install scripts because of lack of
# free space (e.g. kernel package couldn't generate initramfs). Note that
# yum can still exit with rc=0 even if some post scripts didn't execute
# correctly.
if 'No space left on device' in (out or err):
res['changed'] = False
res['msg'] = 'No space left on device'
self.module.fail_json(**res)
# FIXME - if we did an install - go and check the rpmdb to see if it actually installed
# look for each pkg in rpmdb
# look for each pkg via obsoletes
return res
def install(self, items, repoq):
pkgs = []
downgrade_pkgs = []
res = {}
res['results'] = []
res['msg'] = ''
res['rc'] = 0
res['changed'] = False
for spec in items:
pkg = None
downgrade_candidate = False
# check if pkgspec is installed (if possible for idempotence)
if spec.endswith('.rpm') or '://' in spec:
if '://' not in spec and not os.path.exists(spec):
res['msg'] += "No RPM file matching '%s' found on system" % spec
res['results'].append("No RPM file matching '%s' found on system" % spec)
res['rc'] = 127 # Ensure the task fails in with-loop
self.module.fail_json(**res)
if '://' in spec:
with self.set_env_proxy():
package = fetch_file(self.module, spec)
if not package.endswith('.rpm'):
# yum requires a local file to have the extension of .rpm and we
# cannot guarantee that from a URL (redirects, proxies, etc.)
new_package_path = '%s.rpm' % package
os.rename(package, new_package_path)
package = new_package_path
else:
package = spec
# most common case is the pkg is already installed
envra = self.local_envra(package)
if envra is None:
self.module.fail_json(msg="Failed to get envra information from RPM package: %s" % spec)
installed_pkgs = self.is_installed(repoq, envra)
if installed_pkgs:
res['results'].append('%s providing %s is already installed' % (installed_pkgs[0], package))
continue
(name, ver, rel, epoch, arch) = splitFilename(envra)
installed_pkgs = self.is_installed(repoq, name)
# case for two same envr but different archs like x86_64 and i686
if len(installed_pkgs) == 2:
(cur_name0, cur_ver0, cur_rel0, cur_epoch0, cur_arch0) = splitFilename(installed_pkgs[0])
(cur_name1, cur_ver1, cur_rel1, cur_epoch1, cur_arch1) = splitFilename(installed_pkgs[1])
cur_epoch0 = cur_epoch0 or '0'
cur_epoch1 = cur_epoch1 or '0'
compare = compareEVR((cur_epoch0, cur_ver0, cur_rel0), (cur_epoch1, cur_ver1, cur_rel1))
if compare == 0 and cur_arch0 != cur_arch1:
for installed_pkg in installed_pkgs:
if installed_pkg.endswith(arch):
installed_pkgs = [installed_pkg]
if len(installed_pkgs) == 1:
installed_pkg = installed_pkgs[0]
(cur_name, cur_ver, cur_rel, cur_epoch, cur_arch) = splitFilename(installed_pkg)
cur_epoch = cur_epoch or '0'
compare = compareEVR((cur_epoch, cur_ver, cur_rel), (epoch, ver, rel))
# compare > 0 -> higher version is installed
# compare == 0 -> exact version is installed
# compare < 0 -> lower version is installed
if compare > 0 and self.allow_downgrade:
downgrade_candidate = True
elif compare >= 0:
continue
# else: if there are more installed packages with the same name, that would mean
# kernel, gpg-pubkey or like, so just let yum deal with it and try to install it
pkg = package
# groups
elif spec.startswith('@'):
if self.is_group_env_installed(spec):
continue
pkg = spec
# range requires or file-requires or pkgname :(
else:
# most common case is the pkg is already installed and done
# short circuit all the bs - and search for it as a pkg in is_installed
# if you find it then we're done
if not set(['*', '?']).intersection(set(spec)):
installed_pkgs = self.is_installed(repoq, spec, is_pkg=True)
if installed_pkgs:
res['results'].append('%s providing %s is already installed' % (installed_pkgs[0], spec))
continue
# look up what pkgs provide this
pkglist = self.what_provides(repoq, spec)
if not pkglist:
res['msg'] += "No package matching '%s' found available, installed or updated" % spec
res['results'].append("No package matching '%s' found available, installed or updated" % spec)
res['rc'] = 126 # Ensure the task fails in with-loop
self.module.fail_json(**res)
# if any of the packages are involved in a transaction, fail now
# so that we don't hang on the yum operation later
conflicts = self.transaction_exists(pkglist)
if conflicts:
res['msg'] += "The following packages have pending transactions: %s" % ", ".join(conflicts)
res['rc'] = 125 # Ensure the task fails in with-loop
self.module.fail_json(**res)
# if any of them are installed
# then nothing to do
found = False
for this in pkglist:
if self.is_installed(repoq, this, is_pkg=True):
found = True
res['results'].append('%s providing %s is already installed' % (this, spec))
break
# if the version of the pkg you have installed is not in ANY repo, but there are
# other versions in the repos (both higher and lower) then the previous checks won't work.
# so we check one more time. This really only works for pkgname - not for file provides or virt provides
# but virt provides should be all caught in what_provides on its own.
# highly irritating
if not found:
if self.is_installed(repoq, spec):
found = True
res['results'].append('package providing %s is already installed' % (spec))
if found:
continue
# Downgrade - The yum install command will only install or upgrade to a spec version, it will
# not install an older version of an RPM even if specified by the install spec. So we need to
# determine if this is a downgrade, and then use the yum downgrade command to install the RPM.
if self.allow_downgrade:
for package in pkglist:
# Get the NEVRA of the requested package using pkglist instead of spec because pkglist
# contains consistently-formatted package names returned by yum, rather than user input
# that is often not parsed correctly by splitFilename().
(name, ver, rel, epoch, arch) = splitFilename(package)
# Check if any version of the requested package is installed
inst_pkgs = self.is_installed(repoq, name, is_pkg=True)
if inst_pkgs:
(cur_name, cur_ver, cur_rel, cur_epoch, cur_arch) = splitFilename(inst_pkgs[0])
compare = compareEVR((cur_epoch, cur_ver, cur_rel), (epoch, ver, rel))
if compare > 0:
downgrade_candidate = True
else:
downgrade_candidate = False
break
# If package needs to be installed/upgraded/downgraded, then pass in the spec
# we could get here if nothing provides it but that's not
# the error we're catching here
pkg = spec
if downgrade_candidate and self.allow_downgrade:
downgrade_pkgs.append(pkg)
else:
pkgs.append(pkg)
if downgrade_pkgs:
res = self.exec_install(items, 'downgrade', downgrade_pkgs, res)
if pkgs:
res = self.exec_install(items, 'install', pkgs, res)
return res
def remove(self, items, repoq):
pkgs = []
res = {}
res['results'] = []
res['msg'] = ''
res['changed'] = False
res['rc'] = 0
for pkg in items:
if pkg.startswith('@'):
installed = self.is_group_env_installed(pkg)
else:
installed = self.is_installed(repoq, pkg)
if installed:
pkgs.append(pkg)
else:
res['results'].append('%s is not installed' % pkg)
if pkgs:
if self.module.check_mode:
self.module.exit_json(changed=True, results=res['results'], changes=dict(removed=pkgs))
else:
res['changes'] = dict(removed=pkgs)
# run an actual yum transaction
if self.autoremove:
cmd = self.yum_basecmd + ["autoremove"] + pkgs
else:
cmd = self.yum_basecmd + ["remove"] + pkgs
rc, out, err = self.module.run_command(cmd)
res['rc'] = rc
res['results'].append(out)
res['msg'] = err
if rc != 0:
if self.autoremove and 'No such command' in out:
self.module.fail_json(msg='Version of YUM too old for autoremove: Requires yum 3.4.3 (RHEL/CentOS 7+)')
else:
self.module.fail_json(**res)
# compile the results into one batch. If anything is changed
# then mark changed
# at the end - if we've end up failed then fail out of the rest
# of the process
# at this point we check to see if the pkg is no longer present
self._yum_base = None # previous YumBase package index is now invalid
for pkg in pkgs:
if pkg.startswith('@'):
installed = self.is_group_env_installed(pkg)
else:
installed = self.is_installed(repoq, pkg, is_pkg=True)
if installed:
# Return a message so it's obvious to the user why yum failed
# and which package couldn't be removed. More details:
# https://github.com/ansible/ansible/issues/35672
res['msg'] = "Package '%s' couldn't be removed!" % pkg
self.module.fail_json(**res)
res['changed'] = True
return res
def run_check_update(self):
# run check-update to see if we have packages pending
if self.releasever:
rc, out, err = self.module.run_command(self.yum_basecmd + ['check-update'] + ['--releasever=%s' % self.releasever])
else:
rc, out, err = self.module.run_command(self.yum_basecmd + ['check-update'])
return rc, out, err
@staticmethod
def parse_check_update(check_update_output):
# preprocess string and filter out empty lines so the regex below works
out = '\n'.join((l for l in check_update_output.splitlines() if l))
# Remove incorrect new lines in longer columns in output from yum check-update
# yum line wrapping can move the repo to the next line:
# some_looooooooooooooooooooooooooooooooooooong_package_name 1:1.2.3-1.el7
# some-repo-label
out = re.sub(r'\n\W+(.*)', r' \1', out)
updates = {}
obsoletes = {}
for line in out.split('\n'):
line = line.split()
"""
Ignore irrelevant lines:
- '*' in line matches lines like mirror lists: "* base: mirror.corbina.net"
- len(line) != 3 or 6 could be strings like:
"This system is not registered with an entitlement server..."
- len(line) = 6 is package obsoletes
- checking for '.' in line[0] (package name) likely ensures that it is of format:
"package_name.arch" (coreutils.x86_64)
"""
if '*' in line or len(line) not in [3, 6] or '.' not in line[0]:
continue
pkg, version, repo = line[0], line[1], line[2]
name, dist = pkg.rsplit('.', 1)
if name not in updates:
updates[name] = []
updates[name].append({'version': version, 'dist': dist, 'repo': repo})
if len(line) == 6:
obsolete_pkg, obsolete_version, obsolete_repo = line[3], line[4], line[5]
obsolete_name, obsolete_dist = obsolete_pkg.rsplit('.', 1)
if obsolete_name not in obsoletes:
obsoletes[obsolete_name] = []
obsoletes[obsolete_name].append({'version': obsolete_version, 'dist': obsolete_dist, 'repo': obsolete_repo})
return updates, obsoletes
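# Example (illustrative): a check-update line such as
#   'coreutils.x86_64  8.22-24.el7  base'
# yields updates == {'coreutils': [{'version': '8.22-24.el7',
#                                   'dist': 'x86_64', 'repo': 'base'}]}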
def latest(self, items, repoq):
res = {}
res['results'] = []
res['msg'] = ''
res['changed'] = False
res['rc'] = 0
pkgs = {}
pkgs['update'] = []
pkgs['install'] = []
updates = {}
obsoletes = {}
update_all = False
cmd = None
# determine if we're doing an update all
if '*' in items:
update_all = True
rc, out, err = self.run_check_update()
if rc == 0 and update_all:
res['results'].append('Nothing to do here, all packages are up to date')
return res
elif rc == 100:
updates, obsoletes = self.parse_check_update(out)
elif rc == 1:
res['msg'] = err
res['rc'] = rc
self.module.fail_json(**res)
if update_all:
cmd = self.yum_basecmd + ['update']
will_update = set(updates.keys())
will_update_from_other_package = dict()
else:
will_update = set()
will_update_from_other_package = dict()
for spec in items:
# some guess work involved with groups. update @<group> will install the group if missing
if spec.startswith('@'):
pkgs['update'].append(spec)
will_update.add(spec)
continue
# check if pkgspec is installed (if possible for idempotence)
# localpkg
if spec.endswith('.rpm') and '://' not in spec:
if not os.path.exists(spec):
res['msg'] += "No RPM file matching '%s' found on system" % spec
res['results'].append("No RPM file matching '%s' found on system" % spec)
res['rc'] = 127 # Ensure the task fails in with-loop
self.module.fail_json(**res)
# get the pkg e:name-v-r.arch
envra = self.local_envra(spec)
if envra is None:
self.module.fail_json(msg="Failed to get envra information from RPM package: %s" % spec)
# local rpm files can't be updated
if self.is_installed(repoq, envra):
pkgs['update'].append(spec)
else:
pkgs['install'].append(spec)
continue
# URL
if '://' in spec:
# download package so that we can check if it's already installed
with self.set_env_proxy():
package = fetch_file(self.module, spec)
envra = self.local_envra(package)
if envra is None:
self.module.fail_json(msg="Failed to get envra information from RPM package: %s" % spec)
# local rpm files can't be updated
if self.is_installed(repoq, envra):
pkgs['update'].append(spec)
else:
pkgs['install'].append(spec)
continue
# dep/pkgname - find it
if self.is_installed(repoq, spec):
pkgs['update'].append(spec)
else:
pkgs['install'].append(spec)
pkglist = self.what_provides(repoq, spec)
# FIXME..? may not be desirable to throw an exception here if a single package is missing
if not pkglist:
res['msg'] += "No package matching '%s' found available, installed or updated" % spec
res['results'].append("No package matching '%s' found available, installed or updated" % spec)
res['rc'] = 126 # Ensure the task fails in with-loop
self.module.fail_json(**res)
nothing_to_do = True
for pkg in pkglist:
if spec in pkgs['install'] and self.is_available(repoq, pkg):
nothing_to_do = False
break
# this contains the full NVR and spec could contain wildcards
# or virtual provides (like "python-*" or "smtp-daemon") while
# updates contains name only.
pkgname, _, _, _, _ = splitFilename(pkg)
if spec in pkgs['update'] and pkgname in updates:
nothing_to_do = False
will_update.add(spec)
# Massage the updates list
if spec != pkgname:
# For reporting what packages would be updated more
# succinctly
will_update_from_other_package[spec] = pkgname
break
if not self.is_installed(repoq, spec) and self.update_only:
res['results'].append("Packages providing %s not installed due to update_only specified" % spec)
continue
if nothing_to_do:
res['results'].append("All packages providing %s are up to date" % spec)
continue
# if any of the packages are involved in a transaction, fail now
# so that we don't hang on the yum operation later
conflicts = self.transaction_exists(pkglist)
if conflicts:
res['msg'] += "The following packages have pending transactions: %s" % ", ".join(conflicts)
res['results'].append("The following packages have pending transactions: %s" % ", ".join(conflicts))
res['rc'] = 128 # Ensure the task fails in with-loop
self.module.fail_json(**res)
# check_mode output
to_update = []
for w in will_update:
if w.startswith('@'):
# yum groups
to_update.append((w, None))
elif w not in updates:
# There are (at least, probably more) 2 ways we can get here:
#
# * A virtual provides (our user specifies "webserver", but
# "httpd" is the key in 'updates').
#
# * A wildcard. emac* will get us here if there's a package
# called 'emacs' in the pending updates list. 'updates' will
# of course key on 'emacs' in that case.
other_pkg = will_update_from_other_package[w]
# We are guaranteed that: other_pkg in updates
# ...based on the logic above. But we only want to show one
# update in this case (given the wording of "at least") below.
# As an example, consider a package installed twice:
# foobar.x86_64, foobar.i686
# We want to avoid having both:
# ('foo*', 'because of (at least) foobar-1.x86_64 from repo')
# ('foo*', 'because of (at least) foobar-1.i686 from repo')
# We just pick the first one.
#
# TODO: This is something that might be nice to change, but it
# would be a module UI change. But without it, we're
# dropping potentially important information about what
# was updated. Instead of (given_spec, random_matching_package)
# it'd be nice if we appended (given_spec, [all_matching_packages])
#
# ... But then, we also drop information if multiple
# different (distinct) packages match the given spec and
# we should probably fix that too.
pkg = updates[other_pkg][0]
to_update.append(
(
w,
'because of (at least) %s-%s.%s from %s' % (
other_pkg,
pkg['version'],
pkg['dist'],
pkg['repo']
)
)
)
else:
# Otherwise the spec is an exact match
for pkg in updates[w]:
to_update.append(
(
w,
'%s.%s from %s' % (
pkg['version'],
pkg['dist'],
pkg['repo']
)
)
)
if self.update_only:
res['changes'] = dict(installed=[], updated=to_update)
else:
res['changes'] = dict(installed=pkgs['install'], updated=to_update)
if obsoletes:
res['obsoletes'] = obsoletes
# return results before we actually execute stuff
if self.module.check_mode:
if will_update or pkgs['install']:
res['changed'] = True
return res
if self.releasever:
cmd.extend(['--releasever=%s' % self.releasever])
# run commands
if cmd: # update all
rc, out, err = self.module.run_command(cmd)
res['changed'] = True
elif self.update_only:
if pkgs['update']:
cmd = self.yum_basecmd + ['update'] + pkgs['update']
locale = get_best_parsable_locale(self.module)
lang_env = dict(LANG=locale, LC_ALL=locale, LC_MESSAGES=locale)
rc, out, err = self.module.run_command(cmd, environ_update=lang_env)
out_lower = out.strip().lower()
if not out_lower.endswith("no packages marked for update") and \
not out_lower.endswith("nothing to do"):
res['changed'] = True
else:
rc, out, err = [0, '', '']
elif pkgs['install'] or will_update and not self.update_only:
cmd = self.yum_basecmd + ['install'] + pkgs['install'] + pkgs['update']
locale = get_best_parsable_locale(self.module)
lang_env = dict(LANG=locale, LC_ALL=locale, LC_MESSAGES=locale)
rc, out, err = self.module.run_command(cmd, environ_update=lang_env)
out_lower = out.strip().lower()
if not out_lower.endswith("no packages marked for update") and \
not out_lower.endswith("nothing to do"):
res['changed'] = True
else:
rc, out, err = [0, '', '']
res['rc'] = rc
res['msg'] += err
res['results'].append(out)
if rc:
res['failed'] = True
return res
def ensure(self, repoq):
pkgs = self.names
# autoremove was provided without `name`
if not self.names and self.autoremove:
pkgs = []
self.state = 'absent'
if self.conf_file and os.path.exists(self.conf_file):
self.yum_basecmd += ['-c', self.conf_file]
if repoq:
repoq += ['-c', self.conf_file]
if self.skip_broken:
self.yum_basecmd.extend(['--skip-broken'])
if self.disablerepo:
self.yum_basecmd.extend(['--disablerepo=%s' % ','.join(self.disablerepo)])
if self.enablerepo:
self.yum_basecmd.extend(['--enablerepo=%s' % ','.join(self.enablerepo)])
if self.enable_plugin:
self.yum_basecmd.extend(['--enableplugin', ','.join(self.enable_plugin)])
if self.disable_plugin:
self.yum_basecmd.extend(['--disableplugin', ','.join(self.disable_plugin)])
if self.exclude:
e_cmd = ['--exclude=%s' % ','.join(self.exclude)]
self.yum_basecmd.extend(e_cmd)
if self.disable_excludes:
self.yum_basecmd.extend(['--disableexcludes=%s' % self.disable_excludes])
if self.cacheonly:
self.yum_basecmd.extend(['--cacheonly'])
if self.download_only:
self.yum_basecmd.extend(['--downloadonly'])
if self.download_dir:
self.yum_basecmd.extend(['--downloaddir=%s' % self.download_dir])
if self.releasever:
self.yum_basecmd.extend(['--releasever=%s' % self.releasever])
if self.installroot != '/':
# do not setup installroot by default, because of error
# CRITICAL:yum.cli:Config Error: Error accessing file for config file:////etc/yum.conf
# in old yum version (like in CentOS 6.6)
e_cmd = ['--installroot=%s' % self.installroot]
self.yum_basecmd.extend(e_cmd)
if self.state in ('installed', 'present', 'latest'):
""" The need of this entire if conditional has to be changed
this function is the ensure function that is called
in the main section.
This conditional tends to disable/enable repo for
install present latest action, same actually
can be done for remove and absent action
As solution I would advice to cal
try: self.yum_base.repos.disableRepo(disablerepo)
and
try: self.yum_base.repos.enableRepo(enablerepo)
right before any yum_cmd is actually called regardless
of yum action.
Please note that enable/disablerepo options are general
options, this means that we can call those with any action
option. https://linux.die.net/man/8/yum
This docstring will be removed together when issue: #21619
will be solved.
This has been triggered by: #19587
"""
if self.update_cache:
self.module.run_command(self.yum_basecmd + ['clean', 'expire-cache'])
try:
current_repos = self.yum_base.repos.repos.keys()
if self.enablerepo:
try:
new_repos = self.yum_base.repos.repos.keys()
for i in new_repos:
if i not in current_repos:
rid = self.yum_base.repos.getRepo(i)
a = rid.repoXML.repoid # nopep8 - https://github.com/ansible/ansible/pull/21475#pullrequestreview-22404868
current_repos = new_repos
except yum.Errors.YumBaseError as e:
self.module.fail_json(msg="Error setting/accessing repos: %s" % to_native(e))
except yum.Errors.YumBaseError as e:
self.module.fail_json(msg="Error accessing repos: %s" % to_native(e))
if self.state == 'latest' or self.update_only:
if self.disable_gpg_check:
self.yum_basecmd.append('--nogpgcheck')
if self.security:
self.yum_basecmd.append('--security')
if self.bugfix:
self.yum_basecmd.append('--bugfix')
res = self.latest(pkgs, repoq)
elif self.state in ('installed', 'present'):
if self.disable_gpg_check:
self.yum_basecmd.append('--nogpgcheck')
res = self.install(pkgs, repoq)
elif self.state in ('removed', 'absent'):
res = self.remove(pkgs, repoq)
else:
# should be caught by AnsibleModule argument_spec
self.module.fail_json(
msg="we should never get here unless this all failed",
changed=False,
results='',
errors='unexpected state'
)
return res
@staticmethod
def has_yum():
return HAS_YUM_PYTHON
def run(self):
"""
actually execute the module code backend
"""
if (not HAS_RPM_PYTHON or not HAS_YUM_PYTHON) and sys.executable != '/usr/bin/python' and not has_respawned():
respawn_module('/usr/bin/python')
# end of the line for this process; we'll exit here once the respawned module has completed
error_msgs = []
if not HAS_RPM_PYTHON:
error_msgs.append('The Python 2 bindings for rpm are needed for this module. If you require Python 3 support use the `dnf` Ansible module instead.')
if not HAS_YUM_PYTHON:
error_msgs.append('The Python 2 yum module is needed for this module. If you require Python 3 support use the `dnf` Ansible module instead.')
self.wait_for_lock()
if error_msgs:
self.module.fail_json(msg='. '.join(error_msgs))
# fedora will redirect yum to dnf, which has incompatibilities
# with how this module expects yum to operate. If yum-deprecated
# is available, use that instead to emulate the old behaviors.
if self.module.get_bin_path('yum-deprecated'):
yumbin = self.module.get_bin_path('yum-deprecated')
else:
yumbin = self.module.get_bin_path('yum')
# need debug level 2 to get 'Nothing to do' for groupinstall.
self.yum_basecmd = [yumbin, '-d', '2', '-y']
if self.update_cache and not self.names and not self.list:
rc, stdout, stderr = self.module.run_command(self.yum_basecmd + ['clean', 'expire-cache'])
if rc == 0:
self.module.exit_json(
changed=False,
msg="Cache updated",
rc=rc,
results=[]
)
else:
self.module.exit_json(
changed=False,
msg="Failed to update cache",
rc=rc,
results=[stderr],
)
repoquerybin = self.module.get_bin_path('repoquery', required=False)
if self.install_repoquery and not repoquerybin and not self.module.check_mode:
yum_path = self.module.get_bin_path('yum')
if yum_path:
if self.releasever:
self.module.run_command('%s -y install yum-utils --releasever %s' % (yum_path, self.releasever))
else:
self.module.run_command('%s -y install yum-utils' % yum_path)
repoquerybin = self.module.get_bin_path('repoquery', required=False)
if self.list:
if not repoquerybin:
self.module.fail_json(msg="repoquery is required to use list= with this module. Please install the yum-utils package.")
results = {'results': self.list_stuff(repoquerybin, self.list)}
else:
# If rhn-plugin is installed and no rhn-certificate is available on
# the system then users will see an error message using the yum API.
# Use repoquery in those cases.
repoquery = None
try:
yum_plugins = self.yum_base.plugins._plugins
except AttributeError:
pass
else:
if 'rhnplugin' in yum_plugins:
if repoquerybin:
repoquery = [repoquerybin, '--show-duplicates', '--plugins', '--quiet']
if self.installroot != '/':
repoquery.extend(['--installroot', self.installroot])
if self.disable_excludes:
# repoquery does not support --disableexcludes,
# so make a temp copy of yum.conf and get rid of the 'exclude=' line there
try:
with open('/etc/yum.conf', 'r') as f:
content = f.readlines()
tmp_conf_file = tempfile.NamedTemporaryFile(dir=self.module.tmpdir, delete=False)
self.module.add_cleanup_file(tmp_conf_file.name)
tmp_conf_file.writelines([c for c in content if not c.startswith("exclude=")])
tmp_conf_file.close()
except Exception as e:
self.module.fail_json(msg="Failure setting up repoquery: %s" % to_native(e))
repoquery.extend(['-c', tmp_conf_file.name])
results = self.ensure(repoquery)
if repoquery:
results['msg'] = '%s %s' % (
results.get('msg', ''),
'Warning: Due to potential bad behaviour with rhnplugin and certificates, used slower repoquery calls instead of Yum API.'
)
self.module.exit_json(**results)
def main():
# state=installed name=pkgspec
# state=removed name=pkgspec
# state=latest name=pkgspec
#
# informational commands:
# list=installed
# list=updates
# list=available
# list=repos
# list=pkgspec
yumdnf_argument_spec['argument_spec']['use_backend'] = dict(default='auto', choices=['auto', 'yum', 'yum4', 'dnf'])
module = AnsibleModule(
**yumdnf_argument_spec
)
module_implementation = YumModule(module)
module_implementation.run()
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,449 |
Little glitch in the Tests page
|
### Summary
In the page https://docs.ansible.com/ansible/latest/user_guide/playbooks_tests.html, and more specifically in the section https://docs.ansible.com/ansible/latest/user_guide/playbooks_tests.html#testing-if-a-list-value-is-true, the version in which the feature has been added seems to be above the title, when it should be below it.
If someone confirms that
> New in version 2.4.
indeed relates to
> Testing if a list value is True
Then, I'd be happy to open a PR fixing it.
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/user_guide/playbooks_tests.rst
### Ansible Version
```console
latest in the docs
```
### Additional Information
Make it clear that the version stated indeed relates to the section _Testing if a list value is True_ and not to the section above, _Testing if a list contains a value_
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76449
|
https://github.com/ansible/ansible/pull/76539
|
461f30c16012b7c6f2421047a44ef0e2d842617e
|
8a562ea14ac25778fbec4a4aef417a2dcc44abc6
| 2021-12-02T22:21:11Z |
python
| 2021-12-13T20:23:50Z |
docs/docsite/rst/user_guide/playbooks_tests.rst
|
.. _playbooks_tests:
*****
Tests
*****
`Tests <https://jinja.palletsprojects.com/en/latest/templates/#tests>`_ in Jinja are a way of evaluating template expressions and returning True or False. Jinja ships with many of these. See `builtin tests`_ in the official Jinja template documentation.
The main difference between tests and filters is that Jinja tests are used for comparisons, whereas filters are used for data manipulation; they have different applications in Jinja. Tests can also be used in list processing filters, like ``map()`` and ``select()``, to choose items in the list.
Like all templating, tests always execute on the Ansible controller, **not** on the target of a task, as they test local data.
In addition to those Jinja2 tests, Ansible supplies a few more and users can easily create their own.
.. contents::
:local:
.. _test_syntax:
Test syntax
===========
`Test syntax <https://jinja.palletsprojects.com/en/latest/templates/#tests>`_ varies from `filter syntax <https://jinja.palletsprojects.com/en/latest/templates/#filters>`_ (``variable | filter``). Historically, Ansible has registered tests as both Jinja tests and Jinja filters, allowing for them to be referenced using filter syntax.
As of Ansible 2.5, using a Jinja test as a filter will generate a warning.
The syntax for using a Jinja test is as follows
.. code-block:: console
variable is test_name
Such as
.. code-block:: console
result is failed
.. _testing_strings:
Testing strings
===============
To match strings against a substring or a regular expression, use the ``match``, ``search`` or ``regex`` tests
.. code-block:: yaml
vars:
url: "https://example.com/users/foo/resources/bar"
tasks:
- debug:
msg: "matched pattern 1"
when: url is match("https://example.com/users/.*/resources")
- debug:
msg: "matched pattern 2"
when: url is search("users/.*/resources/.*")
- debug:
msg: "matched pattern 3"
when: url is search("users")
- debug:
msg: "matched pattern 4"
when: url is regex("example\.com/\w+/foo")
``match`` succeeds if it finds the pattern at the beginning of the string, while ``search`` succeeds if it finds the pattern anywhere within the string. By default, ``regex`` works like ``search``, but ``regex`` can be configured to perform other tests as well, by passing the ``match_type`` keyword argument. In particular, ``match_type`` determines the ``re`` method that gets used to perform the search. The full list can be found in the relevant Python documentation `here <https://docs.python.org/3/library/re.html#regular-expression-objects>`_.
All of the string tests also take optional ``ignorecase`` and ``multiline`` arguments. These correspond to ``re.I`` and ``re.M`` from Python's ``re`` library, respectively.
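These string tests map onto Python's ``re`` module. A minimal illustrative sketch of the roughly equivalent plain-Python calls (using the example URL above; the tests themselves run inside the template engine):
.. code-block:: python
    import re
    url = "https://example.com/users/foo/resources/bar"
    # 'is match' anchors the pattern at the beginning of the string
    assert re.match(r"https://example\.com/users/.*/resources", url)
    # 'is search' matches the pattern anywhere in the string
    assert re.search(r"users/.*/resources/.*", url)
    # ignorecase and multiline correspond to re.I and re.M
    assert re.search(r"USERS", url, re.I)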
.. _testing_vault:
Vault
=====
.. versionadded:: 2.10
You can test whether a variable is an inline single vault encrypted value using the ``vault_encrypted`` test.
.. code-block:: yaml
vars:
variable: !vault |
$ANSIBLE_VAULT;1.2;AES256;dev
61323931353866666336306139373937316366366138656131323863373866376666353364373761
3539633234313836346435323766306164626134376564330a373530313635343535343133316133
36643666306434616266376434363239346433643238336464643566386135356334303736353136
6565633133366366360a326566323363363936613664616364623437336130623133343530333739
3039
tasks:
- debug:
msg: '{{ (variable is vault_encrypted) | ternary("Vault encrypted", "Not vault encrypted") }}'
.. _testing_truthiness:
Testing truthiness
==================
.. versionadded:: 2.10
As of Ansible 2.10, you can perform Python-like truthy and falsy checks.
.. code-block:: yaml
- debug:
msg: "Truthy"
when: value is truthy
vars:
value: "some string"
- debug:
msg: "Falsy"
when: value is falsy
vars:
value: ""
Additionally, the ``truthy`` and ``falsy`` tests accept an optional parameter called ``convert_bool`` that will attempt
to convert boolean indicators to actual booleans.
.. code-block:: yaml
- debug:
msg: "Truthy"
when: value is truthy(convert_bool=True)
vars:
value: "yes"
- debug:
msg: "Falsy"
when: value is falsy(convert_bool=True)
vars:
value: "off"
.. _testing_versions:
Comparing versions
==================
.. versionadded:: 1.6
.. note:: In 2.5 ``version_compare`` was renamed to ``version``
To compare a version number, such as checking if the ``ansible_facts['distribution_version']``
version is greater than or equal to '12.04', you can use the ``version`` test.
The ``version`` test can also be used to evaluate the ``ansible_facts['distribution_version']``
.. code-block:: yaml+jinja
{{ ansible_facts['distribution_version'] is version('12.04', '>=') }}
If ``ansible_facts['distribution_version']`` is greater than or equal to 12.04, this test returns True, otherwise False.
The ``version`` test accepts the following operators
.. code-block:: console
<, lt, <=, le, >, gt, >=, ge, ==, =, eq, !=, <>, ne
This test also accepts a third parameter, ``strict``, which defines whether strict version parsing as defined by ``distutils.version.StrictVersion`` should be used. The default is ``False`` (using ``distutils.version.LooseVersion``); ``True`` enables strict version parsing
.. code-block:: yaml+jinja
{{ sample_version_var is version('1.0', operator='lt', strict=True) }}
As of Ansible 2.11 the ``version`` test accepts a ``version_type`` parameter which is mutually exclusive with ``strict``, and accepts the following values
.. code-block:: console
loose, strict, semver, semantic
Using ``version_type`` to compare a semantic version would be achieved like the following
.. code-block:: yaml+jinja
{{ sample_semver_var is version('2.0.0-rc.1+build.123', 'lt', version_type='semver') }}
When using ``version`` in a playbook or role, don't use ``{{ }}`` as described in the `FAQ <https://docs.ansible.com/ansible/latest/reference_appendices/faq.html#when-should-i-use-also-how-to-interpolate-variables-or-dynamic-variable-names>`_
.. code-block:: yaml
vars:
my_version: 1.2.3
tasks:
- debug:
msg: "my_version is higher than 1.0.0"
when: my_version is version('1.0.0', '>')
.. _math_tests:
Set theory tests
================
.. versionadded:: 2.1
.. note:: In 2.5 ``issubset`` and ``issuperset`` were renamed to ``subset`` and ``superset``
To see if a list includes or is included by another list, you can use 'subset' and 'superset'
.. code-block:: yaml
vars:
a: [1,2,3,4,5]
b: [2,3]
tasks:
- debug:
msg: "A includes B"
when: a is superset(b)
- debug:
msg: "B is included in A"
when: b is subset(a)
.. _contains_test:
Testing if a list contains a value
==================================
.. versionadded:: 2.8
Ansible includes a ``contains`` test which operates similarly to, but in reverse of, the Jinja2-provided ``in`` test.
The ``contains`` test is designed to work with the ``select``, ``reject``, ``selectattr``, and ``rejectattr`` filters
.. code-block:: yaml
vars:
lacp_groups:
- master: lacp0
network: 10.65.100.0/24
gateway: 10.65.100.1
dns4:
- 10.65.100.10
- 10.65.100.11
interfaces:
- em1
- em2
- master: lacp1
network: 10.65.120.0/24
gateway: 10.65.120.1
dns4:
- 10.65.100.10
- 10.65.100.11
interfaces:
- em3
- em4
tasks:
- debug:
msg: "{{ (lacp_groups|selectattr('interfaces', 'contains', 'em1')|first).master }}"
Testing if a list value is True
===============================
.. versionadded:: 2.4
You can use ``any`` and ``all`` to check whether any or all elements in a list are true
.. code-block:: yaml
vars:
mylist:
- 1
- "{{ 3 == 3 }}"
- True
myotherlist:
- False
- True
tasks:
- debug:
msg: "all are true!"
when: mylist is all
- debug:
msg: "at least one is true"
when: myotherlist is any
.. _path_tests:
Testing paths
=============
.. note:: In 2.5 the following tests were renamed to remove the ``is_`` prefix
The following tests can provide information about a path on the controller
.. code-block:: yaml
- debug:
msg: "path is a directory"
when: mypath is directory
- debug:
msg: "path is a file"
when: mypath is file
- debug:
msg: "path is a symlink"
when: mypath is link
- debug:
msg: "path already exists"
when: mypath is exists
- debug:
msg: "path is {{ (mypath is abs)|ternary('absolute','relative')}}"
- debug:
msg: "path is the same file as path2"
when: mypath is same_file(path2)
- debug:
msg: "path is a mount"
when: mypath is mount
Testing size formats
====================
The ``human_readable`` and ``human_to_bytes`` functions let you test your
playbooks to make sure you are using the right size format in your tasks, and that
you provide Byte format to computers and human-readable format to people.
Human readable
--------------
Asserts whether the given string is human readable or not.
For example
.. code-block:: yaml+jinja
- name: "Human Readable"
assert:
that:
- '"1.00 Bytes" == 1|human_readable'
- '"1.00 bits" == 1|human_readable(isbits=True)'
- '"10.00 KB" == 10240|human_readable'
- '"97.66 MB" == 102400000|human_readable'
- '"0.10 GB" == 102400000|human_readable(unit="G")'
- '"0.10 Gb" == 102400000|human_readable(isbits=True, unit="G")'
This would result in
.. code-block:: json
{ "changed": false, "msg": "All assertions passed" }
Human to bytes
--------------
Returns the given string in the Bytes format.
For example
.. code-block:: yaml+jinja
- name: "Human to Bytes"
assert:
that:
- "{{'0'|human_to_bytes}} == 0"
- "{{'0.1'|human_to_bytes}} == 0"
- "{{'0.9'|human_to_bytes}} == 1"
- "{{'1'|human_to_bytes}} == 1"
- "{{'10.00 KB'|human_to_bytes}} == 10240"
- "{{ '11 MB'|human_to_bytes}} == 11534336"
- "{{ '1.1 GB'|human_to_bytes}} == 1181116006"
- "{{'10.00 Kb'|human_to_bytes(isbits=True)}} == 10240"
This would result in
.. code-block:: json
{ "changed": false, "msg": "All assertions passed" }
.. _test_task_results:
Testing task results
====================
The following tasks are illustrative of the tests meant to check the status of tasks
.. code-block:: yaml
tasks:
- shell: /usr/bin/foo
register: result
ignore_errors: True
- debug:
msg: "it failed"
when: result is failed
# in most cases you'll want a handler, but if you want to do something right now, this is nice
- debug:
msg: "it changed"
when: result is changed
- debug:
msg: "it succeeded in Ansible >= 2.1"
when: result is succeeded
- debug:
msg: "it succeeded"
when: result is success
- debug:
msg: "it was skipped"
when: result is skipped
.. note:: From 2.1, you can also use success, failure, change, and skip so that the grammar matches, for those who need to be strict about it.
.. _builtin tests: https://jinja.palletsprojects.com/en/latest/templates/#builtin-tests
.. seealso::
:ref:`playbooks_intro`
An introduction to playbooks
:ref:`playbooks_conditionals`
Conditional statements in playbooks
:ref:`playbooks_variables`
All about variables
:ref:`playbooks_loops`
Looping in playbooks
:ref:`playbooks_reuse_roles`
Playbook organization by roles
:ref:`playbooks_best_practices`
Tips and tricks for playbooks
`User Mailing List <https://groups.google.com/group/ansible-devel>`_
Have a question? Stop by the google group!
:ref:`communication_irc`
How to join Ansible chat channels
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,530 |
ansible 2.12.1 --connection-password-file or --become-password-file does not work
|
### Summary
ansible-playbook with --connection-password-file or --become-password-file options fails with error:
File "venv/lib/python3.8/site-packages/ansible/cli/__init__.py", line 516, in get_password_from_file
elif os.path.is_executable(b_pwd_file):
AttributeError: module 'posixpath' has no attribute 'is_executable'
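For reference, Python's `os.path` has no `is_executable` attribute, so this call can never succeed. A minimal sketch of the kind of check that works in plain Python (an assumption for illustration, not necessarily the exact fix applied in the linked PR):
```python
import os

def is_executable(path):
    # plain-Python executable check; os.path itself has no is_executable()
    return os.path.isfile(path) and os.access(path, os.X_OK)
```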
### Issue Type
Bug Report
### Component Name
https://github.com/ansible/ansible/blob/devel/lib/ansible/cli/__init__.py#L543
### Ansible Version
```console
ansible [core 2.12.1]
config file = ***/ansible/ansible.cfg
configured module search path = ['.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = ***/ansible/venv/lib/python3.8/site-packages/ansible
ansible collection location = .ansible/collections:/usr/share/ansible/collections
executable location = ***/ansible/venv/bin/ansible
python version = 3.8.6 (default, Nov 20 2020, 18:02:11) [Clang 11.0.0 (clang-1100.0.33.17)]
jinja version = 2.11.1
libyaml = False
```
### Configuration
```console
HOST_KEY_CHECKING=False
```
### OS / Environment
Mac OS 11.6.1
### Steps to Reproduce
1. Implement a simple playbook, ping.yml:
````yaml
- hosts: all
become: true
tasks:
- ping
````
2. Run command ````ansible-playbook -i hosts ping.yml --connection-password-file file````
3. Error occurs
### Expected Results
I expect ansible-playbook command to load password from file.
### Actual Results
```console
ERROR! Unexpected Exception, this is probably a bug: module 'posixpath' has no attribute 'is_executable'
the full traceback was:
Traceback (most recent call last):
File "***/ansible/venv/bin/ansible-playbook", line 128, in <module>
exit_code = cli.run()
File "***/ansible/venv/lib/python3.8/site-packages/ansible/cli/playbook.py", line 114, in run
(sshpass, becomepass) = self.ask_passwords()
File "***/ansible/venv/lib/python3.8/site-packages/ansible/cli/__init__.py", line 257, in ask_passwords
sshpass = CLI.get_password_from_file(op['connection_password_file'])
File "***/ansible/venv/lib/python3.8/site-packages/ansible/cli/__init__.py", line 516, in get_password_from_file
elif os.path.is_executable(b_pwd_file):
AttributeError: module 'posixpath' has no attribute 'is_executable'
(venv) ➜ ansible git:(PML-4558-lv-qa-prod-ansible) ✗ ansible --version
ansible [core 2.12.1]
config file = ***/ansible/ansible.cfg
configured module search path = ['***/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = ***/ansible/venv/lib/python3.8/site-packages/ansible
ansible collection location = ***/.ansible/collections:/usr/share/ansible/collections
executable location = ***/ansible/venv/bin/ansible
python version = 3.8.6 (default, Nov 20 2020, 18:02:11) [Clang 11.0.0 (clang-1100.0.33.17)]
jinja version = 2.11.1
libyaml = False
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76530
|
https://github.com/ansible/ansible/pull/76534
|
8a562ea14ac25778fbec4a4aef417a2dcc44abc6
|
ac2bdd68346634dbc19c9b22ed744fdb01edc249
| 2021-12-10T13:25:56Z |
python
| 2021-12-14T15:07:36Z |
changelogs/fragments/76530-connection-password-file-tb-fix.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,530 |
ansible 2.12.1 --connection-password-file or --become-password-file does not work
|
|
https://github.com/ansible/ansible/issues/76530
|
https://github.com/ansible/ansible/pull/76534
|
8a562ea14ac25778fbec4a4aef417a2dcc44abc6
|
ac2bdd68346634dbc19c9b22ed744fdb01edc249
| 2021-12-10T13:25:56Z |
python
| 2021-12-14T15:07:36Z |
lib/ansible/cli/__init__.py
|
# Copyright: (c) 2012-2014, Michael DeHaan <[email protected]>
# Copyright: (c) 2016, Toshio Kuratomi <[email protected]>
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import sys
# Used for determining if the system is running a new enough python version
# and should only restrict on our documented minimum versions
if sys.version_info < (3, 8):
raise SystemExit(
'ERROR: Ansible requires Python 3.8 or newer on the controller. '
'Current version: %s' % ''.join(sys.version.splitlines())
)
from importlib.metadata import version
from ansible.module_utils.compat.version import LooseVersion
# Used for determining if the system is running a new enough Jinja2 version
# and should only restrict on our documented minimum versions
jinja2_version = version('jinja2')
if jinja2_version < LooseVersion('3.0'):
raise SystemExit(
'ERROR: Ansible requires Jinja2 3.0 or newer on the controller. '
'Current version: %s' % jinja2_version
)
import errno
import getpass
import os
import subprocess
import traceback
from abc import ABC, abstractmethod
from pathlib import Path
try:
from ansible import constants as C
from ansible.utils.display import Display, initialize_locale
initialize_locale()
display = Display()
except Exception as e:
print('ERROR: %s' % e, file=sys.stderr)
sys.exit(5)
from ansible import context
from ansible.cli.arguments import option_helpers as opt_help
from ansible.errors import AnsibleError, AnsibleOptionsError, AnsibleParserError
from ansible.inventory.manager import InventoryManager
from ansible.module_utils.six import string_types
from ansible.module_utils._text import to_bytes, to_text
from ansible.parsing.dataloader import DataLoader
from ansible.parsing.vault import PromptVaultSecret, get_file_vault_secret
from ansible.plugins.loader import add_all_plugin_dirs
from ansible.release import __version__
from ansible.utils.collection_loader import AnsibleCollectionConfig
from ansible.utils.collection_loader._collection_finder import _get_collection_name_from_path
from ansible.utils.path import unfrackpath
from ansible.utils.unsafe_proxy import to_unsafe_text
from ansible.vars.manager import VariableManager
try:
import argcomplete
HAS_ARGCOMPLETE = True
except ImportError:
HAS_ARGCOMPLETE = False
class CLI(ABC):
''' code behind bin/ansible* programs '''
PAGER = 'less'
# -F (quit-if-one-screen) -R (allow raw ansi control chars)
# -S (chop long lines) -X (disable termcap init and de-init)
LESS_OPTS = 'FRSX'
SKIP_INVENTORY_DEFAULTS = False
def __init__(self, args, callback=None):
"""
Base init method for all command line programs
"""
if not args:
raise ValueError('A non-empty list for args is required')
self.args = args
self.parser = None
self.callback = callback
if C.DEVEL_WARNING and __version__.endswith('dev0'):
display.warning(
'You are running the development version of Ansible. You should only run Ansible from "devel" if '
'you are modifying the Ansible engine, or trying out features under development. This is a rapidly '
'changing source of code and can become unstable at any point.'
)
@abstractmethod
def run(self):
"""Run the ansible command
Subclasses must implement this method. It does the actual work of
running an Ansible command.
"""
self.parse()
display.vv(to_text(opt_help.version(self.parser.prog)))
if C.CONFIG_FILE:
display.v(u"Using %s as config file" % to_text(C.CONFIG_FILE))
else:
display.v(u"No config file found; using defaults")
# warn about deprecated config options
for deprecated in C.config.DEPRECATED:
name = deprecated[0]
why = deprecated[1]['why']
if 'alternatives' in deprecated[1]:
alt = ', use %s instead' % deprecated[1]['alternatives']
else:
alt = ''
ver = deprecated[1].get('version')
date = deprecated[1].get('date')
collection_name = deprecated[1].get('collection_name')
display.deprecated("%s option, %s%s" % (name, why, alt),
version=ver, date=date, collection_name=collection_name)
@staticmethod
def split_vault_id(vault_id):
# return (before_@, after_@)
# if no @, return whole string as after_
if '@' not in vault_id:
return (None, vault_id)
parts = vault_id.split('@', 1)
ret = tuple(parts)
return ret
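# Example (illustrative): 'dev@~/.vault_pass' -> ('dev', '~/.vault_pass');
# a bare 'prompt' (no '@') -> (None, 'prompt')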
@staticmethod
def build_vault_ids(vault_ids, vault_password_files=None,
ask_vault_pass=None, create_new_password=None,
auto_prompt=True):
vault_password_files = vault_password_files or []
vault_ids = vault_ids or []
# convert vault_password_files into vault_ids slugs
for password_file in vault_password_files:
id_slug = u'%s@%s' % (C.DEFAULT_VAULT_IDENTITY, password_file)
# note this makes --vault-id higher precedence than --vault-password-file
# if we want to intertwingle them in order probably need a cli callback to populate vault_ids
# used by --vault-id and --vault-password-file
vault_ids.append(id_slug)
# if an action needs an encrypt password (create_new_password=True) and we don't
# have other secrets set up, then automatically add a password prompt as well.
# prompts can't/shouldn't work without a tty, so don't add prompt secrets
if ask_vault_pass or (not vault_ids and auto_prompt):
id_slug = u'%s@%s' % (C.DEFAULT_VAULT_IDENTITY, u'prompt_ask_vault_pass')
vault_ids.append(id_slug)
return vault_ids
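# Example (illustrative, assuming the default identity is 'default'):
# build_vault_ids([], ['~/.vault_pass']) -> ['default@~/.vault_pass']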
# TODO: remove the now unused args
@staticmethod
def setup_vault_secrets(loader, vault_ids, vault_password_files=None,
ask_vault_pass=None, create_new_password=False,
auto_prompt=True):
# list of tuples
vault_secrets = []
# Depending on the vault_id value (including how --ask-vault-pass / --vault-password-file create a vault_id)
# we need to show different prompts. This is for compat with older Towers that expect a
# certain vault password prompt format, so the 'prompt_ask_vault_pass' vault_id gets the old format.
prompt_formats = {}
# If there are configured default vault identities, they are considered 'first'
# so we prepend them to vault_ids (from cli) here
vault_password_files = vault_password_files or []
if C.DEFAULT_VAULT_PASSWORD_FILE:
vault_password_files.append(C.DEFAULT_VAULT_PASSWORD_FILE)
if create_new_password:
prompt_formats['prompt'] = ['New vault password (%(vault_id)s): ',
'Confirm new vault password (%(vault_id)s): ']
# 2.3 format prompts for --ask-vault-pass
prompt_formats['prompt_ask_vault_pass'] = ['New Vault password: ',
'Confirm New Vault password: ']
else:
prompt_formats['prompt'] = ['Vault password (%(vault_id)s): ']
# The format when we use just --ask-vault-pass needs to match 'Vault password:\s*?$'
prompt_formats['prompt_ask_vault_pass'] = ['Vault password: ']
vault_ids = CLI.build_vault_ids(vault_ids,
vault_password_files,
ask_vault_pass,
create_new_password,
auto_prompt=auto_prompt)
last_exception = found_vault_secret = None
for vault_id_slug in vault_ids:
vault_id_name, vault_id_value = CLI.split_vault_id(vault_id_slug)
if vault_id_value in ['prompt', 'prompt_ask_vault_pass']:
# --vault-id some_name@prompt_ask_vault_pass --vault-id other_name@prompt_ask_vault_pass will be a little
# confusing since it will use the old format without the vault id in the prompt
built_vault_id = vault_id_name or C.DEFAULT_VAULT_IDENTITY
# choose the prompt based on --vault-id=prompt or --ask-vault-pass. --ask-vault-pass
# always gets the old format for Tower compatibility.
# ie, we used --ask-vault-pass, so we need to use the old vault password prompt
# format since Tower needs to match on that format.
prompted_vault_secret = PromptVaultSecret(prompt_formats=prompt_formats[vault_id_value],
vault_id=built_vault_id)
# an empty or invalid password from the prompt will warn and continue to the next
# without erroring globally
try:
prompted_vault_secret.load()
except AnsibleError as exc:
display.warning('Error in vault password prompt (%s): %s' % (vault_id_name, exc))
raise
found_vault_secret = True
vault_secrets.append((built_vault_id, prompted_vault_secret))
# update loader with new secrets incrementally, so we can load a vault password
# that is encrypted with a vault secret provided earlier
loader.set_vault_secrets(vault_secrets)
continue
# assuming anything else is a password file
display.vvvvv('Reading vault password file: %s' % vault_id_value)
# read vault_pass from a file
try:
file_vault_secret = get_file_vault_secret(filename=vault_id_value,
vault_id=vault_id_name,
loader=loader)
except AnsibleError as exc:
display.warning('Error getting vault password file (%s): %s' % (vault_id_name, to_text(exc)))
last_exception = exc
continue
try:
file_vault_secret.load()
except AnsibleError as exc:
display.warning('Error in vault password file loading (%s): %s' % (vault_id_name, to_text(exc)))
last_exception = exc
continue
found_vault_secret = True
if vault_id_name:
vault_secrets.append((vault_id_name, file_vault_secret))
else:
vault_secrets.append((C.DEFAULT_VAULT_IDENTITY, file_vault_secret))
# update loader with as-yet-known vault secrets
loader.set_vault_secrets(vault_secrets)
# An invalid or missing password file will error globally
# if no valid vault secret was found.
if last_exception and not found_vault_secret:
raise last_exception
return vault_secrets
@staticmethod
def _get_secret(prompt):
secret = getpass.getpass(prompt=prompt)
if secret:
secret = to_unsafe_text(secret)
return secret
@staticmethod
def ask_passwords():
''' prompt for connection and become passwords if needed '''
op = context.CLIARGS
sshpass = None
becomepass = None
become_prompt = ''
become_prompt_method = "BECOME" if C.AGNOSTIC_BECOME_PROMPT else op['become_method'].upper()
try:
become_prompt = "%s password: " % become_prompt_method
if op['ask_pass']:
sshpass = CLI._get_secret("SSH password: ")
become_prompt = "%s password[defaults to SSH password]: " % become_prompt_method
elif op['connection_password_file']:
sshpass = CLI.get_password_from_file(op['connection_password_file'])
if op['become_ask_pass']:
becomepass = CLI._get_secret(become_prompt)
if op['ask_pass'] and becomepass == '':
becomepass = sshpass
elif op['become_password_file']:
becomepass = CLI.get_password_from_file(op['become_password_file'])
except EOFError:
pass
return (sshpass, becomepass)
def validate_conflicts(self, op, runas_opts=False, fork_opts=False):
''' check for conflicting options '''
if fork_opts:
if op.forks < 1:
self.parser.error("The number of processes (--forks) must be >= 1")
return op
@abstractmethod
def init_parser(self, usage="", desc=None, epilog=None):
"""
Create an options parser for most ansible scripts
Subclasses need to implement this method. They will usually call the base class's
init_parser to create a basic version and then add their own options on top of that.
An implementation will look something like this::
def init_parser(self):
super(MyCLI, self).init_parser(usage="My Ansible CLI", inventory_opts=True)
ansible.arguments.option_helpers.add_runas_options(self.parser)
self.parser.add_option('--my-option', dest='my_option', action='store')
"""
self.parser = opt_help.create_base_parser(self.name, usage=usage, desc=desc, epilog=epilog)
@abstractmethod
def post_process_args(self, options):
"""Process the command line args
Subclasses need to implement this method. This method validates and transforms the command
line arguments. It can be used to check whether conflicting values were given, whether filenames
exist, etc.
An implementation will look something like this::
def post_process_args(self, options):
options = super(MyCLI, self).post_process_args(options)
if options.addition and options.subtraction:
raise AnsibleOptionsError('Only one of --addition and --subtraction can be specified')
if isinstance(options.listofhosts, string_types):
options.listofhosts = options.listofhosts.split(',')
return options
"""
# process tags
if hasattr(options, 'tags') and not options.tags:
# optparse defaults do not do what's expected
# More specifically, we want `--tags` to be additive. So we cannot
# simply change C.TAGS_RUN's default to ["all"] because then passing
# --tags foo would cause us to have ['all', 'foo']
options.tags = ['all']
if hasattr(options, 'tags') and options.tags:
tags = set()
for tag_set in options.tags:
for tag in tag_set.split(u','):
tags.add(tag.strip())
options.tags = list(tags)
# process skip_tags
if hasattr(options, 'skip_tags') and options.skip_tags:
skip_tags = set()
for tag_set in options.skip_tags:
for tag in tag_set.split(u','):
skip_tags.add(tag.strip())
options.skip_tags = list(skip_tags)
# process inventory options except for CLIs that require their own processing
if hasattr(options, 'inventory') and not self.SKIP_INVENTORY_DEFAULTS:
if options.inventory:
# should always be list
if isinstance(options.inventory, string_types):
options.inventory = [options.inventory]
# Ensure full paths when needed
options.inventory = [unfrackpath(opt, follow=False) if ',' not in opt else opt for opt in options.inventory]
else:
options.inventory = C.DEFAULT_HOST_LIST
return options
def parse(self):
"""Parse the command line args
This method parses the command line arguments. It uses the parser
stored in the self.parser attribute and saves the args and options in
context.CLIARGS.
Subclasses need to implement two helper methods, init_parser() and post_process_args() which
are called from this function before and after parsing the arguments.
"""
self.init_parser()
if HAS_ARGCOMPLETE:
argcomplete.autocomplete(self.parser)
try:
options = self.parser.parse_args(self.args[1:])
except SystemExit as e:
if e.code != 0:
self.parser.exit(status=2, message=" \n%s" % self.parser.format_help())
raise
options = self.post_process_args(options)
context._init_global_context(options)
@staticmethod
def version_info(gitinfo=False):
''' return full ansible version info '''
if gitinfo:
# expensive call, use with care
ansible_version_string = opt_help.version()
else:
ansible_version_string = __version__
ansible_version = ansible_version_string.split()[0]
ansible_versions = ansible_version.split('.')
for counter in range(len(ansible_versions)):
if ansible_versions[counter] == "":
ansible_versions[counter] = 0
try:
ansible_versions[counter] = int(ansible_versions[counter])
except Exception:
pass
if len(ansible_versions) < 3:
for counter in range(len(ansible_versions), 3):
ansible_versions.append(0)
return {'string': ansible_version_string.strip(),
'full': ansible_version,
'major': ansible_versions[0],
'minor': ansible_versions[1],
'revision': ansible_versions[2]}
@staticmethod
def pager(text):
''' find reasonable way to display text '''
# this is a much simpler form of what is in pydoc.py
if not sys.stdout.isatty():
display.display(text, screen_only=True)
elif 'PAGER' in os.environ:
if sys.platform == 'win32':
display.display(text, screen_only=True)
else:
CLI.pager_pipe(text, os.environ['PAGER'])
else:
p = subprocess.Popen('less --version', shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p.communicate()
if p.returncode == 0:
CLI.pager_pipe(text, 'less')
else:
display.display(text, screen_only=True)
@staticmethod
def pager_pipe(text, cmd):
''' pipe text through a pager '''
if 'LESS' not in os.environ:
os.environ['LESS'] = CLI.LESS_OPTS
try:
cmd = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE, stdout=sys.stdout)
cmd.communicate(input=to_bytes(text))
except IOError:
pass
except KeyboardInterrupt:
pass
@staticmethod
def _play_prereqs():
options = context.CLIARGS
# all needs loader
loader = DataLoader()
basedir = options.get('basedir', False)
if basedir:
loader.set_basedir(basedir)
add_all_plugin_dirs(basedir)
AnsibleCollectionConfig.playbook_paths = basedir
default_collection = _get_collection_name_from_path(basedir)
if default_collection:
display.warning(u'running with default collection {0}'.format(default_collection))
AnsibleCollectionConfig.default_collection = default_collection
vault_ids = list(options['vault_ids'])
default_vault_ids = C.DEFAULT_VAULT_IDENTITY_LIST
vault_ids = default_vault_ids + vault_ids
vault_secrets = CLI.setup_vault_secrets(loader,
vault_ids=vault_ids,
vault_password_files=list(options['vault_password_files']),
ask_vault_pass=options['ask_vault_pass'],
auto_prompt=False)
loader.set_vault_secrets(vault_secrets)
# create the inventory, and filter it based on the subset specified (if any)
inventory = InventoryManager(loader=loader, sources=options['inventory'])
# create the variable manager, which will be shared throughout
# the code, ensuring a consistent view of global variables
variable_manager = VariableManager(loader=loader, inventory=inventory, version_info=CLI.version_info(gitinfo=False))
return loader, inventory, variable_manager
@staticmethod
def get_host_list(inventory, subset, pattern='all'):
no_hosts = False
if len(inventory.list_hosts()) == 0:
# Empty inventory
if C.LOCALHOST_WARNING and pattern not in C.LOCALHOST:
display.warning("provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'")
no_hosts = True
inventory.subset(subset)
hosts = inventory.list_hosts(pattern)
if not hosts and no_hosts is False:
raise AnsibleError("Specified hosts and/or --limit does not match any hosts")
return hosts
@staticmethod
def get_password_from_file(pwd_file):
b_pwd_file = to_bytes(pwd_file)
secret = None
if b_pwd_file == b'-':
# ensure its read as bytes
secret = sys.stdin.buffer.read()
elif not os.path.exists(b_pwd_file):
raise AnsibleError("The password file %s was not found" % pwd_file)
elif is_executable(b_pwd_file):  # is_executable comes from ansible.module_utils.common.file
display.vvvv(u'The password file %s is a script.' % to_text(pwd_file))
cmd = [b_pwd_file]
try:
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
except OSError as e:
raise AnsibleError("Problem occured when trying to run the password script %s (%s)."
" If this is not a script, remove the executable bit from the file." % (pwd_file, e))
stdout, stderr = p.communicate()
if p.returncode != 0:
raise AnsibleError("The password script %s returned an error (rc=%s): %s" % (pwd_file, p.returncode, stderr))
secret = stdout
else:
try:
f = open(b_pwd_file, "rb")
secret = f.read().strip()
f.close()
except (OSError, IOError) as e:
raise AnsibleError("Could not read password file %s: %s" % (pwd_file, e))
secret = secret.strip(b'\r\n')
if not secret:
raise AnsibleError('Empty password was provided from file (%s)' % pwd_file)
return to_unsafe_text(secret)
@classmethod
def cli_executor(cls, args=None):
if args is None:
args = sys.argv
try:
display.debug("starting run")
ansible_dir = Path("~/.ansible").expanduser()
try:
ansible_dir.mkdir(mode=0o700)
except OSError as exc:
if exc.errno != errno.EEXIST:
display.warning(
"Failed to create the directory '%s': %s" % (ansible_dir, to_text(exc, errors='surrogate_or_replace'))
)
else:
display.debug("Created the '%s' directory" % ansible_dir)
try:
args = [to_text(a, errors='surrogate_or_strict') for a in args]
except UnicodeError:
display.error('Command line args are not in utf-8, unable to continue. Ansible currently only understands utf-8')
display.display(u"The full traceback was:\n\n%s" % to_text(traceback.format_exc()))
exit_code = 6
else:
cli = cls(args)
exit_code = cli.run()
except AnsibleOptionsError as e:
cli.parser.print_help()
display.error(to_text(e), wrap_text=False)
exit_code = 5
except AnsibleParserError as e:
display.error(to_text(e), wrap_text=False)
exit_code = 4
# TQM takes care of these, but leaving comment to reserve the exit codes
# except AnsibleHostUnreachable as e:
# display.error(str(e))
# exit_code = 3
# except AnsibleHostFailed as e:
# display.error(str(e))
# exit_code = 2
except AnsibleError as e:
display.error(to_text(e), wrap_text=False)
exit_code = 1
except KeyboardInterrupt:
display.error("User interrupted execution")
exit_code = 99
except Exception as e:
if C.DEFAULT_DEBUG:
# Show raw stacktraces in debug mode. It also allows pdb to
# enter post-mortem mode.
raise
have_cli_options = bool(context.CLIARGS)
display.error("Unexpected Exception, this is probably a bug: %s" % to_text(e), wrap_text=False)
if not have_cli_options or have_cli_options and context.CLIARGS['verbosity'] > 2:
log_only = False
if hasattr(e, 'orig_exc'):
display.vvv('\nexception type: %s' % to_text(type(e.orig_exc)))
why = to_text(e.orig_exc)
if to_text(e) != why:
display.vvv('\noriginal msg: %s' % why)
else:
display.display("to see the full traceback, use -vvv")
log_only = True
display.display(u"the full traceback was:\n\n%s" % to_text(traceback.format_exc()), log_only=log_only)
exit_code = 250
sys.exit(exit_code)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,508 |
Docs survey feedback: add tips on when to turn a playbook into a role
|
### Summary
When is it a good idea to turn a playbook into a role? When should it remain a playbook?
Add some rules of thumb to the role documentation.
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/user_guide/playbooks_roles.rst
### Ansible Version
```console
$ ansible --version
N/A
```
### Configuration
```console
$ ansible-config dump --only-changed
N/A
```
### OS / Environment
N/A
### Additional Information
None needed.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75508
|
https://github.com/ansible/ansible/pull/76560
|
4c738b08b4688892acb23d58ac782a068de95f79
|
67e5649a264db2696b46d9b9c80724c03c9f7527
| 2021-08-16T19:38:36Z |
python
| 2021-12-21T16:15:48Z |
docs/docsite/rst/user_guide/playbooks_reuse.rst
|
.. _playbooks_reuse:
**************************
Re-using Ansible artifacts
**************************
You can write a simple playbook in one very large file, and most users learn the one-file approach first. However, breaking tasks up into different files is an excellent way to organize complex sets of tasks and reuse them. Smaller, more distributed artifacts let you re-use the same variables, tasks, and plays in multiple playbooks to address different use cases. You can use distributed artifacts across multiple parent playbooks or even multiple times within one playbook. For example, you might want to update your customer database as part of several different playbooks. If you put all the tasks related to updating your database in a tasks file, you can re-use them in many playbooks while only maintaining them in one place.
.. contents::
   :local:
Creating re-usable files and roles
==================================
Ansible offers four distributed, re-usable artifacts: variables files, task files, playbooks, and roles.
- A variables file contains only variables.
- A task file contains only tasks.
- A playbook contains at least one play, and may contain variables, tasks, and other content. You can re-use tightly focused playbooks, but you can only re-use them statically, not dynamically.
- A role contains a set of related tasks, variables, defaults, handlers, and even modules or other plugins in a defined file-tree. Unlike variables files, task files, or playbooks, roles can be easily uploaded and shared via Ansible Galaxy. See :ref:`playbooks_reuse_roles` for details about creating and using roles.
.. versionadded:: 2.4
Re-using playbooks
==================
You can incorporate multiple playbooks into a main playbook. However, you can only use imports to re-use playbooks. For example:
.. code-block:: yaml
- import_playbook: webservers.yml
- import_playbook: databases.yml
Importing incorporates playbooks in other playbooks statically. Ansible runs the plays and tasks in each imported playbook in the order they are listed, just as if they had been defined directly in the main playbook.
You can select which playbook you want to import at runtime by defining your imported playbook filename with a variable, then passing the variable with either ``--extra-vars`` or the ``vars`` keyword. For example:
.. code-block:: yaml
- import_playbook: "/path/to/{{ import_from_extra_var }}"
- import_playbook: "{{ import_from_vars }}"
vars:
import_from_vars: /path/to/one_playbook.yml
If you run this playbook with ``ansible-playbook my_playbook -e import_from_extra_var=other_playbook.yml``, Ansible imports both one_playbook.yml and other_playbook.yml.
Re-using files and roles
========================
Ansible offers two ways to re-use files and roles in a playbook: dynamic and static.
- For dynamic re-use, add an ``include_*`` task in the tasks section of a play:
- :ref:`include_role <include_role_module>`
- :ref:`include_tasks <include_tasks_module>`
- :ref:`include_vars <include_vars_module>`
- For static re-use, add an ``import_*`` task in the tasks section of a play:
- :ref:`import_role <import_role_module>`
- :ref:`import_tasks <import_tasks_module>`
Task include and import statements can be used at arbitrary depth.
You can still use the bare :ref:`roles <roles_keyword>` keyword at the play level to incorporate a role in a playbook statically. However, the bare :ref:`include <include_module>` keyword, once used for both task files and playbook-level includes, is now deprecated.
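For example, a minimal sketch of the ``roles`` keyword (the role names ``common`` and ``webservers`` are illustrative):
.. code-block:: yaml
- hosts: webservers
  roles:
    - common
    - webservers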
Includes: dynamic re-use
------------------------
Including roles, tasks, or variables adds them to a playbook dynamically. Ansible processes included files and roles as they come up in a playbook, so included tasks can be affected by the results of earlier tasks within the top-level playbook. Included roles and tasks are similar to handlers - they may or may not run, depending on the results of other tasks in the top-level playbook.
The primary advantage of using ``include_*`` statements is looping. When a loop is used with an include, the included tasks or role will be executed once for each item in the loop.
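For example, a minimal sketch that runs an included task file once per item (the ``user_tasks.yml`` file name is illustrative; inside it, ``item`` holds the current user):
.. code-block:: yaml
tasks:
- name: Set up each user
  include_tasks: user_tasks.yml
  loop:
    - alice
    - bob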
The filenames for included roles, tasks, and vars are templated before inclusion.
You can pass variables into includes. See :ref:`ansible_variable_precedence` for more details on variable inheritance and precedence.
Imports: static re-use
----------------------
Importing roles, tasks, or playbooks adds them to a playbook statically. Ansible pre-processes imported files and roles before it runs any tasks in a playbook, so imported content is never affected by other tasks within the top-level playbook.
The filenames for imported roles and tasks support templating, but the variables must be available when Ansible is pre-processing the imports. This can be done with the ``vars`` keyword or by using ``--extra-vars``.
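For example, a minimal sketch that templates an imported filename with the ``vars`` keyword (the ``setup_tasks.yml`` file name is illustrative):
.. code-block:: yaml
tasks:
- import_tasks: "{{ tasks_file }}"
  vars:
    tasks_file: setup_tasks.yml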
You can pass variables to imports. You must pass variables if you want to run an imported file more than once in a playbook. For example:
.. code-block:: yaml
tasks:
- import_tasks: wordpress.yml
  vars:
    wp_user: timmy
- import_tasks: wordpress.yml
  vars:
    wp_user: alice
- import_tasks: wordpress.yml
  vars:
    wp_user: bob
See :ref:`ansible_variable_precedence` for more details on variable inheritance and precedence.
.. _dynamic_vs_static:
Comparing includes and imports: dynamic and static re-use
------------------------------------------------------------
Each approach to re-using distributed Ansible artifacts has advantages and limitations. You may choose dynamic re-use for some playbooks and static re-use for others. Although you can use both dynamic and static re-use in a single playbook, it is best to select one approach per playbook. Mixing static and dynamic re-use can introduce difficult-to-diagnose bugs into your playbooks. This table summarizes the main differences so you can choose the best approach for each playbook you create.
.. table::
:class: documentation-table
========================= ======================================== ========================================
.. Include_* Import_*
========================= ======================================== ========================================
Type of re-use Dynamic Static
When processed At runtime, when encountered Pre-processed during playbook parsing
Task or play All includes are tasks ``import_playbook`` cannot be a task
Task options Apply only to include task itself Apply to all child tasks in import
Calling from loops Executed once for each loop item Cannot be used in a loop
Using ``--list-tags`` Tags within includes not listed All tags appear with ``--list-tags``
Using ``--list-tasks`` Tasks within includes not listed All tasks appear with ``--list-tasks``
Notifying handlers Cannot trigger handlers within includes Can trigger individual imported handlers
Using ``--start-at-task`` Cannot start at tasks within includes Can start at imported tasks
Using inventory variables Can ``include_*: {{ inventory_var }}`` Cannot ``import_*: {{ inventory_var }}``
With playbooks No ``include_playbook`` Can import full playbooks
With variables files Can include variables files Use ``vars_files:`` to import variables
========================= ======================================== ========================================
.. note::
* There are also big differences in resource consumption and performance: imports are quite lean and fast, while includes require a lot of management and accounting.
Re-using tasks as handlers
==========================
You can also use includes and imports in the :ref:`handlers` section of a playbook. For instance, if you want to define how to restart Apache, you only have to do that once for all of your playbooks. You might make a ``restarts.yml`` file that looks like:
.. code-block:: yaml
# restarts.yml
- name: Restart apache
  ansible.builtin.service:
    name: apache
    state: restarted
- name: Restart mysql
  ansible.builtin.service:
    name: mysql
    state: restarted
You can trigger handlers from either an import or an include, but the procedure is different for each method of re-use. If you include the file, you must notify the include itself, which triggers all the tasks in ``restarts.yml``. If you import the file, you must notify the individual task(s) within ``restarts.yml``. You can mix direct tasks and handlers with included or imported tasks and handlers.
Triggering included (dynamic) handlers
--------------------------------------
Includes are executed at run-time, so the name of the include exists during play execution, but the included tasks do not exist until the include itself is triggered. To use the ``Restart apache`` task with dynamic re-use, refer to the name of the include itself. This approach triggers all tasks in the included file as handlers. For example, with the task file shown above:
.. code-block:: yaml
- name: Trigger an included (dynamic) handler
  hosts: localhost
  handlers:
    - name: Restart services
      include_tasks: restarts.yml
  tasks:
    - command: "true"
      notify: Restart services
Triggering imported (static) handlers
-------------------------------------
Imports are processed before the play begins, so the name of the import no longer exists during play execution, but the names of the individual imported tasks do exist. To use the ``Restart apache`` task with static re-use, refer to the name of each task or tasks within the imported file. For example, with the task file shown above:
.. code-block:: yaml
- name: Trigger an imported (static) handler
  hosts: localhost
  handlers:
    - name: Restart services
      import_tasks: restarts.yml
  tasks:
    - command: "true"
      notify: Restart apache
    - command: "true"
      notify: Restart mysql
.. seealso::
   :ref:`utilities_modules`
       Documentation of the ``include*`` and ``import*`` modules discussed here.
   :ref:`working_with_playbooks`
       Review the basic Playbook language features
   :ref:`playbooks_variables`
       All about variables in playbooks
   :ref:`playbooks_conditionals`
       Conditionals in playbooks
   :ref:`playbooks_loops`
       Loops in playbooks
   :ref:`playbooks_best_practices`
       Tips and tricks for playbooks
   :ref:`ansible_galaxy`
       How to share roles on galaxy, role management
   `GitHub Ansible examples <https://github.com/ansible/ansible-examples>`_
       Complete playbook files from the GitHub project source
   `Mailing List <https://groups.google.com/group/ansible-project>`_
       Questions? Help? Ideas? Stop by the list on Google Groups
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,425 |
AttributeError installing ansible-galaxy collection from git
|
### Summary
Ansible throws an error just after processing the install dependency map.
```
Process install dependency map
ERROR! Unexpected Exception, this is probably a bug: 'float' object has no attribute 'startswith'
```
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
[cloud-user@test-centos ~]$ ansible --version
ansible [core 2.12.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/cloud-user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.9/site-packages/ansible
ansible collection location = /home/cloud-user/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.9.8 (main, Nov 8 2021, 00:00:00) [GCC 11.2.1 20211019 (Red Hat 11.2.1-6)]
jinja version = 2.11.3
libyaml = True
```
### Configuration
```console
[cloud-user@test-centos ~]$ ansible-config dump --only-changed
[cloud-user@test-centos ~]$
```
### OS / Environment
CentOS Stream 9
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
galaxy.yml:
namespace: my
name: cool-collection
version: 1.0
authors: David *****
readme: README.md
```
Install a collection from a git repo with this galaxy.yml:
`ansible-galaxy collection install git+https://.../my_project.git,ISSUE-1`
### Expected Results
The collection is installed
### Actual Results
```console
[cloud-user@test-centos ~]$ ansible-galaxy -vvvv collection install git+https://....git,ISSUE-1
ansible-galaxy [core 2.12.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/cloud-user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.9/site-packages/ansible
ansible collection location = /home/cloud-user/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible-galaxy
python version = 3.9.8 (main, Nov 8 2021, 00:00:00) [GCC 11.2.1 20211019 (Red Hat 11.2.1-6)]
jinja version = 2.11.3
libyaml = True
Using /etc/ansible/ansible.cfg as config file
Cloning into '/home/cloud-user/.ansible/tmp/ansible-local-12630phroh0cw/tmpelcczl4o/netbox-addp5e_slda'...
Username for 'https://...': my_user
Password for 'https://my_user@my_gitsrv':
remote: Enumerating objects: 425, done.
remote: Counting objects: 100% (61/61), done.
remote: Compressing objects: 100% (29/29), done.
remote: Total 425 (delta 18), reused 52 (delta 14), pack-reused 364
Receiving objects: 100% (425/425), 47.25 KiB | 11.81 MiB/s, done.
Resolving deltas: 100% (178/178), done.
Branch 'ISSUE-1' set up to track remote branch 'ISSUE-1' from 'origin'.
Switched to a new branch 'ISSUE-1'
Starting galaxy collection install process
Process install dependency map
ERROR! Unexpected Exception, this is probably a bug: 'float' object has no attribute 'startswith'
the full traceback was:
Traceback (most recent call last):
File "/usr/bin/ansible-galaxy", line 128, in <module>
exit_code = cli.run()
File "/usr/lib/python3.9/site-packages/ansible/cli/galaxy.py", line 567, in run
return context.CLIARGS['func']()
File "/usr/lib/python3.9/site-packages/ansible/cli/galaxy.py", line 86, in method_wrapper
return wrapped_method(*args, **kwargs)
File "/usr/lib/python3.9/site-packages/ansible/cli/galaxy.py", line 1201, in execute_install
self._execute_install_collection(
File "/usr/lib/python3.9/site-packages/ansible/cli/galaxy.py", line 1228, in _execute_install_collection
install_collections(
File "/usr/lib/python3.9/site-packages/ansible/galaxy/collection/__init__.py", line 513, in install_collections
dependency_map = _resolve_depenency_map(
File "/usr/lib/python3.9/site-packages/ansible/galaxy/collection/__init__.py", line 1327, in _resolve_depenency_map
return collection_dep_resolver.resolve(
File "/usr/lib/python3.9/site-packages/resolvelib/resolvers.py", line 453, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
File "/usr/lib/python3.9/site-packages/resolvelib/resolvers.py", line 347, in resolve
failure_causes = self._attempt_to_pin_criterion(name, criterion)
File "/usr/lib/python3.9/site-packages/resolvelib/resolvers.py", line 216, in _attempt_to_pin_criterion
satisfied = all(
File "/usr/lib/python3.9/site-packages/resolvelib/resolvers.py", line 217, in <genexpr>
self._p.is_satisfied_by(r, candidate)
File "/usr/lib/python3.9/site-packages/ansible/galaxy/dependency_resolution/providers.py", line 280, in is_satisfied_by
requirement.ver.startswith('<') or
AttributeError: 'float' object has no attribute 'startswith'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76425
|
https://github.com/ansible/ansible/pull/76579
|
6e57c8c0844c44a102dc081b93d5299cae9619e2
|
15ace5a854886cd75894e7214209e4b027034bdb
| 2021-12-01T22:41:01Z |
python
| 2022-01-06T19:05:20Z |
changelogs/fragments/76579-fix-ansible-galaxy-collection-version-error.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,425 |
AttributeError installing ansible-galaxy collection from git
|
### Summary
Ansible throws an error just after processing the install dependency map.
```
Process install dependency map
ERROR! Unexpected Exception, this is probably a bug: 'float' object has no attribute 'startswith'
```
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
[cloud-user@test-centos ~]$ ansible --version
ansible [core 2.12.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/cloud-user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.9/site-packages/ansible
ansible collection location = /home/cloud-user/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.9.8 (main, Nov 8 2021, 00:00:00) [GCC 11.2.1 20211019 (Red Hat 11.2.1-6)]
jinja version = 2.11.3
libyaml = True
```
### Configuration
```console
[cloud-user@test-centos ~]$ ansible-config dump --only-changed
[cloud-user@test-centos ~]$
```
### OS / Environment
CentOS Stream 9
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
galaxy.yml:
namespace: my
name: cool-collection
version: 1.0
authors: David *****
readme: README.md
```
Install a collection from a git repo with this galaxy.yml:
`ansible-galaxy collection install git+https://.../my_project.git,ISSUE-1`
### Expected Results
The collection is installed
### Actual Results
```console
[cloud-user@test-centos ~]$ ansible-galaxy -vvvv collection install git+https://....git,ISSUE-1
ansible-galaxy [core 2.12.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/cloud-user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.9/site-packages/ansible
ansible collection location = /home/cloud-user/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible-galaxy
python version = 3.9.8 (main, Nov 8 2021, 00:00:00) [GCC 11.2.1 20211019 (Red Hat 11.2.1-6)]
jinja version = 2.11.3
libyaml = True
Using /etc/ansible/ansible.cfg as config file
Cloning into '/home/cloud-user/.ansible/tmp/ansible-local-12630phroh0cw/tmpelcczl4o/netbox-addp5e_slda'...
Username for 'https://...': my_user
Password for 'https://my_user@my_gitsrv':
remote: Enumerating objects: 425, done.
remote: Counting objects: 100% (61/61), done.
remote: Compressing objects: 100% (29/29), done.
remote: Total 425 (delta 18), reused 52 (delta 14), pack-reused 364
Receiving objects: 100% (425/425), 47.25 KiB | 11.81 MiB/s, done.
Resolving deltas: 100% (178/178), done.
Branch 'ISSUE-1' set up to track remote branch 'ISSUE-1' from 'origin'.
Switched to a new branch 'ISSUE-1'
Starting galaxy collection install process
Process install dependency map
ERROR! Unexpected Exception, this is probably a bug: 'float' object has no attribute 'startswith'
the full traceback was:
Traceback (most recent call last):
File "/usr/bin/ansible-galaxy", line 128, in <module>
exit_code = cli.run()
File "/usr/lib/python3.9/site-packages/ansible/cli/galaxy.py", line 567, in run
return context.CLIARGS['func']()
File "/usr/lib/python3.9/site-packages/ansible/cli/galaxy.py", line 86, in method_wrapper
return wrapped_method(*args, **kwargs)
File "/usr/lib/python3.9/site-packages/ansible/cli/galaxy.py", line 1201, in execute_install
self._execute_install_collection(
File "/usr/lib/python3.9/site-packages/ansible/cli/galaxy.py", line 1228, in _execute_install_collection
install_collections(
File "/usr/lib/python3.9/site-packages/ansible/galaxy/collection/__init__.py", line 513, in install_collections
dependency_map = _resolve_depenency_map(
File "/usr/lib/python3.9/site-packages/ansible/galaxy/collection/__init__.py", line 1327, in _resolve_depenency_map
return collection_dep_resolver.resolve(
File "/usr/lib/python3.9/site-packages/resolvelib/resolvers.py", line 453, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
File "/usr/lib/python3.9/site-packages/resolvelib/resolvers.py", line 347, in resolve
failure_causes = self._attempt_to_pin_criterion(name, criterion)
File "/usr/lib/python3.9/site-packages/resolvelib/resolvers.py", line 216, in _attempt_to_pin_criterion
satisfied = all(
File "/usr/lib/python3.9/site-packages/resolvelib/resolvers.py", line 217, in <genexpr>
self._p.is_satisfied_by(r, candidate)
File "/usr/lib/python3.9/site-packages/ansible/galaxy/dependency_resolution/providers.py", line 280, in is_satisfied_by
requirement.ver.startswith('<') or
AttributeError: 'float' object has no attribute 'startswith'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76425
|
https://github.com/ansible/ansible/pull/76579
|
6e57c8c0844c44a102dc081b93d5299cae9619e2
|
15ace5a854886cd75894e7214209e4b027034bdb
| 2021-12-01T22:41:01Z |
python
| 2022-01-06T19:05:20Z |
lib/ansible/galaxy/collection/__init__.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2019-2021, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""Installed collections management package."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import errno
import fnmatch
import functools
import json
import os
import shutil
import stat
import sys
import tarfile
import tempfile
import threading
import time
import yaml
from collections import namedtuple
from contextlib import contextmanager
from ansible.module_utils.compat.version import LooseVersion
from hashlib import sha256
from io import BytesIO
from itertools import chain
from yaml.error import YAMLError
# NOTE: Adding type ignores is a hack for mypy to shut up wrt bug #1153
try:
import queue # type: ignore[import]
except ImportError: # Python 2
import Queue as queue # type: ignore[import,no-redef]
try:
# NOTE: It's in Python 3 stdlib and can be installed on Python 2
# NOTE: via `pip install typing`. Unnecessary in runtime.
# NOTE: `TYPE_CHECKING` is True during mypy-typecheck-time.
from typing import TYPE_CHECKING
except ImportError:
TYPE_CHECKING = False
if TYPE_CHECKING:
from typing import Dict, Iterable, List, Optional, Text, Union
if sys.version_info[:2] >= (3, 8):
from typing import Literal
else: # Python 2 + Python 3.4-3.7
from typing_extensions import Literal
from ansible.galaxy.api import GalaxyAPI
from ansible.galaxy.collection.concrete_artifact_manager import (
ConcreteArtifactsManager,
)
ManifestKeysType = Literal[
'collection_info', 'file_manifest_file', 'format',
]
FileMetaKeysType = Literal[
'name',
'ftype',
'chksum_type',
'chksum_sha256',
'format',
]
CollectionInfoKeysType = Literal[
# collection meta:
'namespace', 'name', 'version',
'authors', 'readme',
'tags', 'description',
'license', 'license_file',
'dependencies',
'repository', 'documentation',
'homepage', 'issues',
# files meta:
FileMetaKeysType,
]
ManifestValueType = Dict[
CollectionInfoKeysType,
Optional[
Union[
int, str, # scalars, like name/ns, schema version
List[str], # lists of scalars, like tags
Dict[str, str], # deps map
],
],
]
CollectionManifestType = Dict[ManifestKeysType, ManifestValueType]
FileManifestEntryType = Dict[FileMetaKeysType, Optional[Union[str, int]]]
FilesManifestType = Dict[
Literal['files', 'format'],
Union[List[FileManifestEntryType], int],
]
import ansible.constants as C
from ansible.errors import AnsibleError
from ansible.galaxy import get_collections_galaxy_meta_info
from ansible.galaxy.collection.concrete_artifact_manager import (
_consume_file,
_download_file,
_get_json_from_installed_dir,
_get_meta_from_src_dir,
_tarfile_extract,
)
from ansible.galaxy.collection.galaxy_api_proxy import MultiGalaxyAPIProxy
from ansible.galaxy.dependency_resolution import (
build_collection_dependency_resolver,
)
from ansible.galaxy.dependency_resolution.dataclasses import (
Candidate, Requirement, _is_installed_collection_dir,
)
from ansible.galaxy.dependency_resolution.errors import (
CollectionDependencyResolutionImpossible,
CollectionDependencyInconsistentCandidate,
)
from ansible.galaxy.dependency_resolution.versioning import meets_requirements
from ansible.module_utils.six import raise_from
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.common.yaml import yaml_dump
from ansible.utils.collection_loader import AnsibleCollectionRef
from ansible.utils.display import Display
from ansible.utils.hashing import secure_hash, secure_hash_s
from ansible.utils.version import SemanticVersion
display = Display()
MANIFEST_FORMAT = 1
MANIFEST_FILENAME = 'MANIFEST.json'
ModifiedContent = namedtuple('ModifiedContent', ['filename', 'expected', 'installed'])
# FUTURE: expose actual verify result details for a collection on this object, maybe reimplement as dataclass on py3.8+
class CollectionVerifyResult:
def __init__(self, collection_name): # type: (str) -> None
self.collection_name = collection_name # type: str
self.success = True # type: bool
def verify_local_collection(
local_collection, remote_collection,
artifacts_manager,
): # type: (Candidate, Optional[Candidate], ConcreteArtifactsManager) -> CollectionVerifyResult
"""Verify integrity of the locally installed collection.
:param local_collection: Collection being checked.
:param remote_collection: Upstream collection (optional, if None, only verify local artifact)
:param artifacts_manager: Artifacts manager.
:return: a collection verify result object.
"""
result = CollectionVerifyResult(local_collection.fqcn)
b_collection_path = to_bytes(
local_collection.src, errors='surrogate_or_strict',
)
display.display("Verifying '{coll!s}'.".format(coll=local_collection))
display.display(
u"Installed collection found at '{path!s}'".
format(path=to_text(local_collection.src)),
)
modified_content = [] # type: List[ModifiedContent]
verify_local_only = remote_collection is None
if verify_local_only:
# partial away the local FS detail so we can just ask generically during validation
get_json_from_validation_source = functools.partial(_get_json_from_installed_dir, b_collection_path)
get_hash_from_validation_source = functools.partial(_get_file_hash, b_collection_path)
# since we're not downloading this, just seed it with the value from disk
manifest_hash = get_hash_from_validation_source(MANIFEST_FILENAME)
else:
# fetch remote
b_temp_tar_path = ( # NOTE: AnsibleError is raised on URLError
artifacts_manager.get_artifact_path
if remote_collection.is_concrete_artifact
else artifacts_manager.get_galaxy_artifact_path
)(remote_collection)
display.vvv(
u"Remote collection cached as '{path!s}'".format(path=to_text(b_temp_tar_path))
)
# partial away the tarball details so we can just ask generically during validation
get_json_from_validation_source = functools.partial(_get_json_from_tar_file, b_temp_tar_path)
get_hash_from_validation_source = functools.partial(_get_tar_file_hash, b_temp_tar_path)
# Compare installed version versus requirement version
if local_collection.ver != remote_collection.ver:
err = (
"{local_fqcn!s} has the version '{local_ver!s}' but "
"is being compared to '{remote_ver!s}'".format(
local_fqcn=local_collection.fqcn,
local_ver=local_collection.ver,
remote_ver=remote_collection.ver,
)
)
display.display(err)
result.success = False
return result
# Verify the downloaded manifest hash matches the installed copy before verifying the file manifest
manifest_hash = get_hash_from_validation_source(MANIFEST_FILENAME)
_verify_file_hash(b_collection_path, MANIFEST_FILENAME, manifest_hash, modified_content)
display.display('MANIFEST.json hash: {manifest_hash}'.format(manifest_hash=manifest_hash))
manifest = get_json_from_validation_source(MANIFEST_FILENAME)
# Use the manifest to verify the file manifest checksum
file_manifest_data = manifest['file_manifest_file']
file_manifest_filename = file_manifest_data['name']
expected_hash = file_manifest_data['chksum_%s' % file_manifest_data['chksum_type']]
# Verify the file manifest before using it to verify individual files
_verify_file_hash(b_collection_path, file_manifest_filename, expected_hash, modified_content)
file_manifest = get_json_from_validation_source(file_manifest_filename)
# Use the file manifest to verify individual file checksums
for manifest_data in file_manifest['files']:
if manifest_data['ftype'] == 'file':
expected_hash = manifest_data['chksum_%s' % manifest_data['chksum_type']]
_verify_file_hash(b_collection_path, manifest_data['name'], expected_hash, modified_content)
if modified_content:
result.success = False
display.display(
'Collection {fqcn!s} contains modified content '
'in the following files:'.
format(fqcn=to_text(local_collection.fqcn)),
)
for content_change in modified_content:
display.display(' %s' % content_change.filename)
display.v(" Expected: %s\n Found: %s" % (content_change.expected, content_change.installed))
else:
what = "are internally consistent with its manifest" if verify_local_only else "match the remote collection"
display.display(
"Successfully verified that checksums for '{coll!s}' {what!s}.".
format(coll=local_collection, what=what),
)
return result
def build_collection(u_collection_path, u_output_path, force):
# type: (Text, Text, bool) -> Text
"""Creates the Ansible collection artifact in a .tar.gz file.
:param u_collection_path: The path to the collection to build. This should be the directory that contains the
galaxy.yml file.
:param u_output_path: The path to create the collection build artifact. This should be a directory.
:param force: Whether to overwrite an existing collection build artifact or fail.
:return: The path to the collection build artifact.
"""
b_collection_path = to_bytes(u_collection_path, errors='surrogate_or_strict')
try:
collection_meta = _get_meta_from_src_dir(b_collection_path)
except LookupError as lookup_err:
raise_from(AnsibleError(to_native(lookup_err)), lookup_err)
collection_manifest = _build_manifest(**collection_meta)
file_manifest = _build_files_manifest(
b_collection_path,
collection_meta['namespace'], # type: ignore[arg-type]
collection_meta['name'], # type: ignore[arg-type]
collection_meta['build_ignore'], # type: ignore[arg-type]
)
artifact_tarball_file_name = '{ns!s}-{name!s}-{ver!s}.tar.gz'.format(
name=collection_meta['name'],
ns=collection_meta['namespace'],
ver=collection_meta['version'],
)
b_collection_output = os.path.join(
to_bytes(u_output_path),
to_bytes(artifact_tarball_file_name, errors='surrogate_or_strict'),
)
if os.path.exists(b_collection_output):
if os.path.isdir(b_collection_output):
raise AnsibleError("The output collection artifact '%s' already exists, "
"but is a directory - aborting" % to_native(b_collection_output))
elif not force:
raise AnsibleError("The file '%s' already exists. You can use --force to re-create "
"the collection artifact." % to_native(b_collection_output))
collection_output = _build_collection_tar(b_collection_path, b_collection_output, collection_manifest, file_manifest)
return collection_output
def download_collections(
collections, # type: Iterable[Requirement]
output_path, # type: str
apis, # type: Iterable[GalaxyAPI]
no_deps, # type: bool
allow_pre_release, # type: bool
artifacts_manager, # type: ConcreteArtifactsManager
): # type: (...) -> None
"""Download Ansible collections as their tarball from a Galaxy server to the path specified and creates a requirements
file of the downloaded requirements to be used for an install.
:param collections: The collections to download, should be a list of tuples with (name, requirement, Galaxy Server).
:param output_path: The path to download the collections to.
:param apis: A list of GalaxyAPIs to query when search for a collection.
:param no_deps: Ignore any collection dependencies and only download the base requirements.
:param allow_pre_release: Do not ignore pre-release versions when selecting the latest.
"""
with _display_progress("Process download dependency map"):
dep_map = _resolve_depenency_map(
set(collections),
galaxy_apis=apis,
preferred_candidates=None,
concrete_artifacts_manager=artifacts_manager,
no_deps=no_deps,
allow_pre_release=allow_pre_release,
upgrade=False,
)
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
requirements = []
with _display_progress(
"Starting collection download process to '{path!s}'".
format(path=output_path),
):
for fqcn, concrete_coll_pin in dep_map.copy().items(): # FIXME: move into the provider
if concrete_coll_pin.is_virtual:
display.display(
'Virtual collection {coll!s} is not downloadable'.
format(coll=to_text(concrete_coll_pin)),
)
continue
display.display(
u"Downloading collection '{coll!s}' to '{path!s}'".
format(coll=to_text(concrete_coll_pin), path=to_text(b_output_path)),
)
b_src_path = (
artifacts_manager.get_artifact_path
if concrete_coll_pin.is_concrete_artifact
else artifacts_manager.get_galaxy_artifact_path
)(concrete_coll_pin)
b_dest_path = os.path.join(
b_output_path,
os.path.basename(b_src_path),
)
if concrete_coll_pin.is_dir:
b_dest_path = to_bytes(
build_collection(
to_text(b_src_path, errors='surrogate_or_strict'),
to_text(output_path, errors='surrogate_or_strict'),
force=True,
),
errors='surrogate_or_strict',
)
else:
shutil.copy(to_native(b_src_path), to_native(b_dest_path))
display.display(
"Collection '{coll!s}' was downloaded successfully".
format(coll=concrete_coll_pin),
)
requirements.append({
# FIXME: Consider using a more specific upgraded format
# FIXME: having FQCN in the name field, with src field
# FIXME: pointing to the file path, and explicitly set
# FIXME: type. If version and name are set, it'd
# FIXME: perform validation against the actual metadata
# FIXME: in the artifact src points at.
'name': to_native(os.path.basename(b_dest_path)),
'version': concrete_coll_pin.ver,
})
requirements_path = os.path.join(output_path, 'requirements.yml')
b_requirements_path = to_bytes(
requirements_path, errors='surrogate_or_strict',
)
display.display(
u'Writing requirements.yml file of downloaded collections '
"to '{path!s}'".format(path=to_text(requirements_path)),
)
yaml_bytes = to_bytes(
yaml_dump({'collections': requirements}),
errors='surrogate_or_strict',
)
with open(b_requirements_path, mode='wb') as req_fd:
req_fd.write(yaml_bytes)
def publish_collection(collection_path, api, wait, timeout):
"""Publish an Ansible collection tarball into an Ansible Galaxy server.
:param collection_path: The path to the collection tarball to publish.
:param api: A GalaxyAPI to publish the collection to.
:param wait: Whether to wait until the import process is complete.
:param timeout: The time in seconds to wait for the import process to finish, 0 is indefinite.
"""
import_uri = api.publish_collection(collection_path)
if wait:
# Galaxy returns a url fragment which differs between v2 and v3. The second to last entry is
# always the task_id, though.
# v2: {"task": "https://galaxy-dev.ansible.com/api/v2/collection-imports/35573/"}
# v3: {"task": "/api/automation-hub/v3/imports/collections/838d1308-a8f4-402c-95cb-7823f3806cd8/"}
task_id = None
for path_segment in reversed(import_uri.split('/')):
if path_segment:
task_id = path_segment
break
if not task_id:
raise AnsibleError("Publishing the collection did not return valid task info. Cannot wait for task status. Returned task info: '%s'" % import_uri)
with _display_progress(
"Collection has been published to the Galaxy server "
"{api.name!s} {api.api_server!s}".format(api=api),
):
api.wait_import_task(task_id, timeout)
display.display("Collection has been successfully published and imported to the Galaxy server %s %s"
% (api.name, api.api_server))
else:
display.display("Collection has been pushed to the Galaxy server %s %s, not waiting until import has "
"completed due to --no-wait being set. Import task results can be found at %s"
% (api.name, api.api_server, import_uri))
def install_collections(
collections, # type: Iterable[Requirement]
output_path, # type: str
apis, # type: Iterable[GalaxyAPI]
ignore_errors, # type: bool
no_deps, # type: bool
force, # type: bool
force_deps, # type: bool
upgrade, # type: bool
allow_pre_release, # type: bool
artifacts_manager, # type: ConcreteArtifactsManager
): # type: (...) -> None
"""Install Ansible collections to the path specified.
:param collections: The collections to install.
:param output_path: The path to install the collections to.
:param apis: A list of GalaxyAPIs to query when searching for a collection.
:param ignore_errors: Whether to ignore any errors when installing the collection.
:param no_deps: Ignore any collection dependencies and only install the base requirements.
:param force: Re-install a collection if it has already been installed.
:param force_deps: Re-install a collection as well as its dependencies if they have already been installed.
"""
existing_collections = {
Requirement(coll.fqcn, coll.ver, coll.src, coll.type)
for coll in find_existing_collections(output_path, artifacts_manager)
}
unsatisfied_requirements = set(
chain.from_iterable(
(
Requirement.from_dir_path(sub_coll, artifacts_manager)
for sub_coll in (
artifacts_manager.
get_direct_collection_dependencies(install_req).
keys()
)
)
if install_req.is_subdirs else (install_req, )
for install_req in collections
),
)
requested_requirements_names = {req.fqcn for req in unsatisfied_requirements}
# NOTE: Don't attempt to reevaluate already installed deps
# NOTE: unless `--force` or `--force-with-deps` is passed
unsatisfied_requirements -= set() if force or force_deps else {
req
for req in unsatisfied_requirements
for exs in existing_collections
if req.fqcn == exs.fqcn and meets_requirements(exs.ver, req.ver)
}
if not unsatisfied_requirements and not upgrade:
display.display(
'Nothing to do. All requested collections are already '
'installed. If you want to reinstall them, '
'consider using `--force`.'
)
return
# FIXME: This probably needs to be improved to
# FIXME: properly match differing src/type.
existing_non_requested_collections = {
coll for coll in existing_collections
if coll.fqcn not in requested_requirements_names
}
preferred_requirements = (
[] if force_deps
else existing_non_requested_collections if force
else existing_collections
)
preferred_collections = {
Candidate(coll.fqcn, coll.ver, coll.src, coll.type)
for coll in preferred_requirements
}
with _display_progress("Process install dependency map"):
dependency_map = _resolve_depenency_map(
collections,
galaxy_apis=apis,
preferred_candidates=preferred_collections,
concrete_artifacts_manager=artifacts_manager,
no_deps=no_deps,
allow_pre_release=allow_pre_release,
upgrade=upgrade,
)
with _display_progress("Starting collection install process"):
for fqcn, concrete_coll_pin in dependency_map.items():
if concrete_coll_pin.is_virtual:
display.vvvv(
"'{coll!s}' is virtual, skipping.".
format(coll=to_text(concrete_coll_pin)),
)
continue
if concrete_coll_pin in preferred_collections:
display.display(
"'{coll!s}' is already installed, skipping.".
format(coll=to_text(concrete_coll_pin)),
)
continue
try:
install(concrete_coll_pin, output_path, artifacts_manager)
except AnsibleError as err:
if ignore_errors:
display.warning(
'Failed to install collection {coll!s} but skipping '
'due to --ignore-errors being set. Error: {error!s}'.
format(
coll=to_text(concrete_coll_pin),
error=to_text(err),
)
)
else:
raise
# NOTE: imported in ansible.cli.galaxy
def validate_collection_name(name): # type: (str) -> str
"""Validates the collection name as an input from the user or a requirements file fit the requirements.
:param name: The input name with optional range specifier split by ':'.
:return: The input value, required for argparse validation.
"""
collection, dummy, dummy = name.partition(':')
if AnsibleCollectionRef.is_valid_collection_name(collection):
return name
raise AnsibleError("Invalid collection name '%s', "
"name must be in the format <namespace>.<collection>. \n"
"Please make sure namespace and collection name contains "
"characters from [a-zA-Z0-9_] only." % name)
# NOTE: imported in ansible.cli.galaxy
def validate_collection_path(collection_path): # type: (str) -> str
"""Ensure a given path ends with 'ansible_collections'
:param collection_path: The path that should end in 'ansible_collections'
:return: collection_path ending in 'ansible_collections' if it does not already.
"""
if os.path.split(collection_path)[1] != 'ansible_collections':
return os.path.join(collection_path, 'ansible_collections')
return collection_path
def verify_collections(
collections, # type: Iterable[Requirement]
search_paths, # type: Iterable[str]
apis, # type: Iterable[GalaxyAPI]
ignore_errors, # type: bool
local_verify_only, # type: bool
artifacts_manager, # type: ConcreteArtifactsManager
): # type: (...) -> List[CollectionVerifyResult]
r"""Verify the integrity of locally installed collections.
:param collections: The collections to check.
:param search_paths: Locations for the local collection lookup.
:param apis: A list of GalaxyAPIs to query when searching for a collection.
:param ignore_errors: Whether to ignore any errors when verifying the collection.
:param local_verify_only: When True, skip downloads and only verify local manifests.
:param artifacts_manager: Artifacts manager.
:return: list of CollectionVerifyResult objects describing the results of each collection verification
"""
results = [] # type: List[CollectionVerifyResult]
api_proxy = MultiGalaxyAPIProxy(apis, artifacts_manager)
with _display_progress():
for collection in collections:
try:
if collection.is_concrete_artifact:
raise AnsibleError(
message="'{coll_type!s}' type is not supported. "
'The format namespace.name is expected.'.
format(coll_type=collection.type)
)
# NOTE: Verify local collection exists before
# NOTE: downloading its source artifact from
# NOTE: a galaxy server.
default_err = 'Collection %s is not installed in any of the collection paths.' % collection.fqcn
for search_path in search_paths:
b_search_path = to_bytes(
os.path.join(
search_path,
collection.namespace, collection.name,
),
errors='surrogate_or_strict',
)
if not os.path.isdir(b_search_path):
continue
if not _is_installed_collection_dir(b_search_path):
default_err = (
"Collection %s does not have a MANIFEST.json. "
"A MANIFEST.json is expected if the collection has been built "
"and installed via ansible-galaxy" % collection.fqcn
)
continue
local_collection = Candidate.from_dir_path(
b_search_path, artifacts_manager,
)
break
else:
raise AnsibleError(message=default_err)
if local_verify_only:
remote_collection = None
else:
remote_collection = Candidate(
collection.fqcn,
collection.ver if collection.ver != '*'
else local_collection.ver,
None, 'galaxy',
)
# Download collection on a galaxy server for comparison
try:
# NOTE: Trigger the lookup. If found, it'll cache
# NOTE: download URL and token in artifact manager.
api_proxy.get_collection_version_metadata(
remote_collection,
)
except AnsibleError as e: # FIXME: does this actually emit any errors?
# FIXME: extract the actual message and adjust this:
expected_error_msg = (
'Failed to find collection {coll.fqcn!s}:{coll.ver!s}'.
format(coll=collection)
)
if e.message == expected_error_msg:
raise AnsibleError(
'Failed to find remote collection '
"'{coll!s}' on any of the galaxy servers".
format(coll=collection)
)
raise
result = verify_local_collection(
local_collection, remote_collection,
artifacts_manager,
)
results.append(result)
except AnsibleError as err:
if ignore_errors:
display.warning(
"Failed to verify collection '{coll!s}' but skipping "
'due to --ignore-errors being set. '
'Error: {err!s}'.
format(coll=collection, err=to_text(err)),
)
else:
raise
return results
@contextmanager
def _tempdir():
b_temp_path = tempfile.mkdtemp(dir=to_bytes(C.DEFAULT_LOCAL_TMP, errors='surrogate_or_strict'))
try:
yield b_temp_path
finally:
shutil.rmtree(b_temp_path)
@contextmanager
def _display_progress(msg=None):
config_display = C.GALAXY_DISPLAY_PROGRESS
display_wheel = sys.stdout.isatty() if config_display is None else config_display
global display
if msg is not None:
display.display(msg)
if not display_wheel:
yield
return
def progress(display_queue, actual_display):
actual_display.debug("Starting display_progress display thread")
t = threading.current_thread()
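# Spin a simple progress wheel on the terminal while draining any display
# calls queued by the main thread; stop once the main thread sets the
# `finish` attribute on this thread object.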
while True:
for c in "|/-\\":
actual_display.display(c + "\b", newline=False)
time.sleep(0.1)
# Display a message from the main thread
while True:
try:
method, args, kwargs = display_queue.get(block=False, timeout=0.1)
except queue.Empty:
break
else:
func = getattr(actual_display, method)
func(*args, **kwargs)
if getattr(t, "finish", False):
actual_display.debug("Received end signal for display_progress display thread")
return
class DisplayThread(object):
def __init__(self, display_queue):
self.display_queue = display_queue
def __getattr__(self, attr):
def call_display(*args, **kwargs):
self.display_queue.put((attr, args, kwargs))
return call_display
# Temporarily override the global display class with our own, which adds the calls to a queue for the thread to call.
old_display = display
try:
display_queue = queue.Queue()
display = DisplayThread(display_queue)
t = threading.Thread(target=progress, args=(display_queue, old_display))
t.daemon = True
t.start()
try:
yield
finally:
t.finish = True
t.join()
except Exception:
# The exception is re-raised so we can be sure the thread is finished and not using the display anymore
raise
finally:
display = old_display
def _verify_file_hash(b_path, filename, expected_hash, error_queue):
b_file_path = to_bytes(os.path.join(to_text(b_path), filename), errors='surrogate_or_strict')
if not os.path.isfile(b_file_path):
actual_hash = None
else:
with open(b_file_path, mode='rb') as file_object:
actual_hash = _consume_file(file_object)
if expected_hash != actual_hash:
error_queue.append(ModifiedContent(filename=filename, expected=expected_hash, installed=actual_hash))
def _build_files_manifest(b_collection_path, namespace, name, ignore_patterns):
# type: (bytes, str, str, List[str]) -> FilesManifestType
# We always ignore .pyc and .retry files as well as some well known version control directories. The ignore
# patterns can be extended by the build_ignore key in galaxy.yml
b_ignore_patterns = [
b'MANIFEST.json',
b'FILES.json',
b'galaxy.yml',
b'galaxy.yaml',
b'.git',
b'*.pyc',
b'*.retry',
b'tests/output', # Ignore ansible-test result output directory.
to_bytes('{0}-{1}-*.tar.gz'.format(namespace, name)), # Ignores previously built artifacts in the root dir.
]
b_ignore_patterns += [to_bytes(p) for p in ignore_patterns]
b_ignore_dirs = frozenset([b'CVS', b'.bzr', b'.hg', b'.git', b'.svn', b'__pycache__', b'.tox'])
entry_template = {
'name': None,
'ftype': None,
'chksum_type': None,
'chksum_sha256': None,
'format': MANIFEST_FORMAT
}
manifest = {
'files': [
{
'name': '.',
'ftype': 'dir',
'chksum_type': None,
'chksum_sha256': None,
'format': MANIFEST_FORMAT,
},
],
'format': MANIFEST_FORMAT,
} # type: FilesManifestType
def _walk(b_path, b_top_level_dir):
for b_item in os.listdir(b_path):
b_abs_path = os.path.join(b_path, b_item)
b_rel_base_dir = b'' if b_path == b_top_level_dir else b_path[len(b_top_level_dir) + 1:]
b_rel_path = os.path.join(b_rel_base_dir, b_item)
rel_path = to_text(b_rel_path, errors='surrogate_or_strict')
if os.path.isdir(b_abs_path):
if any(b_item == b_path for b_path in b_ignore_dirs) or \
any(fnmatch.fnmatch(b_rel_path, b_pattern) for b_pattern in b_ignore_patterns):
display.vvv("Skipping '%s' for collection build" % to_text(b_abs_path))
continue
if os.path.islink(b_abs_path):
b_link_target = os.path.realpath(b_abs_path)
if not _is_child_path(b_link_target, b_top_level_dir):
display.warning("Skipping '%s' as it is a symbolic link to a directory outside the collection"
% to_text(b_abs_path))
continue
manifest_entry = entry_template.copy()
manifest_entry['name'] = rel_path
manifest_entry['ftype'] = 'dir'
manifest['files'].append(manifest_entry)
if not os.path.islink(b_abs_path):
_walk(b_abs_path, b_top_level_dir)
else:
if any(fnmatch.fnmatch(b_rel_path, b_pattern) for b_pattern in b_ignore_patterns):
display.vvv("Skipping '%s' for collection build" % to_text(b_abs_path))
continue
# Handling of file symlinks occurs in _build_collection_tar; the manifest entry for a symlink is the same as
# for a normal file.
manifest_entry = entry_template.copy()
manifest_entry['name'] = rel_path
manifest_entry['ftype'] = 'file'
manifest_entry['chksum_type'] = 'sha256'
manifest_entry['chksum_sha256'] = secure_hash(b_abs_path, hash_func=sha256)
manifest['files'].append(manifest_entry)
_walk(b_collection_path, b_collection_path)
return manifest
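For orientation, the structure returned here is what later gets serialized as the collection's FILES.json. An invented example for a tiny collection (`format` is `MANIFEST_FORMAT`, the integer 1 at the time of writing; names and hash are made up):

```python
files_manifest = {
    'files': [
        {'name': '.', 'ftype': 'dir', 'chksum_type': None,
         'chksum_sha256': None, 'format': 1},
        {'name': 'plugins', 'ftype': 'dir', 'chksum_type': None,
         'chksum_sha256': None, 'format': 1},
        {'name': 'plugins/modules/ping.py', 'ftype': 'file',
         'chksum_type': 'sha256', 'chksum_sha256': '9f86d081884c7d65...', 'format': 1},
    ],
    'format': 1,
}
```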
# FIXME: accept a dict produced from `galaxy.yml` instead of separate args
def _build_manifest(namespace, name, version, authors, readme, tags, description, license_file,
dependencies, repository, documentation, homepage, issues, **kwargs):
manifest = {
'collection_info': {
'namespace': namespace,
'name': name,
'version': version,
'authors': authors,
'readme': readme,
'tags': tags,
'description': description,
'license': kwargs['license'],
'license_file': license_file or None, # Handle galaxy.yml having an empty string (None)
'dependencies': dependencies,
'repository': repository,
'documentation': documentation,
'homepage': homepage,
'issues': issues,
},
'file_manifest_file': {
'name': 'FILES.json',
'ftype': 'file',
'chksum_type': 'sha256',
'chksum_sha256': None, # Filled out in _build_collection_tar
'format': MANIFEST_FORMAT
},
'format': MANIFEST_FORMAT,
}
return manifest
def _build_collection_tar(
b_collection_path, # type: bytes
b_tar_path, # type: bytes
collection_manifest, # type: CollectionManifestType
file_manifest, # type: FilesManifestType
): # type: (...) -> Text
"""Build a tar.gz collection artifact from the manifest data."""
files_manifest_json = to_bytes(json.dumps(file_manifest, indent=True), errors='surrogate_or_strict')
collection_manifest['file_manifest_file']['chksum_sha256'] = secure_hash_s(files_manifest_json, hash_func=sha256)
collection_manifest_json = to_bytes(json.dumps(collection_manifest, indent=True), errors='surrogate_or_strict')
with _tempdir() as b_temp_path:
b_tar_filepath = os.path.join(b_temp_path, os.path.basename(b_tar_path))
with tarfile.open(b_tar_filepath, mode='w:gz') as tar_file:
# Add the MANIFEST.json and FILES.json file to the archive
for name, b in [(MANIFEST_FILENAME, collection_manifest_json), ('FILES.json', files_manifest_json)]:
b_io = BytesIO(b)
tar_info = tarfile.TarInfo(name)
tar_info.size = len(b)
tar_info.mtime = int(time.time())
tar_info.mode = 0o0644
tar_file.addfile(tarinfo=tar_info, fileobj=b_io)
for file_info in file_manifest['files']: # type: ignore[union-attr]
if file_info['name'] == '.':
continue
# arcname expects a native string, cannot be bytes
filename = to_native(file_info['name'], errors='surrogate_or_strict')
b_src_path = os.path.join(b_collection_path, to_bytes(filename, errors='surrogate_or_strict'))
def reset_stat(tarinfo):
if tarinfo.type != tarfile.SYMTYPE:
existing_is_exec = tarinfo.mode & stat.S_IXUSR
tarinfo.mode = 0o0755 if existing_is_exec or tarinfo.isdir() else 0o0644
tarinfo.uid = tarinfo.gid = 0
tarinfo.uname = tarinfo.gname = ''
return tarinfo
if os.path.islink(b_src_path):
b_link_target = os.path.realpath(b_src_path)
if _is_child_path(b_link_target, b_collection_path):
b_rel_path = os.path.relpath(b_link_target, start=os.path.dirname(b_src_path))
tar_info = tarfile.TarInfo(filename)
tar_info.type = tarfile.SYMTYPE
tar_info.linkname = to_native(b_rel_path, errors='surrogate_or_strict')
tar_info = reset_stat(tar_info)
tar_file.addfile(tarinfo=tar_info)
continue
# Dealing with a normal file, just add it by name.
tar_file.add(
to_native(os.path.realpath(b_src_path)),
arcname=filename,
recursive=False,
filter=reset_stat,
)
shutil.copy(to_native(b_tar_filepath), to_native(b_tar_path))
collection_name = "%s.%s" % (collection_manifest['collection_info']['namespace'],
collection_manifest['collection_info']['name'])
tar_path = to_text(b_tar_path)
display.display(u'Created collection for %s at %s' % (collection_name, tar_path))
return tar_path
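The `reset_stat` filter is what makes the artifact reproducible across build machines: ownership and group names are dropped and the mode is reduced to "directory/executable or not". The same idea in isolation (assumes a `README.md` exists in the working directory):

```python
import stat
import tarfile

def normalize(tarinfo):
    # Keep only the executable bit from the original mode; drop owner info.
    if tarinfo.type != tarfile.SYMTYPE:
        is_exec = tarinfo.mode & stat.S_IXUSR
        tarinfo.mode = 0o755 if is_exec or tarinfo.isdir() else 0o644
    tarinfo.uid = tarinfo.gid = 0
    tarinfo.uname = tarinfo.gname = ''
    return tarinfo

with tarfile.open('example.tar.gz', 'w:gz') as tf:
    tf.add('README.md', arcname='README.md', filter=normalize)
```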
def _build_collection_dir(b_collection_path, b_collection_output, collection_manifest, file_manifest):
"""Build a collection directory from the manifest data.
This should follow the same pattern as _build_collection_tar.
"""
os.makedirs(b_collection_output, mode=0o0755)
files_manifest_json = to_bytes(json.dumps(file_manifest, indent=True), errors='surrogate_or_strict')
collection_manifest['file_manifest_file']['chksum_sha256'] = secure_hash_s(files_manifest_json, hash_func=sha256)
collection_manifest_json = to_bytes(json.dumps(collection_manifest, indent=True), errors='surrogate_or_strict')
# Write contents to the files
for name, b in [(MANIFEST_FILENAME, collection_manifest_json), ('FILES.json', files_manifest_json)]:
b_path = os.path.join(b_collection_output, to_bytes(name, errors='surrogate_or_strict'))
with open(b_path, 'wb') as file_obj, BytesIO(b) as b_io:
shutil.copyfileobj(b_io, file_obj)
os.chmod(b_path, 0o0644)
base_directories = []
for file_info in sorted(file_manifest['files'], key=lambda x: x['name']):
if file_info['name'] == '.':
continue
src_file = os.path.join(b_collection_path, to_bytes(file_info['name'], errors='surrogate_or_strict'))
dest_file = os.path.join(b_collection_output, to_bytes(file_info['name'], errors='surrogate_or_strict'))
existing_is_exec = os.stat(src_file).st_mode & stat.S_IXUSR
mode = 0o0755 if existing_is_exec else 0o0644
if os.path.isdir(src_file):
mode = 0o0755
base_directories.append(src_file)
os.mkdir(dest_file, mode)
else:
shutil.copyfile(src_file, dest_file)
os.chmod(dest_file, mode)
collection_output = to_text(b_collection_output)
return collection_output
def find_existing_collections(path, artifacts_manager):
"""Locate all collections under a given path.
:param path: Collection dirs layout search path.
:param artifacts_manager: Artifacts manager.
"""
b_path = to_bytes(path, errors='surrogate_or_strict')
# FIXME: consider using `glob.glob()` to simplify looping
for b_namespace in os.listdir(b_path):
b_namespace_path = os.path.join(b_path, b_namespace)
if os.path.isfile(b_namespace_path):
continue
# FIXME: consider feeding b_namespace_path to Candidate.from_dir_path to get subdirs automatically
for b_collection in os.listdir(b_namespace_path):
b_collection_path = os.path.join(b_namespace_path, b_collection)
if not os.path.isdir(b_collection_path):
continue
try:
req = Candidate.from_dir_path_as_unknown(
b_collection_path,
artifacts_manager,
)
except ValueError as val_err:
raise_from(AnsibleError(val_err), val_err)
display.vvv(
u"Found installed collection {coll!s} at '{path!s}'".
format(coll=to_text(req), path=to_text(req.src))
)
yield req
def install(collection, path, artifacts_manager): # FIXME: mv to dataclasses?
# type: (Candidate, str, ConcreteArtifactsManager) -> None
"""Install a collection under a given path.
:param collection: Collection to be installed.
:param path: Collection dirs layout path.
:param artifacts_manager: Artifacts manager.
"""
b_artifact_path = (
artifacts_manager.get_artifact_path if collection.is_concrete_artifact
else artifacts_manager.get_galaxy_artifact_path
)(collection)
collection_path = os.path.join(path, collection.namespace, collection.name)
b_collection_path = to_bytes(collection_path, errors='surrogate_or_strict')
display.display(
u"Installing '{coll!s}' to '{path!s}'".
format(coll=to_text(collection), path=collection_path),
)
if os.path.exists(b_collection_path):
shutil.rmtree(b_collection_path)
if collection.is_dir:
install_src(collection, b_artifact_path, b_collection_path, artifacts_manager)
else:
install_artifact(b_artifact_path, b_collection_path, artifacts_manager._b_working_directory)
display.display(
'{coll!s} was installed successfully'.
format(coll=to_text(collection)),
)
def install_artifact(b_coll_targz_path, b_collection_path, b_temp_path):
"""Install a collection from tarball under a given path.
:param b_coll_targz_path: Collection tarball to be installed.
:param b_collection_path: Collection dirs layout path.
:param b_temp_path: Temporary dir path.
"""
try:
with tarfile.open(b_coll_targz_path, mode='r') as collection_tar:
files_member_obj = collection_tar.getmember('FILES.json')
with _tarfile_extract(collection_tar, files_member_obj) as (dummy, files_obj):
files = json.loads(to_text(files_obj.read(), errors='surrogate_or_strict'))
_extract_tar_file(collection_tar, MANIFEST_FILENAME, b_collection_path, b_temp_path)
_extract_tar_file(collection_tar, 'FILES.json', b_collection_path, b_temp_path)
for file_info in files['files']:
file_name = file_info['name']
if file_name == '.':
continue
if file_info['ftype'] == 'file':
_extract_tar_file(collection_tar, file_name, b_collection_path, b_temp_path,
expected_hash=file_info['chksum_sha256'])
else:
_extract_tar_dir(collection_tar, file_name, b_collection_path)
except Exception:
# Ensure we don't leave the dir behind in case of a failure.
shutil.rmtree(b_collection_path)
b_namespace_path = os.path.dirname(b_collection_path)
if not os.listdir(b_namespace_path):
os.rmdir(b_namespace_path)
raise
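The `except Exception` block above is a "clean up the partial install, then re-raise" guard; the namespace directory is only removed when the failed install left it empty, so sibling collections are untouched. The pattern in isolation (a sketch; `do_install` is a hypothetical callable):

```python
import os
import shutil

def install_with_cleanup(dest, do_install):
    try:
        do_install(dest)  # any callable that populates `dest`
    except Exception:
        # Remove whatever was partially written, then re-raise the original error.
        shutil.rmtree(dest, ignore_errors=True)
        parent = os.path.dirname(dest)
        if os.path.isdir(parent) and not os.listdir(parent):
            os.rmdir(parent)
        raise
```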
def install_src(
collection,
b_collection_path, b_collection_output_path,
artifacts_manager,
):
r"""Install the collection from source control into given dir.
Generates the Ansible collection artifact data from a galaxy.yml and
installs the artifact to a directory.
This should follow the same pattern as build_collection, but instead
of creating an artifact, install it.
:param collection: Collection to be installed.
:param b_collection_path: Collection dirs layout path.
:param b_collection_output_path: The installation directory for the \
collection artifact.
:param artifacts_manager: Artifacts manager.
:raises AnsibleError: If no collection metadata found.
"""
collection_meta = artifacts_manager.get_direct_collection_meta(collection)
if 'build_ignore' not in collection_meta: # installed collection, not src
# FIXME: optimize this? use a different process? copy instead of build?
collection_meta['build_ignore'] = []
collection_manifest = _build_manifest(**collection_meta)
file_manifest = _build_files_manifest(
b_collection_path,
collection_meta['namespace'], collection_meta['name'],
collection_meta['build_ignore'],
)
collection_output_path = _build_collection_dir(
b_collection_path, b_collection_output_path,
collection_manifest, file_manifest,
)
display.display(
'Created collection for {coll!s} at {path!s}'.
format(coll=collection, path=collection_output_path)
)
def _extract_tar_dir(tar, dirname, b_dest):
""" Extracts a directory from a collection tar. """
member_names = [to_native(dirname, errors='surrogate_or_strict')]
# Create list of members with and without trailing separator
if not member_names[-1].endswith(os.path.sep):
member_names.append(member_names[-1] + os.path.sep)
# Try all of the member names and stop on the first one we are able to successfully retrieve
for member in member_names:
try:
tar_member = tar.getmember(member)
except KeyError:
continue
break
else:
# If we still can't find the member, raise a nice error.
raise AnsibleError("Unable to extract '%s' from collection" % to_native(member, errors='surrogate_or_strict'))
b_dir_path = os.path.join(b_dest, to_bytes(dirname, errors='surrogate_or_strict'))
b_parent_path = os.path.dirname(b_dir_path)
try:
os.makedirs(b_parent_path, mode=0o0755)
except OSError as e:
if e.errno != errno.EEXIST:
raise
if tar_member.type == tarfile.SYMTYPE:
b_link_path = to_bytes(tar_member.linkname, errors='surrogate_or_strict')
if not _is_child_path(b_link_path, b_dest, link_name=b_dir_path):
raise AnsibleError("Cannot extract symlink '%s' in collection: path points to location outside of "
"collection '%s'" % (to_native(dirname), b_link_path))
os.symlink(b_link_path, b_dir_path)
else:
if not os.path.isdir(b_dir_path):
os.mkdir(b_dir_path, 0o0755)
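The dual lookup at the top of `_extract_tar_dir` exists because `tarfile` indexes members exactly as written, and directory entries may or may not carry a trailing slash. A quick illustration (assumes an `example.tar.gz` containing a `docs` directory entry):

```python
import tarfile

with tarfile.open('example.tar.gz') as tar:
    for name in ('docs', 'docs/'):
        try:
            member = tar.getmember(name)
        except KeyError:
            continue
        print('found directory entry as %r' % member.name)
        break
    else:
        print('no directory entry under either name')
```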
def _extract_tar_file(tar, filename, b_dest, b_temp_path, expected_hash=None):
""" Extracts a file from a collection tar. """
with _get_tar_file_member(tar, filename) as (tar_member, tar_obj):
if tar_member.type == tarfile.SYMTYPE:
actual_hash = _consume_file(tar_obj)
else:
with tempfile.NamedTemporaryFile(dir=b_temp_path, delete=False) as tmpfile_obj:
actual_hash = _consume_file(tar_obj, tmpfile_obj)
if expected_hash and actual_hash != expected_hash:
raise AnsibleError("Checksum mismatch for '%s' inside collection at '%s'"
% (to_native(filename, errors='surrogate_or_strict'), to_native(tar.name)))
b_dest_filepath = os.path.abspath(os.path.join(b_dest, to_bytes(filename, errors='surrogate_or_strict')))
b_parent_dir = os.path.dirname(b_dest_filepath)
if not _is_child_path(b_parent_dir, b_dest):
raise AnsibleError("Cannot extract tar entry '%s' as it will be placed outside the collection directory"
% to_native(filename, errors='surrogate_or_strict'))
if not os.path.exists(b_parent_dir):
# Seems like Galaxy does not validate if all file entries have a corresponding dir ftype entry. This check
# makes sure we create the parent directory even if it wasn't set in the metadata.
os.makedirs(b_parent_dir, mode=0o0755)
if tar_member.type == tarfile.SYMTYPE:
b_link_path = to_bytes(tar_member.linkname, errors='surrogate_or_strict')
if not _is_child_path(b_link_path, b_dest, link_name=b_dest_filepath):
raise AnsibleError("Cannot extract symlink '%s' in collection: path points to location outside of "
"collection '%s'" % (to_native(filename), b_link_path))
os.symlink(b_link_path, b_dest_filepath)
else:
shutil.move(to_bytes(tmpfile_obj.name, errors='surrogate_or_strict'), b_dest_filepath)
# Default to rw-r--r-- and only add execute if the tar file has execute.
tar_member = tar.getmember(to_native(filename, errors='surrogate_or_strict'))
new_mode = 0o644
if stat.S_IMODE(tar_member.mode) & stat.S_IXUSR:
new_mode |= 0o0111
os.chmod(b_dest_filepath, new_mode)
def _get_tar_file_member(tar, filename):
n_filename = to_native(filename, errors='surrogate_or_strict')
try:
member = tar.getmember(n_filename)
except KeyError:
raise AnsibleError("Collection tar at '%s' does not contain the expected file '%s'." % (
to_native(tar.name),
n_filename))
return _tarfile_extract(tar, member)
def _get_json_from_tar_file(b_path, filename):
file_contents = ''
with tarfile.open(b_path, mode='r') as collection_tar:
with _get_tar_file_member(collection_tar, filename) as (dummy, tar_obj):
bufsize = 65536
data = tar_obj.read(bufsize)
while data:
file_contents += to_text(data)
data = tar_obj.read(bufsize)
return json.loads(file_contents)
def _get_tar_file_hash(b_path, filename):
with tarfile.open(b_path, mode='r') as collection_tar:
with _get_tar_file_member(collection_tar, filename) as (dummy, tar_obj):
return _consume_file(tar_obj)
def _get_file_hash(b_path, filename): # type: (bytes, str) -> str
filepath = os.path.join(b_path, to_bytes(filename, errors='surrogate_or_strict'))
with open(filepath, 'rb') as fp:
return _consume_file(fp)
def _is_child_path(path, parent_path, link_name=None):
""" Checks that path is a path within the parent_path specified. """
b_path = to_bytes(path, errors='surrogate_or_strict')
if link_name and not os.path.isabs(b_path):
# If link_name is specified, path is the source of the link and we need to resolve the absolute path.
b_link_dir = os.path.dirname(to_bytes(link_name, errors='surrogate_or_strict'))
b_path = os.path.abspath(os.path.join(b_link_dir, b_path))
b_parent_path = to_bytes(parent_path, errors='surrogate_or_strict')
return b_path == b_parent_path or b_path.startswith(b_parent_path + to_bytes(os.path.sep))
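This predicate is the guard that stops symlinks or crafted paths inside a hostile artifact from escaping the install directory. Its essence, in isolation (paths invented; note the `os.path.sep` suffix, which is what rejects sibling directories that merely share a prefix):

```python
import os

def is_child_path_sketch(path, parent):
    return path == parent or path.startswith(parent + os.path.sep)

print(is_child_path_sketch('/colls/ns/coll/roles', '/colls/ns/coll'))  # True
print(is_child_path_sketch('/colls/ns/coll-evil', '/colls/ns/coll'))   # False
```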
def _resolve_depenency_map(
requested_requirements, # type: Iterable[Requirement]
galaxy_apis, # type: Iterable[GalaxyAPI]
concrete_artifacts_manager, # type: ConcreteArtifactsManager
preferred_candidates, # type: Optional[Iterable[Candidate]]
no_deps, # type: bool
allow_pre_release, # type: bool
upgrade, # type: bool
): # type: (...) -> Dict[str, Candidate]
"""Return the resolved dependency map."""
collection_dep_resolver = build_collection_dependency_resolver(
galaxy_apis=galaxy_apis,
concrete_artifacts_manager=concrete_artifacts_manager,
user_requirements=requested_requirements,
preferred_candidates=preferred_candidates,
with_deps=not no_deps,
with_pre_releases=allow_pre_release,
upgrade=upgrade,
)
try:
return collection_dep_resolver.resolve(
requested_requirements,
max_rounds=2000000, # NOTE: same constant pip uses
).mapping
except CollectionDependencyResolutionImpossible as dep_exc:
conflict_causes = (
'* {req.fqcn!s}:{req.ver!s} ({dep_origin!s})'.format(
req=req_inf.requirement,
dep_origin='direct request'
if req_inf.parent is None
else 'dependency of {parent!s}'.
format(parent=req_inf.parent),
)
for req_inf in dep_exc.causes
)
error_msg_lines = chain(
(
'Failed to resolve the requested '
'dependencies map. Could not satisfy the following '
'requirements:',
),
conflict_causes,
)
raise raise_from( # NOTE: Leading "raise" is a hack for mypy bug #9717
AnsibleError('\n'.join(error_msg_lines)),
dep_exc,
)
except CollectionDependencyInconsistentCandidate as dep_exc:
parents = [
"%s.%s:%s" % (p.namespace, p.name, p.ver)
for p in dep_exc.criterion.iter_parent()
if p is not None
]
error_msg_lines = [
(
'Failed to resolve the requested dependencies map. '
'Got the candidate {req.fqcn!s}:{req.ver!s} ({dep_origin!s}) '
'which didn\'t satisfy all of the following requirements:'.
format(
req=dep_exc.candidate,
dep_origin='direct request'
if not parents else 'dependency of {parent!s}'.
format(parent=', '.join(parents))
)
)
]
for req in dep_exc.criterion.iter_requirement():
error_msg_lines.append(
'* {req.fqcn!s}:{req.ver!s}'.format(req=req)
)
raise raise_from( # NOTE: Leading "raise" is a hack for mypy bug #9717
AnsibleError('\n'.join(error_msg_lines)),
dep_exc,
)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,425 |
AttributeError installing ansible-galaxy collection from git
|
### Summary
Ansible throws an error just after processing the install dependency map.
```
Process install dependency map
ERROR! Unexpected Exception, this is probably a bug: 'float' object has no attribute 'startswith'
```
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
[cloud-user@test-centos ~]$ ansible --version
ansible [core 2.12.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/cloud-user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.9/site-packages/ansible
ansible collection location = /home/cloud-user/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.9.8 (main, Nov 8 2021, 00:00:00) [GCC 11.2.1 20211019 (Red Hat 11.2.1-6)]
jinja version = 2.11.3
libyaml = True
```
### Configuration
```console
[cloud-user@test-centos ~]$ ansible-config dump --only-changed
[cloud-user@test-centos ~]$
```
### OS / Environment
CentOS Stream 9
### Steps to Reproduce
```yaml (paste below)
galaxy.yml:
namespace: my
name: cool-collection
version: 1.0
authors: David *****
readme: README.md
```
Install a collection from a git repo with this galaxy.yml:
`ansible-galaxy collection install git+https://.../my_project.git,ISSUE-1`
### Expected Results
The collection is installed
### Actual Results
```console
[cloud-user@test-centos ~]$ ansible-galaxy -vvvv collection install git+https://....git,ISSUE-1
ansible-galaxy [core 2.12.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/cloud-user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.9/site-packages/ansible
ansible collection location = /home/cloud-user/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible-galaxy
python version = 3.9.8 (main, Nov 8 2021, 00:00:00) [GCC 11.2.1 20211019 (Red Hat 11.2.1-6)]
jinja version = 2.11.3
libyaml = True
Using /etc/ansible/ansible.cfg as config file
Cloning into '/home/cloud-user/.ansible/tmp/ansible-local-12630phroh0cw/tmpelcczl4o/netbox-addp5e_slda'...
Username for 'https://...': my_user
Password for 'https://my_user@my_gitsrv':
remote: Enumerating objects: 425, done.
remote: Counting objects: 100% (61/61), done.
remote: Compressing objects: 100% (29/29), done.
remote: Total 425 (delta 18), reused 52 (delta 14), pack-reused 364
Receiving objects: 100% (425/425), 47.25 KiB | 11.81 MiB/s, done.
Resolving deltas: 100% (178/178), done.
Branch 'ISSUE-1' set up to track remote branch 'ISSUE-1' from 'origin'.
Switched to a new branch 'ISSUE-1'
Starting galaxy collection install process
Process install dependency map
ERROR! Unexpected Exception, this is probably a bug: 'float' object has no attribute 'startswith'
the full traceback was:
Traceback (most recent call last):
File "/usr/bin/ansible-galaxy", line 128, in <module>
exit_code = cli.run()
File "/usr/lib/python3.9/site-packages/ansible/cli/galaxy.py", line 567, in run
return context.CLIARGS['func']()
File "/usr/lib/python3.9/site-packages/ansible/cli/galaxy.py", line 86, in method_wrapper
return wrapped_method(*args, **kwargs)
File "/usr/lib/python3.9/site-packages/ansible/cli/galaxy.py", line 1201, in execute_install
self._execute_install_collection(
File "/usr/lib/python3.9/site-packages/ansible/cli/galaxy.py", line 1228, in _execute_install_collection
install_collections(
File "/usr/lib/python3.9/site-packages/ansible/galaxy/collection/__init__.py", line 513, in install_collections
dependency_map = _resolve_depenency_map(
File "/usr/lib/python3.9/site-packages/ansible/galaxy/collection/__init__.py", line 1327, in _resolve_depenency_map
return collection_dep_resolver.resolve(
File "/usr/lib/python3.9/site-packages/resolvelib/resolvers.py", line 453, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
File "/usr/lib/python3.9/site-packages/resolvelib/resolvers.py", line 347, in resolve
failure_causes = self._attempt_to_pin_criterion(name, criterion)
File "/usr/lib/python3.9/site-packages/resolvelib/resolvers.py", line 216, in _attempt_to_pin_criterion
satisfied = all(
File "/usr/lib/python3.9/site-packages/resolvelib/resolvers.py", line 217, in <genexpr>
self._p.is_satisfied_by(r, candidate)
File "/usr/lib/python3.9/site-packages/ansible/galaxy/dependency_resolution/providers.py", line 280, in is_satisfied_by
requirement.ver.startswith('<') or
AttributeError: 'float' object has no attribute 'startswith'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76425
|
https://github.com/ansible/ansible/pull/76579
|
6e57c8c0844c44a102dc081b93d5299cae9619e2
|
15ace5a854886cd75894e7214209e4b027034bdb
| 2021-12-01T22:41:01Z |
python
| 2022-01-06T19:05:20Z |
lib/ansible/galaxy/dependency_resolution/providers.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2020-2021, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""Requirement provider interfaces."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import functools
try:
from typing import TYPE_CHECKING
except ImportError:
TYPE_CHECKING = False
if TYPE_CHECKING:
from typing import Iterable, List, NamedTuple, Optional, Union
from ansible.galaxy.collection.concrete_artifact_manager import (
ConcreteArtifactsManager,
)
from ansible.galaxy.collection.galaxy_api_proxy import MultiGalaxyAPIProxy
from ansible.galaxy.dependency_resolution.dataclasses import (
Candidate,
Requirement,
)
from ansible.galaxy.dependency_resolution.versioning import (
is_pre_release,
meets_requirements,
)
from ansible.utils.version import SemanticVersion
from resolvelib import AbstractProvider
class CollectionDependencyProvider(AbstractProvider):
"""Delegate providing a requirement interface for the resolver."""
def __init__(
self, # type: CollectionDependencyProvider
apis, # type: MultiGalaxyAPIProxy
concrete_artifacts_manager=None, # type: ConcreteArtifactsManager
user_requirements=None, # type: Iterable[Requirement]
preferred_candidates=None, # type: Iterable[Candidate]
with_deps=True, # type: bool
with_pre_releases=False, # type: bool
upgrade=False, # type: bool
): # type: (...) -> None
r"""Initialize helper attributes.
:param apis: An instance of the multiple Galaxy APIs wrapper.
:param concrete_artifacts_manager: An instance of the caching \
concrete artifacts manager.
:param with_deps: A flag specifying whether the resolver \
should attempt to pull-in the deps of the \
requested requirements. On by default.
:param with_pre_releases: A flag specifying whether the \
resolver should include pre-releases. \
Off by default.
"""
self._api_proxy = apis
self._make_req_from_dict = functools.partial(
Requirement.from_requirement_dict,
art_mgr=concrete_artifacts_manager,
)
self._pinned_candidate_requests = set(
Candidate(req.fqcn, req.ver, req.src, req.type)
for req in (user_requirements or ())
if req.is_concrete_artifact or (
req.ver != '*' and
not req.ver.startswith(('<', '>', '!='))
)
)
self._preferred_candidates = set(preferred_candidates or ())
self._with_deps = with_deps
self._with_pre_releases = with_pre_releases
self._upgrade = upgrade
def _is_user_requested(self, candidate): # type: (Candidate) -> bool
"""Check if the candidate is requested by the user."""
if candidate in self._pinned_candidate_requests:
return True
if candidate.is_online_index_pointer and candidate.src is not None:
# NOTE: Candidate is a namedtuple, it has a source server set
# NOTE: to a specific GalaxyAPI instance or `None`. When the
# NOTE: user runs
# NOTE:
# NOTE: $ ansible-galaxy collection install ns.coll
# NOTE:
# NOTE: then it's saved in `self._pinned_candidate_requests`
# NOTE: as `('ns.coll', '*', None, 'galaxy')` but then
# NOTE: `self.find_matches()` calls `self.is_satisfied_by()`
# NOTE: with Candidate instances bound to each specific
# NOTE: server available, those look like
# NOTE: `('ns.coll', '*', GalaxyAPI(...), 'galaxy')` and
# NOTE: wouldn't match the user requests saved in
# NOTE: `self._pinned_candidate_requests`. This is why we
# NOTE: normalize the collection to have `src=None` and try
# NOTE: again.
# NOTE:
# NOTE: When the user request comes from `requirements.yml`
# NOTE: with the `source:` set, it'll match the first check
# NOTE: but it still can have entries with `src=None` so this
# NOTE: normalized check is still necessary.
return Candidate(
candidate.fqcn, candidate.ver, None, candidate.type,
) in self._pinned_candidate_requests
return False
def identify(self, requirement_or_candidate):
# type: (Union[Candidate, Requirement]) -> str
"""Given requirement or candidate, return an identifier for it.
This is used to identify a requirement or candidate, e.g.
whether two requirements should have their specifier parts
(version ranges or pins) merged, whether two candidates would
conflict with each other (because they have the same name but
different versions).
"""
return requirement_or_candidate.canonical_package_id
def get_preference(
self, # type: CollectionDependencyProvider
resolution, # type: Optional[Candidate]
candidates, # type: List[Candidate]
information, # type: List[NamedTuple]
): # type: (...) -> Union[float, int]
"""Return sort key function return value for given requirement.
This result should be based on preference that is defined as
"I think this requirement should be resolved first".
The lower the return value is, the more preferred this
group of arguments is.
:param resolution: Currently pinned candidate, or ``None``.
:param candidates: A list of possible candidates.
:param information: A list of requirement information.
Each ``information`` instance is a named tuple with two entries:
* ``requirement`` specifies a requirement contributing to
the current candidate list
* ``parent`` specifies the candidate that provides
(depends on) the requirement, or `None`
to indicate a root requirement.
The preference could depend on various issues, including
(not necessarily in this order):
* Is this package pinned in the current resolution result?
* How relaxed is the requirement? Stricter ones should
probably be worked on first? (I don't know, actually.)
* How many possibilities are there to satisfy this
requirement? Those with few left should likely be worked on
first, I guess?
* Are there any known conflicts for this requirement?
We should probably work on those with the most
known conflicts.
A sortable value should be returned (this will be used as the
`key` parameter of the built-in sorting function). The smaller
the value is, the more preferred this requirement is (i.e. the
sorting function is called with ``reverse=False``).
"""
if any(
candidate in self._preferred_candidates
for candidate in candidates
):
# NOTE: Prefer pre-installed candidates over newer versions
# NOTE: available from Galaxy or other sources.
return float('-inf')
return len(candidates)
def find_matches(self, requirements):
# type: (List[Requirement]) -> List[Candidate]
r"""Find all possible candidates satisfying given requirements.
This tries to get candidates based on the requirements' types.
For concrete requirements (SCM, dir, namespace dir, local or
remote archives), the one-and-only match is returned.
For a "named" requirement, Galaxy-compatible APIs are consulted
to find concrete candidates for this requirement. If there's a
pre-installed candidate, it's prepended in front of the others.
:param requirements: A collection of requirements which all of \
the returned candidates must match. \
All requirements are guaranteed to have \
the same identifier. \
The collection is never empty.
:returns: An iterable that orders candidates by preference, \
e.g. the most preferred candidate comes first.
"""
# FIXME: The first requirement may be a Git repo followed by
# FIXME: its cloned tmp dir. Using only the first one creates
# FIXME: loops that prevent any further dependency exploration.
# FIXME: We need to figure out how to prevent this.
first_req = requirements[0]
fqcn = first_req.fqcn
# The fqcn is guaranteed to be the same
coll_versions = self._api_proxy.get_collection_versions(first_req)
if first_req.is_concrete_artifact:
# FIXME: do we assume that all the following artifacts are also concrete?
# FIXME: does using fqcn==None cause us problems here?
return [
Candidate(fqcn, version, _none_src_server, first_req.type)
for version, _none_src_server in coll_versions
]
latest_matches = sorted(
{
candidate for candidate in (
Candidate(fqcn, version, src_server, 'galaxy')
for version, src_server in coll_versions
)
if all(self.is_satisfied_by(requirement, candidate) for requirement in requirements)
# FIXME
# if all(self.is_satisfied_by(requirement, candidate) and (
# requirement.src is None or # if this is true for some candidates but not all it will break key param - Nonetype can't be compared to str
# requirement.src == candidate.src
# ))
},
key=lambda candidate: (
SemanticVersion(candidate.ver), candidate.src,
),
reverse=True, # prefer newer versions over older ones
)
preinstalled_candidates = {
candidate for candidate in self._preferred_candidates
if candidate.fqcn == fqcn and
(
# check if an upgrade is necessary
all(self.is_satisfied_by(requirement, candidate) for requirement in requirements) and
(
not self._upgrade or
# check if an upgrade is preferred
all(SemanticVersion(latest.ver) <= SemanticVersion(candidate.ver) for latest in latest_matches)
)
)
}
return list(preinstalled_candidates) + latest_matches
def is_satisfied_by(self, requirement, candidate):
# type: (Requirement, Candidate) -> bool
r"""Whether the given requirement is satisfiable by a candidate.
:param requirement: A requirement that produced the `candidate`.
:param candidate: A pinned candidate supposedly matching the \
`requirement` specifier. It is guaranteed to \
have been generated from the `requirement`.
:returns: Indication whether the `candidate` is a viable \
solution to the `requirement`.
"""
# NOTE: Only allow pre-release candidates if we want pre-releases
# NOTE: or the req ver was an exact match with the pre-release
# NOTE: version. Another case where we'd want to allow
# NOTE: pre-releases is when there are several user requirements
# NOTE: and one of them is a pre-release that also matches a
# NOTE: transitive dependency of another requirement.
allow_pre_release = self._with_pre_releases or not (
requirement.ver == '*' or
requirement.ver.startswith('<') or
requirement.ver.startswith('>') or
requirement.ver.startswith('!=')
) or self._is_user_requested(candidate)
if is_pre_release(candidate.ver) and not allow_pre_release:
return False
# NOTE: This is a set of Pipenv-inspired optimizations. Ref:
# https://github.com/sarugaku/passa/blob/2ac00f1/src/passa/models/providers.py#L58-L74
if (
requirement.is_virtual or
candidate.is_virtual or
requirement.ver == '*'
):
return True
return meets_requirements(
version=candidate.ver,
requirements=requirement.ver,
)
def get_dependencies(self, candidate):
# type: (Candidate) -> List[Candidate]
r"""Get direct dependencies of a candidate.
:returns: A collection of requirements that `candidate` \
specifies as its dependencies.
"""
# FIXME: If there's several galaxy servers set, there may be a
# FIXME: situation when the metadata of the same collection
# FIXME: differs. So how do we resolve this case? Priority?
# FIXME: Taking into account a pinned hash? Exploding on
# FIXME: any differences?
# NOTE: The underlying implementation currently uses the first found
req_map = self._api_proxy.get_collection_dependencies(candidate)
# NOTE: This guard expression MUST perform an early exit only
# NOTE: after the `get_collection_dependencies()` call because
# NOTE: internally it populates the artifact URL of the candidate,
# NOTE: its SHA hash and the Galaxy API token. These are still
# NOTE: necessary with `--no-deps` because even with the disabled
# NOTE: dependency resolution the outer layer will still need to
# NOTE: know how to download and validate the artifact.
#
# NOTE: Virtual candidates should always return dependencies
# NOTE: because they are ephemeral and non-installable.
if not self._with_deps and not candidate.is_virtual:
return []
return [
self._make_req_from_dict({'name': dep_name, 'version': dep_req})
for dep_name, dep_req in req_map.items()
]
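The traceback in the issue above lands in `is_satisfied_by`: the expression `requirement.ver.startswith(...)` assumes a string, but an unquoted `version: 1.0` in galaxy.yml arrives here as a YAML float. A minimal demonstration of the root cause plus one defensive normalization (a sketch only — judging by the integration test files listed in this report, the merged fix in PR #76579 instead rejects such invalid versions when the requirement is constructed; assumes PyYAML is installed, as it is wherever Ansible runs):

```python
import yaml

meta = yaml.safe_load('version: 1.0')
print(type(meta['version']))  # <class 'float'> -- not a str, so .startswith() raises

# Defensive option: normalize before inspecting the specifier (sketch only).
ver = str(meta['version'])
print(not (ver == '*' or ver.startswith(('<', '>', '!='))))  # True
```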
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,425 |
AttributeError installing ansible-galaxy collection from git
|
|
https://github.com/ansible/ansible/issues/76425
|
https://github.com/ansible/ansible/pull/76579
|
6e57c8c0844c44a102dc081b93d5299cae9619e2
|
15ace5a854886cd75894e7214209e4b027034bdb
| 2021-12-01T22:41:01Z |
python
| 2022-01-06T19:05:20Z |
test/integration/targets/ansible-galaxy-collection-scm/tasks/main.yml
|
---
- name: set the temp test directory
set_fact:
galaxy_dir: "{{ remote_tmp_dir }}/galaxy"
- name: Test installing collections from git repositories
environment:
ANSIBLE_COLLECTIONS_PATHS: "{{ galaxy_dir }}/collections"
vars:
cleanup: True
galaxy_dir: "{{ galaxy_dir }}"
block:
- include_tasks: ./setup.yml
- include_tasks: ./requirements.yml
- include_tasks: ./individual_collection_repo.yml
- include_tasks: ./setup_multi_collection_repo.yml
- include_tasks: ./multi_collection_repo_all.yml
- include_tasks: ./scm_dependency.yml
vars:
cleanup: False
- include_tasks: ./reinstalling.yml
- include_tasks: ./multi_collection_repo_individual.yml
- include_tasks: ./setup_recursive_scm_dependency.yml
- include_tasks: ./scm_dependency_deduplication.yml
- include_tasks: ./download.yml
always:
- name: Remove the directories for installing collections and git repositories
file:
path: '{{ item }}'
state: absent
loop:
- "{{ install_path }}"
- "{{ alt_install_path }}"
- "{{ scm_path }}"
- name: remove git
package:
name: git
state: absent
when: git_install is changed
# This gets dragged in as a dependency of git on FreeBSD.
# We need to remove it too when done.
- name: remove python37 if necessary
package:
name: python37
state: absent
when:
- git_install is changed
- ansible_distribution == 'FreeBSD'
- ansible_python.version.major == 2
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,425 |
AttributeError installing ansible-galaxy collection from git
|
|
https://github.com/ansible/ansible/issues/76425
|
https://github.com/ansible/ansible/pull/76579
|
6e57c8c0844c44a102dc081b93d5299cae9619e2
|
15ace5a854886cd75894e7214209e4b027034bdb
| 2021-12-01T22:41:01Z |
python
| 2022-01-06T19:05:20Z |
test/integration/targets/ansible-galaxy-collection-scm/tasks/setup_collection_bad_version.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,425 |
AttributeError installing ansible-galaxy collection from git
|
|
https://github.com/ansible/ansible/issues/76425
|
https://github.com/ansible/ansible/pull/76579
|
6e57c8c0844c44a102dc081b93d5299cae9619e2
|
15ace5a854886cd75894e7214209e4b027034bdb
| 2021-12-01T22:41:01Z |
python
| 2022-01-06T19:05:20Z |
test/integration/targets/ansible-galaxy-collection-scm/tasks/test_invalid_version.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,425 |
AttributeError installing ansible-galaxy collection from git
|
### Summary
Ansible thows an error just after processing the install dependency map.
```Process install dependency map
ERROR! Unexpected Exception, this is probably a bug: 'float' object has no attribute 'startswith'
```
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
[cloud-user@test-centos ~]$ ansible --version
ansible [core 2.12.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/cloud-user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.9/site-packages/ansible
ansible collection location = /home/cloud-user/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.9.8 (main, Nov 8 2021, 00:00:00) [GCC 11.2.1 20211019 (Red Hat 11.2.1-6)]
jinja version = 2.11.3
libyaml = True
```
### Configuration
```console
[cloud-user@test-centos ~]$ ansible-config dump --only-changed
[cloud-user@test-centos ~]$
```
### OS / Environment
CentOS Stream 9
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
galaxy.yml:
namespace: my
name: cool-collection
version: 1.0
authors: David *****
readme: README.md
```
Install a collection from a git repo with this galaxy.yml:
`ansible-galaxy collection install git+https://.../my_project.git,ISSUE-1`
### Expected Results
The collection is installed
### Actual Results
```console
[cloud-user@test-centos ~]$ ansible-galaxy -vvvv collection install git+https://....git,ISSUE-1
ansible-galaxy [core 2.12.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/cloud-user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.9/site-packages/ansible
ansible collection location = /home/cloud-user/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible-galaxy
python version = 3.9.8 (main, Nov 8 2021, 00:00:00) [GCC 11.2.1 20211019 (Red Hat 11.2.1-6)]
jinja version = 2.11.3
libyaml = True
Using /etc/ansible/ansible.cfg as config file
Cloning into '/home/cloud-user/.ansible/tmp/ansible-local-12630phroh0cw/tmpelcczl4o/netbox-addp5e_slda'...
Username for 'https://...': my_user
Password for 'https://my_user@my_gitsrv':
remote: Enumerating objects: 425, done.
remote: Counting objects: 100% (61/61), done.
remote: Compressing objects: 100% (29/29), done.
remote: Total 425 (delta 18), reused 52 (delta 14), pack-reused 364
Receiving objects: 100% (425/425), 47.25 KiB | 11.81 MiB/s, done.
Resolving deltas: 100% (178/178), done.
Branch 'ISSUE-1' set up to track remote branch 'ISSUE-1' from 'origin'.
Switched to a new branch 'ISSUE-1'
Starting galaxy collection install process
Process install dependency map
ERROR! Unexpected Exception, this is probably a bug: 'float' object has no attribute 'startswith'
the full traceback was:
Traceback (most recent call last):
File "/usr/bin/ansible-galaxy", line 128, in <module>
exit_code = cli.run()
File "/usr/lib/python3.9/site-packages/ansible/cli/galaxy.py", line 567, in run
return context.CLIARGS['func']()
File "/usr/lib/python3.9/site-packages/ansible/cli/galaxy.py", line 86, in method_wrapper
return wrapped_method(*args, **kwargs)
File "/usr/lib/python3.9/site-packages/ansible/cli/galaxy.py", line 1201, in execute_install
self._execute_install_collection(
File "/usr/lib/python3.9/site-packages/ansible/cli/galaxy.py", line 1228, in _execute_install_collection
install_collections(
File "/usr/lib/python3.9/site-packages/ansible/galaxy/collection/__init__.py", line 513, in install_collections
dependency_map = _resolve_depenency_map(
File "/usr/lib/python3.9/site-packages/ansible/galaxy/collection/__init__.py", line 1327, in _resolve_depenency_map
return collection_dep_resolver.resolve(
File "/usr/lib/python3.9/site-packages/resolvelib/resolvers.py", line 453, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
File "/usr/lib/python3.9/site-packages/resolvelib/resolvers.py", line 347, in resolve
failure_causes = self._attempt_to_pin_criterion(name, criterion)
File "/usr/lib/python3.9/site-packages/resolvelib/resolvers.py", line 216, in _attempt_to_pin_criterion
satisfied = all(
File "/usr/lib/python3.9/site-packages/resolvelib/resolvers.py", line 217, in <genexpr>
self._p.is_satisfied_by(r, candidate)
File "/usr/lib/python3.9/site-packages/ansible/galaxy/dependency_resolution/providers.py", line 280, in is_satisfied_by
requirement.ver.startswith('<') or
AttributeError: 'float' object has no attribute 'startswith'
```
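The last frame of the traceback is the real problem: `is_satisfied_by` assumes `requirement.ver` is a string so it can detect range operators with `startswith`, but the unquoted `version: 1.0` from galaxy.yml arrives as a float. A stand-in reproduction (not Ansible's actual `Requirement` class):
```python
# Stand-in for the resolver's range check; `ver` is what the YAML loader
# produced for an unquoted `version: 1.0`.
ver = 1.0
try:
    ver.startswith('<')
except AttributeError as exc:
    print(exc)  # 'float' object has no attribute 'startswith'
```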
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76425
|
https://github.com/ansible/ansible/pull/76579
|
6e57c8c0844c44a102dc081b93d5299cae9619e2
|
15ace5a854886cd75894e7214209e4b027034bdb
| 2021-12-01T22:41:01Z |
python
| 2022-01-06T19:05:20Z |
test/integration/targets/ansible-galaxy-collection-scm/vars/main.yml
|
install_path: "{{ galaxy_dir }}/collections/ansible_collections"
alt_install_path: "{{ galaxy_dir }}/other_collections/ansible_collections"
scm_path: "{{ galaxy_dir }}/development"
test_repo_path: "{{ galaxy_dir }}/development/ansible_test"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,404 |
Explicit version results in 'no attribute' error since core 2.11
|
### Summary
Upgraded from Ansible 3.4 (ansible-core 2.10) to Ansible 4.3 (ansible-core 2.11). Since the upgrade, running ansible-galaxy against a requirements file that includes a git repository fails with `ERROR! Unexpected Exception, this is probably a bug: 'float' object has no attribute 'startswith'`.
Tested against Ansible 4.2 and 4.3.
### Issue Type
Bug Report
### Component Name
galaxy
### Ansible Version
```console
ansible [core 2.11.2]
config file = /var/lib/buildkite-agent/.ansible.cfg
configured module search path = ['/var/lib/buildkite-agent/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible
ansible collection location = /var/lib/buildkite-agent/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.8.0 (default, Feb 25 2021, 22:10:10) [GCC 8.4.0]
jinja version = 2.10
libyaml = False
```
### Configuration
```console
$ ansible-config dump --only-changed
DEFAULT_ASK_PASS(/var/lib/buildkite-agent/.ansible.cfg) = False
DEFAULT_LOCAL_TMP(/var/lib/buildkite-agent/.ansible.cfg) = /tmp/ansible/tmp/ansible-local-10804znovzlh6
```
### OS / Environment
Ubuntu 18.04, with python upgraded to 3.8.0 and ansible installed via pip
### Steps to Reproduce
```yaml
collections:
# With just the collection name
- community.aws
- name: git@github.com:VoiceThread/ansible-collection-buildkite.git
type: git
version: "0.7"
```
Note that I tested with ansible 4.2 and 4.3, which are all that I could install via pip.
It appears that, despite the version being quoted as "0.7", the string is converted to a float somewhere in the code.
### Expected Results
I expected the AWS community and our custom buildkite collection to be installed.
### Actual Results
```console
buildkite-agent@buildkite-agent-tools3208:~/builds/buildkite-agent-tools3208-1/voicethread-llc/ami-bakery/buildkite_agent_frontend$ansible-galaxy -vvv install -r ./requirements.yml
[DEPRECATION WARNING]: Setting verbosity before the arg sub command is deprecated, set the verbosity after the sub command. This feature will be removed from
ansible-core in version 2.13. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
ansible-galaxy [core 2.11.2]
config file = /var/lib/buildkite-agent/.ansible.cfg
configured module search path = ['/var/lib/buildkite-agent/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible
ansible collection location = /var/lib/buildkite-agent/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible-galaxy
python version = 3.8.0 (default, Feb 25 2021, 22:10:10) [GCC 8.4.0]
jinja version = 2.10
libyaml = False
Using /var/lib/buildkite-agent/.ansible.cfg as config file
Reading requirement file at '/var/lib/buildkite-agent/builds/buildkite-agent-tools3208-1/voicethread-llc/ami-bakery/buildkite_agent_frontend/requirements.yml'
Cloning into '/tmp/ansible/tmp/ansible-local-10652xjwcbw9q/tmphvi5md99/ansible-collection-buildkitek95mgz7r'...
remote: Enumerating objects: 215, done.
remote: Counting objects: 100% (215/215), done.
remote: Compressing objects: 100% (160/160), done.
remote: Total 215 (delta 73), reused 176 (delta 42), pack-reused 0
Receiving objects: 100% (215/215), 28.94 KiB | 14.47 MiB/s, done.
Resolving deltas: 100% (73/73), done.
Your branch is up to date with 'origin/main'.
Starting galaxy collection install process
Process install dependency map
Opened /var/lib/buildkite-agent/.ansible/galaxy_token
ERROR! Unexpected Exception, this is probably a bug: 'float' object has no attribute 'startswith'
the full traceback was:
Traceback (most recent call last):
File "/usr/local/bin/ansible-galaxy", line 135, in <module>
exit_code = cli.run()
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 552, in run
return context.CLIARGS['func']()
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 75, in method_wrapper
return wrapped_method(*args, **kwargs)
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 1186, in execute_install
self._execute_install_collection(
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 1213, in _execute_install_collection
install_collections(
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/galaxy/collection/__init__.py", line 511, in install_collections
dependency_map = _resolve_depenency_map(
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/galaxy/collection/__init__.py", line 1328, in _resolve_depenency_map
return collection_dep_resolver.resolve(
File "/usr/local/lib/python3.8/dist-packages/resolvelib/resolvers.py", line 453, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
File "/usr/local/lib/python3.8/dist-packages/resolvelib/resolvers.py", line 347, in resolve
failure_causes = self._attempt_to_pin_criterion(name, criterion)
File "/usr/local/lib/python3.8/dist-packages/resolvelib/resolvers.py", line 216, in _attempt_to_pin_criterion
satisfied = all(
File "/usr/local/lib/python3.8/dist-packages/resolvelib/resolvers.py", line 217, in <genexpr>
self._p.is_satisfied_by(r, candidate)
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/galaxy/dependency_resolution/providers.py", line 280, in is_satisfied_by
requirement.ver.startswith('<') or
AttributeError: 'float' object has no attribute 'startswith'
```
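One defensive pattern is to normalise the version to text right after deserialisation, before any string comparison. The sketch below is only an illustration of the idea with a hypothetical helper name, not the merged fix (that landed in the PR linked below):
```python
def coerce_version(ver):
    """Coerce YAML-scalar versions (float/int) to str; pass through None/str.

    Hypothetical helper for illustration -- not Ansible's code.
    """
    if ver is None or isinstance(ver, str):
        return ver
    return str(ver)

assert coerce_version(0.7) == '0.7'
assert coerce_version('>=1.0.0') == '>=1.0.0'
assert coerce_version(None) is None
```
Coercing at the deserialisation boundary keeps the rest of the resolver free of per-call type checks.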
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75404
|
https://github.com/ansible/ansible/pull/76579
|
6e57c8c0844c44a102dc081b93d5299cae9619e2
|
15ace5a854886cd75894e7214209e4b027034bdb
| 2021-08-04T17:46:33Z |
python
| 2022-01-06T19:05:20Z |
changelogs/fragments/76579-fix-ansible-galaxy-collection-version-error.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,404 |
Explicit version results in 'no attribute' error since core 2.11
|
### Summary
Upgraded from Ansible 3.4 (ansible-core 2.10) to Ansible 4.3 (ansible-core 2.11). Since the upgrade, running ansible-galaxy against a requirements file that includes a git repository fails with `ERROR! Unexpected Exception, this is probably a bug: 'float' object has no attribute 'startswith'`.
Tested against Ansible 4.2 and 4.3.
### Issue Type
Bug Report
### Component Name
galaxy
### Ansible Version
```console
ansible [core 2.11.2]
config file = /var/lib/buildkite-agent/.ansible.cfg
configured module search path = ['/var/lib/buildkite-agent/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible
ansible collection location = /var/lib/buildkite-agent/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.8.0 (default, Feb 25 2021, 22:10:10) [GCC 8.4.0]
jinja version = 2.10
libyaml = False
```
### Configuration
```console
$ ansible-config dump --only-changed
DEFAULT_ASK_PASS(/var/lib/buildkite-agent/.ansible.cfg) = False
DEFAULT_LOCAL_TMP(/var/lib/buildkite-agent/.ansible.cfg) = /tmp/ansible/tmp/ansible-local-10804znovzlh6
```
### OS / Environment
Ubuntu 18.04, with python upgraded to 3.8.0 and ansible installed via pip
### Steps to Reproduce
```yaml
collections:
# With just the collection name
- community.aws
- name: git@github.com:VoiceThread/ansible-collection-buildkite.git
type: git
version: "0.7"
```
Note that I tested with ansible 4.2 and 4.3, which are all that I could install via pip.
It appears that, despite the version being quoted as "0.7", the string is converted to a float somewhere in the code.
### Expected Results
I expected the AWS community and our custom buildkite collection to be installed.
### Actual Results
```console
buildkite-agent@buildkite-agent-tools3208:~/builds/buildkite-agent-tools3208-1/voicethread-llc/ami-bakery/buildkite_agent_frontend$ansible-galaxy -vvv install -r ./requirements.yml
[DEPRECATION WARNING]: Setting verbosity before the arg sub command is deprecated, set the verbosity after the sub command. This feature will be removed from
ansible-core in version 2.13. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
ansible-galaxy [core 2.11.2]
config file = /var/lib/buildkite-agent/.ansible.cfg
configured module search path = ['/var/lib/buildkite-agent/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible
ansible collection location = /var/lib/buildkite-agent/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible-galaxy
python version = 3.8.0 (default, Feb 25 2021, 22:10:10) [GCC 8.4.0]
jinja version = 2.10
libyaml = False
Using /var/lib/buildkite-agent/.ansible.cfg as config file
Reading requirement file at '/var/lib/buildkite-agent/builds/buildkite-agent-tools3208-1/voicethread-llc/ami-bakery/buildkite_agent_frontend/requirements.yml'
Cloning into '/tmp/ansible/tmp/ansible-local-10652xjwcbw9q/tmphvi5md99/ansible-collection-buildkitek95mgz7r'...
remote: Enumerating objects: 215, done.
remote: Counting objects: 100% (215/215), done.
remote: Compressing objects: 100% (160/160), done.
remote: Total 215 (delta 73), reused 176 (delta 42), pack-reused 0
Receiving objects: 100% (215/215), 28.94 KiB | 14.47 MiB/s, done.
Resolving deltas: 100% (73/73), done.
Your branch is up to date with 'origin/main'.
Starting galaxy collection install process
Process install dependency map
Opened /var/lib/buildkite-agent/.ansible/galaxy_token
ERROR! Unexpected Exception, this is probably a bug: 'float' object has no attribute 'startswith'
the full traceback was:
Traceback (most recent call last):
File "/usr/local/bin/ansible-galaxy", line 135, in <module>
exit_code = cli.run()
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 552, in run
return context.CLIARGS['func']()
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 75, in method_wrapper
return wrapped_method(*args, **kwargs)
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 1186, in execute_install
self._execute_install_collection(
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 1213, in _execute_install_collection
install_collections(
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/galaxy/collection/__init__.py", line 511, in install_collections
dependency_map = _resolve_depenency_map(
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/galaxy/collection/__init__.py", line 1328, in _resolve_depenency_map
return collection_dep_resolver.resolve(
File "/usr/local/lib/python3.8/dist-packages/resolvelib/resolvers.py", line 453, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
File "/usr/local/lib/python3.8/dist-packages/resolvelib/resolvers.py", line 347, in resolve
failure_causes = self._attempt_to_pin_criterion(name, criterion)
File "/usr/local/lib/python3.8/dist-packages/resolvelib/resolvers.py", line 216, in _attempt_to_pin_criterion
satisfied = all(
File "/usr/local/lib/python3.8/dist-packages/resolvelib/resolvers.py", line 217, in <genexpr>
self._p.is_satisfied_by(r, candidate)
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/galaxy/dependency_resolution/providers.py", line 280, in is_satisfied_by
requirement.ver.startswith('<') or
AttributeError: 'float' object has no attribute 'startswith'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75404
|
https://github.com/ansible/ansible/pull/76579
|
6e57c8c0844c44a102dc081b93d5299cae9619e2
|
15ace5a854886cd75894e7214209e4b027034bdb
| 2021-08-04T17:46:33Z |
python
| 2022-01-06T19:05:20Z |
lib/ansible/galaxy/collection/__init__.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2019-2021, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""Installed collections management package."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import errno
import fnmatch
import functools
import json
import os
import shutil
import stat
import sys
import tarfile
import tempfile
import threading
import time
import yaml
from collections import namedtuple
from contextlib import contextmanager
from ansible.module_utils.compat.version import LooseVersion
from hashlib import sha256
from io import BytesIO
from itertools import chain
from yaml.error import YAMLError
# NOTE: Adding type ignores is a hack for mypy to shut up wrt bug #1153
try:
import queue # type: ignore[import]
except ImportError: # Python 2
import Queue as queue # type: ignore[import,no-redef]
try:
# NOTE: It's in Python 3 stdlib and can be installed on Python 2
# NOTE: via `pip install typing`. Unnecessary in runtime.
# NOTE: `TYPE_CHECKING` is True during mypy-typecheck-time.
from typing import TYPE_CHECKING
except ImportError:
TYPE_CHECKING = False
if TYPE_CHECKING:
from typing import Dict, Iterable, List, Optional, Text, Union
if sys.version_info[:2] >= (3, 8):
from typing import Literal
else: # Python 2 + Python 3.4-3.7
from typing_extensions import Literal
from ansible.galaxy.api import GalaxyAPI
from ansible.galaxy.collection.concrete_artifact_manager import (
ConcreteArtifactsManager,
)
ManifestKeysType = Literal[
'collection_info', 'file_manifest_file', 'format',
]
FileMetaKeysType = Literal[
'name',
'ftype',
'chksum_type',
'chksum_sha256',
'format',
]
CollectionInfoKeysType = Literal[
# collection meta:
'namespace', 'name', 'version',
'authors', 'readme',
'tags', 'description',
'license', 'license_file',
'dependencies',
'repository', 'documentation',
'homepage', 'issues',
# files meta:
FileMetaKeysType,
]
ManifestValueType = Dict[
CollectionInfoKeysType,
Optional[
Union[
int, str, # scalars, like name/ns, schema version
List[str], # lists of scalars, like tags
Dict[str, str], # deps map
],
],
]
CollectionManifestType = Dict[ManifestKeysType, ManifestValueType]
FileManifestEntryType = Dict[FileMetaKeysType, Optional[Union[str, int]]]
FilesManifestType = Dict[
Literal['files', 'format'],
Union[List[FileManifestEntryType], int],
]
import ansible.constants as C
from ansible.errors import AnsibleError
from ansible.galaxy import get_collections_galaxy_meta_info
from ansible.galaxy.collection.concrete_artifact_manager import (
_consume_file,
_download_file,
_get_json_from_installed_dir,
_get_meta_from_src_dir,
_tarfile_extract,
)
from ansible.galaxy.collection.galaxy_api_proxy import MultiGalaxyAPIProxy
from ansible.galaxy.dependency_resolution import (
build_collection_dependency_resolver,
)
from ansible.galaxy.dependency_resolution.dataclasses import (
Candidate, Requirement, _is_installed_collection_dir,
)
from ansible.galaxy.dependency_resolution.errors import (
CollectionDependencyResolutionImpossible,
CollectionDependencyInconsistentCandidate,
)
from ansible.galaxy.dependency_resolution.versioning import meets_requirements
from ansible.module_utils.six import raise_from
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.common.yaml import yaml_dump
from ansible.utils.collection_loader import AnsibleCollectionRef
from ansible.utils.display import Display
from ansible.utils.hashing import secure_hash, secure_hash_s
from ansible.utils.version import SemanticVersion
display = Display()
MANIFEST_FORMAT = 1
MANIFEST_FILENAME = 'MANIFEST.json'
ModifiedContent = namedtuple('ModifiedContent', ['filename', 'expected', 'installed'])
# FUTURE: expose actual verify result details for a collection on this object, maybe reimplement as dataclass on py3.8+
class CollectionVerifyResult:
def __init__(self, collection_name): # type: (str) -> None
self.collection_name = collection_name # type: str
self.success = True # type: bool
def verify_local_collection(
local_collection, remote_collection,
artifacts_manager,
): # type: (Candidate, Optional[Candidate], ConcreteArtifactsManager) -> CollectionVerifyResult
"""Verify integrity of the locally installed collection.
:param local_collection: Collection being checked.
:param remote_collection: Upstream collection (optional, if None, only verify local artifact)
:param artifacts_manager: Artifacts manager.
:return: a collection verify result object.
"""
result = CollectionVerifyResult(local_collection.fqcn)
b_collection_path = to_bytes(
local_collection.src, errors='surrogate_or_strict',
)
display.display("Verifying '{coll!s}'.".format(coll=local_collection))
display.display(
u"Installed collection found at '{path!s}'".
format(path=to_text(local_collection.src)),
)
modified_content = [] # type: List[ModifiedContent]
verify_local_only = remote_collection is None
if verify_local_only:
# partial away the local FS detail so we can just ask generically during validation
get_json_from_validation_source = functools.partial(_get_json_from_installed_dir, b_collection_path)
get_hash_from_validation_source = functools.partial(_get_file_hash, b_collection_path)
# since we're not downloading this, just seed it with the value from disk
manifest_hash = get_hash_from_validation_source(MANIFEST_FILENAME)
else:
# fetch remote
b_temp_tar_path = ( # NOTE: AnsibleError is raised on URLError
artifacts_manager.get_artifact_path
if remote_collection.is_concrete_artifact
else artifacts_manager.get_galaxy_artifact_path
)(remote_collection)
display.vvv(
u"Remote collection cached as '{path!s}'".format(path=to_text(b_temp_tar_path))
)
# partial away the tarball details so we can just ask generically during validation
get_json_from_validation_source = functools.partial(_get_json_from_tar_file, b_temp_tar_path)
get_hash_from_validation_source = functools.partial(_get_tar_file_hash, b_temp_tar_path)
# Compare installed version versus requirement version
if local_collection.ver != remote_collection.ver:
err = (
"{local_fqcn!s} has the version '{local_ver!s}' but "
"is being compared to '{remote_ver!s}'".format(
local_fqcn=local_collection.fqcn,
local_ver=local_collection.ver,
remote_ver=remote_collection.ver,
)
)
display.display(err)
result.success = False
return result
# Verify the downloaded manifest hash matches the installed copy before verifying the file manifest
manifest_hash = get_hash_from_validation_source(MANIFEST_FILENAME)
_verify_file_hash(b_collection_path, MANIFEST_FILENAME, manifest_hash, modified_content)
display.display('MANIFEST.json hash: {manifest_hash}'.format(manifest_hash=manifest_hash))
manifest = get_json_from_validation_source(MANIFEST_FILENAME)
# Use the manifest to verify the file manifest checksum
file_manifest_data = manifest['file_manifest_file']
file_manifest_filename = file_manifest_data['name']
expected_hash = file_manifest_data['chksum_%s' % file_manifest_data['chksum_type']]
# Verify the file manifest before using it to verify individual files
_verify_file_hash(b_collection_path, file_manifest_filename, expected_hash, modified_content)
file_manifest = get_json_from_validation_source(file_manifest_filename)
# Use the file manifest to verify individual file checksums
for manifest_data in file_manifest['files']:
if manifest_data['ftype'] == 'file':
expected_hash = manifest_data['chksum_%s' % manifest_data['chksum_type']]
_verify_file_hash(b_collection_path, manifest_data['name'], expected_hash, modified_content)
if modified_content:
result.success = False
display.display(
'Collection {fqcn!s} contains modified content '
'in the following files:'.
format(fqcn=to_text(local_collection.fqcn)),
)
for content_change in modified_content:
display.display(' %s' % content_change.filename)
display.v(" Expected: %s\n Found: %s" % (content_change.expected, content_change.installed))
else:
what = "are internally consistent with its manifest" if verify_local_only else "match the remote collection"
display.display(
"Successfully verified that checksums for '{coll!s}' {what!s}.".
format(coll=local_collection, what=what),
)
return result
def build_collection(u_collection_path, u_output_path, force):
# type: (Text, Text, bool) -> Text
"""Creates the Ansible collection artifact in a .tar.gz file.
:param u_collection_path: The path to the collection to build. This should be the directory that contains the
galaxy.yml file.
:param u_output_path: The path to create the collection build artifact. This should be a directory.
:param force: Whether to overwrite an existing collection build artifact or fail.
:return: The path to the collection build artifact.
"""
b_collection_path = to_bytes(u_collection_path, errors='surrogate_or_strict')
try:
collection_meta = _get_meta_from_src_dir(b_collection_path)
except LookupError as lookup_err:
raise_from(AnsibleError(to_native(lookup_err)), lookup_err)
collection_manifest = _build_manifest(**collection_meta)
file_manifest = _build_files_manifest(
b_collection_path,
collection_meta['namespace'], # type: ignore[arg-type]
collection_meta['name'], # type: ignore[arg-type]
collection_meta['build_ignore'], # type: ignore[arg-type]
)
artifact_tarball_file_name = '{ns!s}-{name!s}-{ver!s}.tar.gz'.format(
name=collection_meta['name'],
ns=collection_meta['namespace'],
ver=collection_meta['version'],
)
b_collection_output = os.path.join(
to_bytes(u_output_path),
to_bytes(artifact_tarball_file_name, errors='surrogate_or_strict'),
)
if os.path.exists(b_collection_output):
if os.path.isdir(b_collection_output):
raise AnsibleError("The output collection artifact '%s' already exists, "
"but is a directory - aborting" % to_native(b_collection_output))
elif not force:
raise AnsibleError("The file '%s' already exists. You can use --force to re-create "
"the collection artifact." % to_native(b_collection_output))
collection_output = _build_collection_tar(b_collection_path, b_collection_output, collection_manifest, file_manifest)
return collection_output
def download_collections(
collections, # type: Iterable[Requirement]
output_path, # type: str
apis, # type: Iterable[GalaxyAPI]
no_deps, # type: bool
allow_pre_release, # type: bool
artifacts_manager, # type: ConcreteArtifactsManager
): # type: (...) -> None
"""Download Ansible collections as their tarball from a Galaxy server to the path specified and creates a requirements
file of the downloaded requirements to be used for an install.
    :param collections: The collections to download; an iterable of Requirement instances.
    :param output_path: The path to download the collections to.
    :param apis: A list of GalaxyAPIs to query when searching for a collection.
:param no_deps: Ignore any collection dependencies and only download the base requirements.
:param allow_pre_release: Do not ignore pre-release versions when selecting the latest.
"""
with _display_progress("Process download dependency map"):
dep_map = _resolve_depenency_map(
set(collections),
galaxy_apis=apis,
preferred_candidates=None,
concrete_artifacts_manager=artifacts_manager,
no_deps=no_deps,
allow_pre_release=allow_pre_release,
upgrade=False,
)
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
requirements = []
with _display_progress(
"Starting collection download process to '{path!s}'".
format(path=output_path),
):
for fqcn, concrete_coll_pin in dep_map.copy().items(): # FIXME: move into the provider
if concrete_coll_pin.is_virtual:
display.display(
'Virtual collection {coll!s} is not downloadable'.
format(coll=to_text(concrete_coll_pin)),
)
continue
display.display(
u"Downloading collection '{coll!s}' to '{path!s}'".
format(coll=to_text(concrete_coll_pin), path=to_text(b_output_path)),
)
b_src_path = (
artifacts_manager.get_artifact_path
if concrete_coll_pin.is_concrete_artifact
else artifacts_manager.get_galaxy_artifact_path
)(concrete_coll_pin)
b_dest_path = os.path.join(
b_output_path,
os.path.basename(b_src_path),
)
if concrete_coll_pin.is_dir:
b_dest_path = to_bytes(
build_collection(
to_text(b_src_path, errors='surrogate_or_strict'),
to_text(output_path, errors='surrogate_or_strict'),
force=True,
),
errors='surrogate_or_strict',
)
else:
shutil.copy(to_native(b_src_path), to_native(b_dest_path))
display.display(
"Collection '{coll!s}' was downloaded successfully".
format(coll=concrete_coll_pin),
)
requirements.append({
# FIXME: Consider using a more specific upgraded format
# FIXME: having FQCN in the name field, with src field
# FIXME: pointing to the file path, and explicitly set
# FIXME: type. If version and name are set, it'd
# FIXME: perform validation against the actual metadata
# FIXME: in the artifact src points at.
'name': to_native(os.path.basename(b_dest_path)),
'version': concrete_coll_pin.ver,
})
requirements_path = os.path.join(output_path, 'requirements.yml')
b_requirements_path = to_bytes(
requirements_path, errors='surrogate_or_strict',
)
display.display(
u'Writing requirements.yml file of downloaded collections '
"to '{path!s}'".format(path=to_text(requirements_path)),
)
yaml_bytes = to_bytes(
yaml_dump({'collections': requirements}),
errors='surrogate_or_strict',
)
with open(b_requirements_path, mode='wb') as req_fd:
req_fd.write(yaml_bytes)
def publish_collection(collection_path, api, wait, timeout):
"""Publish an Ansible collection tarball into an Ansible Galaxy server.
:param collection_path: The path to the collection tarball to publish.
:param api: A GalaxyAPI to publish the collection to.
:param wait: Whether to wait until the import process is complete.
:param timeout: The time in seconds to wait for the import process to finish, 0 is indefinite.
"""
import_uri = api.publish_collection(collection_path)
if wait:
# Galaxy returns a url fragment which differs between v2 and v3. The second to last entry is
# always the task_id, though.
# v2: {"task": "https://galaxy-dev.ansible.com/api/v2/collection-imports/35573/"}
# v3: {"task": "/api/automation-hub/v3/imports/collections/838d1308-a8f4-402c-95cb-7823f3806cd8/"}
task_id = None
for path_segment in reversed(import_uri.split('/')):
if path_segment:
task_id = path_segment
break
if not task_id:
raise AnsibleError("Publishing the collection did not return valid task info. Cannot wait for task status. Returned task info: '%s'" % import_uri)
with _display_progress(
"Collection has been published to the Galaxy server "
"{api.name!s} {api.api_server!s}".format(api=api),
):
api.wait_import_task(task_id, timeout)
display.display("Collection has been successfully published and imported to the Galaxy server %s %s"
% (api.name, api.api_server))
else:
display.display("Collection has been pushed to the Galaxy server %s %s, not waiting until import has "
"completed due to --no-wait being set. Import task results can be found at %s"
% (api.name, api.api_server, import_uri))
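# NOTE: example (sketch): for the v3 URI shown in publish_collection above,
# NOTE: the reversed-segment scan yields the task_id
# NOTE: '838d1308-a8f4-402c-95cb-7823f3806cd8' (the last non-empty segment).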
def install_collections(
collections, # type: Iterable[Requirement]
output_path, # type: str
apis, # type: Iterable[GalaxyAPI]
ignore_errors, # type: bool
no_deps, # type: bool
force, # type: bool
force_deps, # type: bool
upgrade, # type: bool
allow_pre_release, # type: bool
artifacts_manager, # type: ConcreteArtifactsManager
): # type: (...) -> None
"""Install Ansible collections to the path specified.
:param collections: The collections to install.
:param output_path: The path to install the collections to.
:param apis: A list of GalaxyAPIs to query when searching for a collection.
:param ignore_errors: Whether to ignore any errors when installing the collection.
:param no_deps: Ignore any collection dependencies and only install the base requirements.
:param force: Re-install a collection if it has already been installed.
:param force_deps: Re-install a collection as well as its dependencies if they have already been installed.
"""
existing_collections = {
Requirement(coll.fqcn, coll.ver, coll.src, coll.type)
for coll in find_existing_collections(output_path, artifacts_manager)
}
unsatisfied_requirements = set(
chain.from_iterable(
(
Requirement.from_dir_path(sub_coll, artifacts_manager)
for sub_coll in (
artifacts_manager.
get_direct_collection_dependencies(install_req).
keys()
)
)
if install_req.is_subdirs else (install_req, )
for install_req in collections
),
)
requested_requirements_names = {req.fqcn for req in unsatisfied_requirements}
# NOTE: Don't attempt to reevaluate already installed deps
# NOTE: unless `--force` or `--force-with-deps` is passed
unsatisfied_requirements -= set() if force or force_deps else {
req
for req in unsatisfied_requirements
for exs in existing_collections
if req.fqcn == exs.fqcn and meets_requirements(exs.ver, req.ver)
}
if not unsatisfied_requirements and not upgrade:
display.display(
'Nothing to do. All requested collections are already '
'installed. If you want to reinstall them, '
'consider using `--force`.'
)
return
# FIXME: This probably needs to be improved to
# FIXME: properly match differing src/type.
existing_non_requested_collections = {
coll for coll in existing_collections
if coll.fqcn not in requested_requirements_names
}
preferred_requirements = (
[] if force_deps
else existing_non_requested_collections if force
else existing_collections
)
preferred_collections = {
Candidate(coll.fqcn, coll.ver, coll.src, coll.type)
for coll in preferred_requirements
}
with _display_progress("Process install dependency map"):
dependency_map = _resolve_depenency_map(
collections,
galaxy_apis=apis,
preferred_candidates=preferred_collections,
concrete_artifacts_manager=artifacts_manager,
no_deps=no_deps,
allow_pre_release=allow_pre_release,
upgrade=upgrade,
)
with _display_progress("Starting collection install process"):
for fqcn, concrete_coll_pin in dependency_map.items():
if concrete_coll_pin.is_virtual:
display.vvvv(
"'{coll!s}' is virtual, skipping.".
format(coll=to_text(concrete_coll_pin)),
)
continue
if concrete_coll_pin in preferred_collections:
display.display(
"'{coll!s}' is already installed, skipping.".
format(coll=to_text(concrete_coll_pin)),
)
continue
try:
install(concrete_coll_pin, output_path, artifacts_manager)
except AnsibleError as err:
if ignore_errors:
display.warning(
'Failed to install collection {coll!s} but skipping '
'due to --ignore-errors being set. Error: {error!s}'.
format(
coll=to_text(concrete_coll_pin),
error=to_text(err),
)
)
else:
raise
# NOTE: imported in ansible.cli.galaxy
def validate_collection_name(name): # type: (str) -> str
"""Validates the collection name as an input from the user or a requirements file fit the requirements.
:param name: The input name with optional range specifier split by ':'.
:return: The input value, required for argparse validation.
"""
collection, dummy, dummy = name.partition(':')
if AnsibleCollectionRef.is_valid_collection_name(collection):
return name
raise AnsibleError("Invalid collection name '%s', "
"name must be in the format <namespace>.<collection>. \n"
"Please make sure namespace and collection name contains "
"characters from [a-zA-Z0-9_] only." % name)
# NOTE: imported in ansible.cli.galaxy
def validate_collection_path(collection_path): # type: (str) -> str
"""Ensure a given path ends with 'ansible_collections'
:param collection_path: The path that should end in 'ansible_collections'
:return: collection_path ending in 'ansible_collections' if it does not already.
"""
if os.path.split(collection_path)[1] != 'ansible_collections':
return os.path.join(collection_path, 'ansible_collections')
return collection_path
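# NOTE: example (sketch):
# NOTE:   validate_collection_path('/opt/collections')
# NOTE:   -> '/opt/collections/ansible_collections'
# NOTE:   validate_collection_path('/opt/collections/ansible_collections')
# NOTE:   -> returned unchanged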
def verify_collections(
collections, # type: Iterable[Requirement]
search_paths, # type: Iterable[str]
apis, # type: Iterable[GalaxyAPI]
ignore_errors, # type: bool
local_verify_only, # type: bool
artifacts_manager, # type: ConcreteArtifactsManager
): # type: (...) -> List[CollectionVerifyResult]
r"""Verify the integrity of locally installed collections.
:param collections: The collections to check.
:param search_paths: Locations for the local collection lookup.
:param apis: A list of GalaxyAPIs to query when searching for a collection.
:param ignore_errors: Whether to ignore any errors when verifying the collection.
:param local_verify_only: When True, skip downloads and only verify local manifests.
:param artifacts_manager: Artifacts manager.
:return: list of CollectionVerifyResult objects describing the results of each collection verification
"""
results = [] # type: List[CollectionVerifyResult]
api_proxy = MultiGalaxyAPIProxy(apis, artifacts_manager)
with _display_progress():
for collection in collections:
try:
if collection.is_concrete_artifact:
raise AnsibleError(
message="'{coll_type!s}' type is not supported. "
'The format namespace.name is expected.'.
format(coll_type=collection.type)
)
# NOTE: Verify local collection exists before
# NOTE: downloading its source artifact from
# NOTE: a galaxy server.
default_err = 'Collection %s is not installed in any of the collection paths.' % collection.fqcn
for search_path in search_paths:
b_search_path = to_bytes(
os.path.join(
search_path,
collection.namespace, collection.name,
),
errors='surrogate_or_strict',
)
if not os.path.isdir(b_search_path):
continue
if not _is_installed_collection_dir(b_search_path):
default_err = (
"Collection %s does not have a MANIFEST.json. "
"A MANIFEST.json is expected if the collection has been built "
"and installed via ansible-galaxy" % collection.fqcn
)
continue
local_collection = Candidate.from_dir_path(
b_search_path, artifacts_manager,
)
break
else:
raise AnsibleError(message=default_err)
if local_verify_only:
remote_collection = None
else:
remote_collection = Candidate(
collection.fqcn,
collection.ver if collection.ver != '*'
else local_collection.ver,
None, 'galaxy',
)
# Download collection on a galaxy server for comparison
try:
# NOTE: Trigger the lookup. If found, it'll cache
# NOTE: download URL and token in artifact manager.
api_proxy.get_collection_version_metadata(
remote_collection,
)
except AnsibleError as e: # FIXME: does this actually emit any errors?
# FIXME: extract the actual message and adjust this:
expected_error_msg = (
'Failed to find collection {coll.fqcn!s}:{coll.ver!s}'.
format(coll=collection)
)
if e.message == expected_error_msg:
raise AnsibleError(
'Failed to find remote collection '
"'{coll!s}' on any of the galaxy servers".
format(coll=collection)
)
raise
result = verify_local_collection(
local_collection, remote_collection,
artifacts_manager,
)
results.append(result)
except AnsibleError as err:
if ignore_errors:
display.warning(
"Failed to verify collection '{coll!s}' but skipping "
'due to --ignore-errors being set. '
'Error: {err!s}'.
format(coll=collection, err=to_text(err)),
)
else:
raise
return results
@contextmanager
def _tempdir():
b_temp_path = tempfile.mkdtemp(dir=to_bytes(C.DEFAULT_LOCAL_TMP, errors='surrogate_or_strict'))
try:
yield b_temp_path
finally:
shutil.rmtree(b_temp_path)
@contextmanager
def _display_progress(msg=None):
config_display = C.GALAXY_DISPLAY_PROGRESS
display_wheel = sys.stdout.isatty() if config_display is None else config_display
global display
if msg is not None:
display.display(msg)
if not display_wheel:
yield
return
def progress(display_queue, actual_display):
actual_display.debug("Starting display_progress display thread")
t = threading.current_thread()
while True:
for c in "|/-\\":
actual_display.display(c + "\b", newline=False)
time.sleep(0.1)
# Display a message from the main thread
while True:
try:
method, args, kwargs = display_queue.get(block=False, timeout=0.1)
except queue.Empty:
break
else:
func = getattr(actual_display, method)
func(*args, **kwargs)
if getattr(t, "finish", False):
actual_display.debug("Received end signal for display_progress display thread")
return
class DisplayThread(object):
def __init__(self, display_queue):
self.display_queue = display_queue
def __getattr__(self, attr):
def call_display(*args, **kwargs):
self.display_queue.put((attr, args, kwargs))
return call_display
    # Temporarily override the global display class with our own, which adds the calls to a queue for the thread to call.
old_display = display
try:
display_queue = queue.Queue()
display = DisplayThread(display_queue)
t = threading.Thread(target=progress, args=(display_queue, old_display))
t.daemon = True
t.start()
try:
yield
finally:
t.finish = True
t.join()
except Exception:
        # The exception is re-raised so we can be sure the thread is finished and not using the display anymore
raise
finally:
display = old_display
def _verify_file_hash(b_path, filename, expected_hash, error_queue):
b_file_path = to_bytes(os.path.join(to_text(b_path), filename), errors='surrogate_or_strict')
if not os.path.isfile(b_file_path):
actual_hash = None
else:
with open(b_file_path, mode='rb') as file_object:
actual_hash = _consume_file(file_object)
if expected_hash != actual_hash:
error_queue.append(ModifiedContent(filename=filename, expected=expected_hash, installed=actual_hash))
def _build_files_manifest(b_collection_path, namespace, name, ignore_patterns):
# type: (bytes, str, str, List[str]) -> FilesManifestType
# We always ignore .pyc and .retry files as well as some well known version control directories. The ignore
# patterns can be extended by the build_ignore key in galaxy.yml
b_ignore_patterns = [
b'MANIFEST.json',
b'FILES.json',
b'galaxy.yml',
b'galaxy.yaml',
b'.git',
b'*.pyc',
b'*.retry',
b'tests/output', # Ignore ansible-test result output directory.
to_bytes('{0}-{1}-*.tar.gz'.format(namespace, name)), # Ignores previously built artifacts in the root dir.
]
b_ignore_patterns += [to_bytes(p) for p in ignore_patterns]
b_ignore_dirs = frozenset([b'CVS', b'.bzr', b'.hg', b'.git', b'.svn', b'__pycache__', b'.tox'])
entry_template = {
'name': None,
'ftype': None,
'chksum_type': None,
'chksum_sha256': None,
'format': MANIFEST_FORMAT
}
manifest = {
'files': [
{
'name': '.',
'ftype': 'dir',
'chksum_type': None,
'chksum_sha256': None,
'format': MANIFEST_FORMAT,
},
],
'format': MANIFEST_FORMAT,
} # type: FilesManifestType
def _walk(b_path, b_top_level_dir):
for b_item in os.listdir(b_path):
b_abs_path = os.path.join(b_path, b_item)
b_rel_base_dir = b'' if b_path == b_top_level_dir else b_path[len(b_top_level_dir) + 1:]
b_rel_path = os.path.join(b_rel_base_dir, b_item)
rel_path = to_text(b_rel_path, errors='surrogate_or_strict')
if os.path.isdir(b_abs_path):
if any(b_item == b_path for b_path in b_ignore_dirs) or \
any(fnmatch.fnmatch(b_rel_path, b_pattern) for b_pattern in b_ignore_patterns):
display.vvv("Skipping '%s' for collection build" % to_text(b_abs_path))
continue
if os.path.islink(b_abs_path):
b_link_target = os.path.realpath(b_abs_path)
if not _is_child_path(b_link_target, b_top_level_dir):
display.warning("Skipping '%s' as it is a symbolic link to a directory outside the collection"
% to_text(b_abs_path))
continue
manifest_entry = entry_template.copy()
manifest_entry['name'] = rel_path
manifest_entry['ftype'] = 'dir'
manifest['files'].append(manifest_entry)
if not os.path.islink(b_abs_path):
_walk(b_abs_path, b_top_level_dir)
else:
if any(fnmatch.fnmatch(b_rel_path, b_pattern) for b_pattern in b_ignore_patterns):
display.vvv("Skipping '%s' for collection build" % to_text(b_abs_path))
continue
                # Handling of file symlinks occurs in _build_collection_tar; the manifest entry for a symlink is the same as for
# a normal file.
manifest_entry = entry_template.copy()
manifest_entry['name'] = rel_path
manifest_entry['ftype'] = 'file'
manifest_entry['chksum_type'] = 'sha256'
manifest_entry['chksum_sha256'] = secure_hash(b_abs_path, hash_func=sha256)
manifest['files'].append(manifest_entry)
_walk(b_collection_path, b_collection_path)
return manifest
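# NOTE: example (sketch) of a file entry produced by _build_files_manifest:
# NOTE:   {'name': 'plugins/modules/foo.py', 'ftype': 'file',
# NOTE:    'chksum_type': 'sha256', 'chksum_sha256': '<hex digest>', 'format': 1}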
# FIXME: accept a dict produced from `galaxy.yml` instead of separate args
def _build_manifest(namespace, name, version, authors, readme, tags, description, license_file,
dependencies, repository, documentation, homepage, issues, **kwargs):
manifest = {
'collection_info': {
'namespace': namespace,
'name': name,
'version': version,
'authors': authors,
'readme': readme,
'tags': tags,
'description': description,
'license': kwargs['license'],
'license_file': license_file or None, # Handle galaxy.yml having an empty string (None)
'dependencies': dependencies,
'repository': repository,
'documentation': documentation,
'homepage': homepage,
'issues': issues,
},
'file_manifest_file': {
'name': 'FILES.json',
'ftype': 'file',
'chksum_type': 'sha256',
'chksum_sha256': None, # Filled out in _build_collection_tar
'format': MANIFEST_FORMAT
},
'format': MANIFEST_FORMAT,
}
return manifest
def _build_collection_tar(
b_collection_path, # type: bytes
b_tar_path, # type: bytes
collection_manifest, # type: CollectionManifestType
file_manifest, # type: FilesManifestType
): # type: (...) -> Text
"""Build a tar.gz collection artifact from the manifest data."""
files_manifest_json = to_bytes(json.dumps(file_manifest, indent=True), errors='surrogate_or_strict')
collection_manifest['file_manifest_file']['chksum_sha256'] = secure_hash_s(files_manifest_json, hash_func=sha256)
collection_manifest_json = to_bytes(json.dumps(collection_manifest, indent=True), errors='surrogate_or_strict')
with _tempdir() as b_temp_path:
b_tar_filepath = os.path.join(b_temp_path, os.path.basename(b_tar_path))
with tarfile.open(b_tar_filepath, mode='w:gz') as tar_file:
# Add the MANIFEST.json and FILES.json file to the archive
for name, b in [(MANIFEST_FILENAME, collection_manifest_json), ('FILES.json', files_manifest_json)]:
b_io = BytesIO(b)
tar_info = tarfile.TarInfo(name)
tar_info.size = len(b)
tar_info.mtime = int(time.time())
tar_info.mode = 0o0644
tar_file.addfile(tarinfo=tar_info, fileobj=b_io)
for file_info in file_manifest['files']: # type: ignore[union-attr]
if file_info['name'] == '.':
continue
# arcname expects a native string, cannot be bytes
filename = to_native(file_info['name'], errors='surrogate_or_strict')
b_src_path = os.path.join(b_collection_path, to_bytes(filename, errors='surrogate_or_strict'))
def reset_stat(tarinfo):
if tarinfo.type != tarfile.SYMTYPE:
existing_is_exec = tarinfo.mode & stat.S_IXUSR
tarinfo.mode = 0o0755 if existing_is_exec or tarinfo.isdir() else 0o0644
tarinfo.uid = tarinfo.gid = 0
tarinfo.uname = tarinfo.gname = ''
return tarinfo
if os.path.islink(b_src_path):
b_link_target = os.path.realpath(b_src_path)
if _is_child_path(b_link_target, b_collection_path):
b_rel_path = os.path.relpath(b_link_target, start=os.path.dirname(b_src_path))
tar_info = tarfile.TarInfo(filename)
tar_info.type = tarfile.SYMTYPE
tar_info.linkname = to_native(b_rel_path, errors='surrogate_or_strict')
tar_info = reset_stat(tar_info)
tar_file.addfile(tarinfo=tar_info)
continue
# Dealing with a normal file, just add it by name.
tar_file.add(
to_native(os.path.realpath(b_src_path)),
arcname=filename,
recursive=False,
filter=reset_stat,
)
shutil.copy(to_native(b_tar_filepath), to_native(b_tar_path))
collection_name = "%s.%s" % (collection_manifest['collection_info']['namespace'],
collection_manifest['collection_info']['name'])
tar_path = to_text(b_tar_path)
display.display(u'Created collection for %s at %s' % (collection_name, tar_path))
return tar_path
def _build_collection_dir(b_collection_path, b_collection_output, collection_manifest, file_manifest):
"""Build a collection directory from the manifest data.
This should follow the same pattern as _build_collection_tar.
"""
os.makedirs(b_collection_output, mode=0o0755)
files_manifest_json = to_bytes(json.dumps(file_manifest, indent=True), errors='surrogate_or_strict')
collection_manifest['file_manifest_file']['chksum_sha256'] = secure_hash_s(files_manifest_json, hash_func=sha256)
collection_manifest_json = to_bytes(json.dumps(collection_manifest, indent=True), errors='surrogate_or_strict')
# Write contents to the files
for name, b in [(MANIFEST_FILENAME, collection_manifest_json), ('FILES.json', files_manifest_json)]:
b_path = os.path.join(b_collection_output, to_bytes(name, errors='surrogate_or_strict'))
with open(b_path, 'wb') as file_obj, BytesIO(b) as b_io:
shutil.copyfileobj(b_io, file_obj)
os.chmod(b_path, 0o0644)
base_directories = []
for file_info in sorted(file_manifest['files'], key=lambda x: x['name']):
if file_info['name'] == '.':
continue
src_file = os.path.join(b_collection_path, to_bytes(file_info['name'], errors='surrogate_or_strict'))
dest_file = os.path.join(b_collection_output, to_bytes(file_info['name'], errors='surrogate_or_strict'))
existing_is_exec = os.stat(src_file).st_mode & stat.S_IXUSR
mode = 0o0755 if existing_is_exec else 0o0644
if os.path.isdir(src_file):
mode = 0o0755
base_directories.append(src_file)
os.mkdir(dest_file, mode)
else:
shutil.copyfile(src_file, dest_file)
os.chmod(dest_file, mode)
collection_output = to_text(b_collection_output)
return collection_output
def find_existing_collections(path, artifacts_manager):
"""Locate all collections under a given path.
:param path: Collection dirs layout search path.
:param artifacts_manager: Artifacts manager.
"""
b_path = to_bytes(path, errors='surrogate_or_strict')
# FIXME: consider using `glob.glob()` to simplify looping
for b_namespace in os.listdir(b_path):
b_namespace_path = os.path.join(b_path, b_namespace)
if os.path.isfile(b_namespace_path):
continue
# FIXME: consider feeding b_namespace_path to Candidate.from_dir_path to get subdirs automatically
for b_collection in os.listdir(b_namespace_path):
b_collection_path = os.path.join(b_namespace_path, b_collection)
if not os.path.isdir(b_collection_path):
continue
try:
req = Candidate.from_dir_path_as_unknown(
b_collection_path,
artifacts_manager,
)
except ValueError as val_err:
raise_from(AnsibleError(val_err), val_err)
display.vvv(
u"Found installed collection {coll!s} at '{path!s}'".
format(coll=to_text(req), path=to_text(req.src))
)
yield req
def install(collection, path, artifacts_manager): # FIXME: mv to dataclasses?
# type: (Candidate, str, ConcreteArtifactsManager) -> None
"""Install a collection under a given path.
:param collection: Collection to be installed.
:param path: Collection dirs layout path.
:param artifacts_manager: Artifacts manager.
"""
b_artifact_path = (
artifacts_manager.get_artifact_path if collection.is_concrete_artifact
else artifacts_manager.get_galaxy_artifact_path
)(collection)
collection_path = os.path.join(path, collection.namespace, collection.name)
b_collection_path = to_bytes(collection_path, errors='surrogate_or_strict')
display.display(
u"Installing '{coll!s}' to '{path!s}'".
format(coll=to_text(collection), path=collection_path),
)
if os.path.exists(b_collection_path):
shutil.rmtree(b_collection_path)
if collection.is_dir:
install_src(collection, b_artifact_path, b_collection_path, artifacts_manager)
else:
install_artifact(b_artifact_path, b_collection_path, artifacts_manager._b_working_directory)
display.display(
'{coll!s} was installed successfully'.
format(coll=to_text(collection)),
)
def install_artifact(b_coll_targz_path, b_collection_path, b_temp_path):
"""Install a collection from tarball under a given path.
:param b_coll_targz_path: Collection tarball to be installed.
:param b_collection_path: Collection dirs layout path.
:param b_temp_path: Temporary dir path.
"""
try:
with tarfile.open(b_coll_targz_path, mode='r') as collection_tar:
files_member_obj = collection_tar.getmember('FILES.json')
with _tarfile_extract(collection_tar, files_member_obj) as (dummy, files_obj):
files = json.loads(to_text(files_obj.read(), errors='surrogate_or_strict'))
_extract_tar_file(collection_tar, MANIFEST_FILENAME, b_collection_path, b_temp_path)
_extract_tar_file(collection_tar, 'FILES.json', b_collection_path, b_temp_path)
for file_info in files['files']:
file_name = file_info['name']
if file_name == '.':
continue
if file_info['ftype'] == 'file':
_extract_tar_file(collection_tar, file_name, b_collection_path, b_temp_path,
expected_hash=file_info['chksum_sha256'])
else:
_extract_tar_dir(collection_tar, file_name, b_collection_path)
except Exception:
# Ensure we don't leave the dir behind in case of a failure.
shutil.rmtree(b_collection_path)
b_namespace_path = os.path.dirname(b_collection_path)
if not os.listdir(b_namespace_path):
os.rmdir(b_namespace_path)
raise
def install_src(
collection,
b_collection_path, b_collection_output_path,
artifacts_manager,
):
r"""Install the collection from source control into given dir.
Generates the Ansible collection artifact data from a galaxy.yml and
installs the artifact to a directory.
This should follow the same pattern as build_collection, but instead
of creating an artifact, install it.
:param collection: Collection to be installed.
:param b_collection_path: Collection dirs layout path.
:param b_collection_output_path: The installation directory for the \
collection artifact.
:param artifacts_manager: Artifacts manager.
:raises AnsibleError: If no collection metadata found.
"""
collection_meta = artifacts_manager.get_direct_collection_meta(collection)
if 'build_ignore' not in collection_meta: # installed collection, not src
# FIXME: optimize this? use a different process? copy instead of build?
collection_meta['build_ignore'] = []
collection_manifest = _build_manifest(**collection_meta)
file_manifest = _build_files_manifest(
b_collection_path,
collection_meta['namespace'], collection_meta['name'],
collection_meta['build_ignore'],
)
collection_output_path = _build_collection_dir(
b_collection_path, b_collection_output_path,
collection_manifest, file_manifest,
)
display.display(
'Created collection for {coll!s} at {path!s}'.
format(coll=collection, path=collection_output_path)
)
def _extract_tar_dir(tar, dirname, b_dest):
""" Extracts a directory from a collection tar. """
member_names = [to_native(dirname, errors='surrogate_or_strict')]
# Create list of members with and without trailing separator
if not member_names[-1].endswith(os.path.sep):
member_names.append(member_names[-1] + os.path.sep)
    # Try all of the member names and stop on the first one we are able to successfully get
for member in member_names:
try:
tar_member = tar.getmember(member)
except KeyError:
continue
break
else:
# If we still can't find the member, raise a nice error.
raise AnsibleError("Unable to extract '%s' from collection" % to_native(member, errors='surrogate_or_strict'))
b_dir_path = os.path.join(b_dest, to_bytes(dirname, errors='surrogate_or_strict'))
b_parent_path = os.path.dirname(b_dir_path)
try:
os.makedirs(b_parent_path, mode=0o0755)
except OSError as e:
if e.errno != errno.EEXIST:
raise
if tar_member.type == tarfile.SYMTYPE:
b_link_path = to_bytes(tar_member.linkname, errors='surrogate_or_strict')
if not _is_child_path(b_link_path, b_dest, link_name=b_dir_path):
raise AnsibleError("Cannot extract symlink '%s' in collection: path points to location outside of "
"collection '%s'" % (to_native(dirname), b_link_path))
os.symlink(b_link_path, b_dir_path)
else:
if not os.path.isdir(b_dir_path):
os.mkdir(b_dir_path, 0o0755)
def _extract_tar_file(tar, filename, b_dest, b_temp_path, expected_hash=None):
""" Extracts a file from a collection tar. """
with _get_tar_file_member(tar, filename) as (tar_member, tar_obj):
if tar_member.type == tarfile.SYMTYPE:
actual_hash = _consume_file(tar_obj)
else:
with tempfile.NamedTemporaryFile(dir=b_temp_path, delete=False) as tmpfile_obj:
actual_hash = _consume_file(tar_obj, tmpfile_obj)
if expected_hash and actual_hash != expected_hash:
raise AnsibleError("Checksum mismatch for '%s' inside collection at '%s'"
% (to_native(filename, errors='surrogate_or_strict'), to_native(tar.name)))
b_dest_filepath = os.path.abspath(os.path.join(b_dest, to_bytes(filename, errors='surrogate_or_strict')))
b_parent_dir = os.path.dirname(b_dest_filepath)
if not _is_child_path(b_parent_dir, b_dest):
raise AnsibleError("Cannot extract tar entry '%s' as it will be placed outside the collection directory"
% to_native(filename, errors='surrogate_or_strict'))
if not os.path.exists(b_parent_dir):
# Seems like Galaxy does not validate if all file entries have a corresponding dir ftype entry. This check
# makes sure we create the parent directory even if it wasn't set in the metadata.
os.makedirs(b_parent_dir, mode=0o0755)
if tar_member.type == tarfile.SYMTYPE:
b_link_path = to_bytes(tar_member.linkname, errors='surrogate_or_strict')
if not _is_child_path(b_link_path, b_dest, link_name=b_dest_filepath):
raise AnsibleError("Cannot extract symlink '%s' in collection: path points to location outside of "
"collection '%s'" % (to_native(filename), b_link_path))
os.symlink(b_link_path, b_dest_filepath)
else:
shutil.move(to_bytes(tmpfile_obj.name, errors='surrogate_or_strict'), b_dest_filepath)
# Default to rw-r--r-- and only add execute if the tar file has execute.
tar_member = tar.getmember(to_native(filename, errors='surrogate_or_strict'))
new_mode = 0o644
if stat.S_IMODE(tar_member.mode) & stat.S_IXUSR:
new_mode |= 0o0111
os.chmod(b_dest_filepath, new_mode)
def _get_tar_file_member(tar, filename):
n_filename = to_native(filename, errors='surrogate_or_strict')
try:
member = tar.getmember(n_filename)
except KeyError:
raise AnsibleError("Collection tar at '%s' does not contain the expected file '%s'." % (
to_native(tar.name),
n_filename))
return _tarfile_extract(tar, member)
def _get_json_from_tar_file(b_path, filename):
file_contents = ''
with tarfile.open(b_path, mode='r') as collection_tar:
with _get_tar_file_member(collection_tar, filename) as (dummy, tar_obj):
bufsize = 65536
data = tar_obj.read(bufsize)
while data:
file_contents += to_text(data)
data = tar_obj.read(bufsize)
return json.loads(file_contents)
def _get_tar_file_hash(b_path, filename):
with tarfile.open(b_path, mode='r') as collection_tar:
with _get_tar_file_member(collection_tar, filename) as (dummy, tar_obj):
return _consume_file(tar_obj)
def _get_file_hash(b_path, filename): # type: (bytes, str) -> str
filepath = os.path.join(b_path, to_bytes(filename, errors='surrogate_or_strict'))
with open(filepath, 'rb') as fp:
return _consume_file(fp)
def _is_child_path(path, parent_path, link_name=None):
""" Checks that path is a path within the parent_path specified. """
b_path = to_bytes(path, errors='surrogate_or_strict')
if link_name and not os.path.isabs(b_path):
# If link_name is specified, path is the source of the link and we need to resolve the absolute path.
b_link_dir = os.path.dirname(to_bytes(link_name, errors='surrogate_or_strict'))
b_path = os.path.abspath(os.path.join(b_link_dir, b_path))
b_parent_path = to_bytes(parent_path, errors='surrogate_or_strict')
return b_path == b_parent_path or b_path.startswith(b_parent_path + to_bytes(os.path.sep))
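# NOTE: example (sketch): the trailing-separator concatenation above prevents
# NOTE: sibling-prefix escapes:
# NOTE:   _is_child_path('/opt/coll/ns/name', '/opt/coll')  -> True
# NOTE:   _is_child_path('/opt/coll-evil', '/opt/coll')     -> False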
def _resolve_depenency_map(
requested_requirements, # type: Iterable[Requirement]
galaxy_apis, # type: Iterable[GalaxyAPI]
concrete_artifacts_manager, # type: ConcreteArtifactsManager
preferred_candidates, # type: Optional[Iterable[Candidate]]
no_deps, # type: bool
allow_pre_release, # type: bool
upgrade, # type: bool
): # type: (...) -> Dict[str, Candidate]
"""Return the resolved dependency map."""
collection_dep_resolver = build_collection_dependency_resolver(
galaxy_apis=galaxy_apis,
concrete_artifacts_manager=concrete_artifacts_manager,
user_requirements=requested_requirements,
preferred_candidates=preferred_candidates,
with_deps=not no_deps,
with_pre_releases=allow_pre_release,
upgrade=upgrade,
)
try:
return collection_dep_resolver.resolve(
requested_requirements,
max_rounds=2000000, # NOTE: same constant pip uses
).mapping
except CollectionDependencyResolutionImpossible as dep_exc:
conflict_causes = (
'* {req.fqcn!s}:{req.ver!s} ({dep_origin!s})'.format(
req=req_inf.requirement,
dep_origin='direct request'
if req_inf.parent is None
else 'dependency of {parent!s}'.
format(parent=req_inf.parent),
)
for req_inf in dep_exc.causes
)
error_msg_lines = chain(
(
'Failed to resolve the requested '
'dependencies map. Could not satisfy the following '
'requirements:',
),
conflict_causes,
)
raise raise_from( # NOTE: Leading "raise" is a hack for mypy bug #9717
AnsibleError('\n'.join(error_msg_lines)),
dep_exc,
)
except CollectionDependencyInconsistentCandidate as dep_exc:
parents = [
"%s.%s:%s" % (p.namespace, p.name, p.ver)
for p in dep_exc.criterion.iter_parent()
if p is not None
]
error_msg_lines = [
(
'Failed to resolve the requested dependencies map. '
'Got the candidate {req.fqcn!s}:{req.ver!s} ({dep_origin!s}) '
'which didn\'t satisfy all of the following requirements:'.
format(
req=dep_exc.candidate,
dep_origin='direct request'
if not parents else 'dependency of {parent!s}'.
format(parent=', '.join(parents))
)
)
]
for req in dep_exc.criterion.iter_requirement():
error_msg_lines.append(
'* {req.fqcn!s}:{req.ver!s}'.format(req=req)
)
raise raise_from( # NOTE: Leading "raise" is a hack for mypy bug #9717
AnsibleError('\n'.join(error_msg_lines)),
dep_exc,
)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,404 |
Explicit version results in 'no attribute' error since core 2.11
|
### Summary
Upgraded from Ansible 3.4/core 2.10 to 4.3 and core 2.11. Get an error that says `ERROR! Unexpected Exception, this is probably a bug: 'float' object has no attribute 'startswith'` when running ansible-galaxy with a git repository in it.
Tested against
### Issue Type
Bug Report
### Component Name
galaxy
### Ansible Version
```console
ansible [core 2.11.2]
config file = /var/lib/buildkite-agent/.ansible.cfg
configured module search path = ['/var/lib/buildkite-agent/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible
ansible collection location = /var/lib/buildkite-agent/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.8.0 (default, Feb 25 2021, 22:10:10) [GCC 8.4.0]
jinja version = 2.10
libyaml = False
```
### Configuration
```console
$ ansible-config dump --only-changed
DEFAULT_ASK_PASS(/var/lib/buildkite-agent/.ansible.cfg) = False
DEFAULT_LOCAL_TMP(/var/lib/buildkite-agent/.ansible.cfg) = /tmp/ansible/tmp/ansible-local-10804znovzlh6
```
### OS / Environment
Ubuntu 18.04, with python upgraded to 3.8.0 and ansible installed via pip
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
collections:
# With just the collection name
- community.aws
- name: [email protected]:VoiceThread/ansible-collection-buildkite.git
type: git
version: "0.7"
```
Note that I tested with ansible 4.2 and 4.3, which are all that I could install via pip.
It appears that despite quoting the version 0.7, that string was turned into a float in some portion of the code.
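For illustration (an editor's sketch, not part of the original report): PyYAML only keeps a version like this as a string when it is quoted, so an unquoted `0.7` anywhere else in the chain (for example in the cloned repository's own `galaxy.yml`) comes back as a float:
```python
# Hedged illustration: how an unquoted YAML version becomes a float.
import yaml  # PyYAML

quoted = yaml.safe_load('version: "0.7"')
unquoted = yaml.safe_load('version: 0.7')

print(type(quoted['version']))    # <class 'str'>
print(type(unquoted['version']))  # <class 'float'>
# A float has no .startswith(), matching the traceback shown below.
```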
### Expected Results
I expected the AWS community and our custom buildkite collection to be installed.
### Actual Results
```console
buildkite-agent@buildkite-agent-tools3208:~/builds/buildkite-agent-tools3208-1/voicethread-llc/ami-bakery/buildkite_agent_frontend$ansible-galaxy -vvv install -r ./requirements.yml
[DEPRECATION WARNING]: Setting verbosity before the arg sub command is deprecated, set the verbosity after the sub command. This feature will be removed from
ansible-core in version 2.13. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
ansible-galaxy [core 2.11.2]
config file = /var/lib/buildkite-agent/.ansible.cfg
configured module search path = ['/var/lib/buildkite-agent/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible
ansible collection location = /var/lib/buildkite-agent/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible-galaxy
python version = 3.8.0 (default, Feb 25 2021, 22:10:10) [GCC 8.4.0]
jinja version = 2.10
libyaml = False
Using /var/lib/buildkite-agent/.ansible.cfg as config file
Reading requirement file at '/var/lib/buildkite-agent/builds/buildkite-agent-tools3208-1/voicethread-llc/ami-bakery/buildkite_agent_frontend/requirements.yml'
Cloning into '/tmp/ansible/tmp/ansible-local-10652xjwcbw9q/tmphvi5md99/ansible-collection-buildkitek95mgz7r'...
remote: Enumerating objects: 215, done.
remote: Counting objects: 100% (215/215), done.
remote: Compressing objects: 100% (160/160), done.
remote: Total 215 (delta 73), reused 176 (delta 42), pack-reused 0
Receiving objects: 100% (215/215), 28.94 KiB | 14.47 MiB/s, done.
Resolving deltas: 100% (73/73), done.
Your branch is up to date with 'origin/main'.
Starting galaxy collection install process
Process install dependency map
Opened /var/lib/buildkite-agent/.ansible/galaxy_token
ERROR! Unexpected Exception, this is probably a bug: 'float' object has no attribute 'startswith'
the full traceback was:
Traceback (most recent call last):
File "/usr/local/bin/ansible-galaxy", line 135, in <module>
exit_code = cli.run()
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 552, in run
return context.CLIARGS['func']()
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 75, in method_wrapper
return wrapped_method(*args, **kwargs)
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 1186, in execute_install
self._execute_install_collection(
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 1213, in _execute_install_collection
install_collections(
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/galaxy/collection/__init__.py", line 511, in install_collections
dependency_map = _resolve_depenency_map(
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/galaxy/collection/__init__.py", line 1328, in _resolve_depenency_map
return collection_dep_resolver.resolve(
File "/usr/local/lib/python3.8/dist-packages/resolvelib/resolvers.py", line 453, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
File "/usr/local/lib/python3.8/dist-packages/resolvelib/resolvers.py", line 347, in resolve
failure_causes = self._attempt_to_pin_criterion(name, criterion)
File "/usr/local/lib/python3.8/dist-packages/resolvelib/resolvers.py", line 216, in _attempt_to_pin_criterion
satisfied = all(
File "/usr/local/lib/python3.8/dist-packages/resolvelib/resolvers.py", line 217, in <genexpr>
self._p.is_satisfied_by(r, candidate)
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/galaxy/dependency_resolution/providers.py", line 280, in is_satisfied_by
requirement.ver.startswith('<') or
AttributeError: 'float' object has no attribute 'startswith'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75404
|
https://github.com/ansible/ansible/pull/76579
|
6e57c8c0844c44a102dc081b93d5299cae9619e2
|
15ace5a854886cd75894e7214209e4b027034bdb
| 2021-08-04T17:46:33Z |
python
| 2022-01-06T19:05:20Z |
lib/ansible/galaxy/dependency_resolution/providers.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2020-2021, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""Requirement provider interfaces."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import functools
try:
from typing import TYPE_CHECKING
except ImportError:
TYPE_CHECKING = False
if TYPE_CHECKING:
from typing import Iterable, List, NamedTuple, Optional, Union
from ansible.galaxy.collection.concrete_artifact_manager import (
ConcreteArtifactsManager,
)
from ansible.galaxy.collection.galaxy_api_proxy import MultiGalaxyAPIProxy
from ansible.galaxy.dependency_resolution.dataclasses import (
Candidate,
Requirement,
)
from ansible.galaxy.dependency_resolution.versioning import (
is_pre_release,
meets_requirements,
)
from ansible.utils.version import SemanticVersion
from resolvelib import AbstractProvider
class CollectionDependencyProvider(AbstractProvider):
"""Delegate providing a requirement interface for the resolver."""
def __init__(
self, # type: CollectionDependencyProvider
apis, # type: MultiGalaxyAPIProxy
concrete_artifacts_manager=None, # type: ConcreteArtifactsManager
user_requirements=None, # type: Iterable[Requirement]
preferred_candidates=None, # type: Iterable[Candidate]
with_deps=True, # type: bool
with_pre_releases=False, # type: bool
upgrade=False, # type: bool
): # type: (...) -> None
r"""Initialize helper attributes.
:param apis: An instance of the multiple Galaxy APIs wrapper.
:param concrete_artifacts_manager: An instance of the caching \
concrete artifacts manager.
:param with_deps: A flag specifying whether the resolver \
should attempt to pull-in the deps of the \
requested requirements. On by default.
:param with_pre_releases: A flag specifying whether the \
resolver should allow pre-releases. \
Off by default.
"""
self._api_proxy = apis
self._make_req_from_dict = functools.partial(
Requirement.from_requirement_dict,
art_mgr=concrete_artifacts_manager,
)
self._pinned_candidate_requests = set(
Candidate(req.fqcn, req.ver, req.src, req.type)
for req in (user_requirements or ())
if req.is_concrete_artifact or (
req.ver != '*' and
not req.ver.startswith(('<', '>', '!='))
)
)
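# NOTE (editor's comment): the startswith() call above assumes req.ver is a
# text string; a version that survives YAML loading as a float (e.g. an
# unquoted 0.7) raises AttributeError here. One possible fix is to validate
# or coerce versions to str when requirements are constructed.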
self._preferred_candidates = set(preferred_candidates or ())
self._with_deps = with_deps
self._with_pre_releases = with_pre_releases
self._upgrade = upgrade
def _is_user_requested(self, candidate): # type: (Candidate) -> bool
"""Check if the candidate is requested by the user."""
if candidate in self._pinned_candidate_requests:
return True
if candidate.is_online_index_pointer and candidate.src is not None:
# NOTE: Candidate is a namedtuple, it has a source server set
# NOTE: to a specific GalaxyAPI instance or `None`. When the
# NOTE: user runs
# NOTE:
# NOTE: $ ansible-galaxy collection install ns.coll
# NOTE:
# NOTE: then it's saved in `self._pinned_candidate_requests`
# NOTE: as `('ns.coll', '*', None, 'galaxy')` but then
# NOTE: `self.find_matches()` calls `self.is_satisfied_by()`
# NOTE: with Candidate instances bound to each specific
# NOTE: server available, those look like
# NOTE: `('ns.coll', '*', GalaxyAPI(...), 'galaxy')` and
# NOTE: wouldn't match the user requests saved in
# NOTE: `self._pinned_candidate_requests`. This is why we
# NOTE: normalize the collection to have `src=None` and try
# NOTE: again.
# NOTE:
# NOTE: When the user request comes from `requirements.yml`
# NOTE: with the `source:` set, it'll match the first check
# NOTE: but it still can have entries with `src=None` so this
# NOTE: normalized check is still necessary.
return Candidate(
candidate.fqcn, candidate.ver, None, candidate.type,
) in self._pinned_candidate_requests
return False
def identify(self, requirement_or_candidate):
# type: (Union[Candidate, Requirement]) -> str
"""Given requirement or candidate, return an identifier for it.
This is used to identify a requirement or candidate, e.g.
whether two requirements should have their specifier parts
(version ranges or pins) merged, whether two candidates would
conflict with each other (because they have same name but
different versions).
"""
return requirement_or_candidate.canonical_package_id
def get_preference(
self, # type: CollectionDependencyProvider
resolution, # type: Optional[Candidate]
candidates, # type: List[Candidate]
information, # type: List[NamedTuple]
): # type: (...) -> Union[float, int]
"""Return sort key function return value for given requirement.
This result should be based on preference that is defined as
"I think this requirement should be resolved first".
The lower the return value is, the more preferred this
group of arguments is.
:param resolution: Currently pinned candidate, or ``None``.
:param candidates: A list of possible candidates.
:param information: A list of requirement information.
Each ``information`` instance is a named tuple with two entries:
* ``requirement`` specifies a requirement contributing to
the current candidate list
* ``parent`` specifies the candidate that provides
(depends on) the requirement, or `None`
to indicate a root requirement.
The preference could depend on various issues, including
(not necessarily in this order):
* Is this package pinned in the current resolution result?
* How relaxed is the requirement? Stricter ones should
probably be worked on first? (I don't know, actually.)
* How many possibilities are there to satisfy this
requirement? Those with few left should likely be worked on
first, I guess?
* Are there any known conflicts for this requirement?
We should probably work on those with the most
known conflicts.
A sortable value should be returned (this will be used as the
`key` parameter of the built-in sorting function). The smaller
the value is, the more preferred this requirement is (i.e. the
sorting function is called with ``reverse=False``).
"""
if any(
candidate in self._preferred_candidates
for candidate in candidates
):
# NOTE: Prefer pre-installed candidates over newer versions
# NOTE: available from Galaxy or other sources.
return float('-inf')
return len(candidates)
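# NOTE (editor's comment): float('-inf') sorts below every possible
# len(candidates) value, so requirements that already have a preferred
# (pre-installed) candidate are always worked on first.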
def find_matches(self, requirements):
# type: (List[Requirement]) -> List[Candidate]
r"""Find all possible candidates satisfying given requirements.
This tries to get candidates based on the requirements' types.
For concrete requirements (SCM, dir, namespace dir, local or
remote archives), the one-and-only match is returned.
For a "named" requirement, Galaxy-compatible APIs are consulted
to find concrete candidates for this requirement. If there's a
pre-installed candidate, it's prepended in front of the others.
:param requirements: A collection of requirements which all of \
the returned candidates must match. \
All requirements are guaranteed to have \
the same identifier. \
The collection is never empty.
:returns: An iterable that orders candidates by preference, \
e.g. the most preferred candidate comes first.
"""
# FIXME: The first requirement may be a Git repo followed by
# FIXME: its cloned tmp dir. Using only the first one creates
# FIXME: loops that prevent any further dependency exploration.
# FIXME: We need to figure out how to prevent this.
first_req = requirements[0]
fqcn = first_req.fqcn
# The fqcn is guaranteed to be the same
coll_versions = self._api_proxy.get_collection_versions(first_req)
if first_req.is_concrete_artifact:
# FIXME: do we assume that all the following artifacts are also concrete?
# FIXME: does using fqcn==None cause us problems here?
return [
Candidate(fqcn, version, _none_src_server, first_req.type)
for version, _none_src_server in coll_versions
]
latest_matches = sorted(
{
candidate for candidate in (
Candidate(fqcn, version, src_server, 'galaxy')
for version, src_server in coll_versions
)
if all(self.is_satisfied_by(requirement, candidate) for requirement in requirements)
# FIXME
# if all(self.is_satisfied_by(requirement, candidate) and (
# requirement.src is None or # if this is true for some candidates but not all it will break key param - Nonetype can't be compared to str
# requirement.src == candidate.src
# ))
},
key=lambda candidate: (
SemanticVersion(candidate.ver), candidate.src,
),
reverse=True, # prefer newer versions over older ones
)
preinstalled_candidates = {
candidate for candidate in self._preferred_candidates
if candidate.fqcn == fqcn and
(
# check if an upgrade is necessary
all(self.is_satisfied_by(requirement, candidate) for requirement in requirements) and
(
not self._upgrade or
# check if an upgrade is preferred
all(SemanticVersion(latest.ver) <= SemanticVersion(candidate.ver) for latest in latest_matches)
)
)
}
return list(preinstalled_candidates) + latest_matches
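# NOTE (editor's comment): the ordering of this return value matters to the
# resolver: pre-installed candidates are tried first, then Galaxy candidates
# sorted newest-first by SemanticVersion (the reverse=True sort above).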
def is_satisfied_by(self, requirement, candidate):
# type: (Requirement, Candidate) -> bool
r"""Whether the given requirement is satisfiable by a candidate.
:param requirement: A requirement that produced the `candidate`.
:param candidate: A pinned candidate supposedly matching the \
`requirement` specifier. It is guaranteed to \
have been generated from the `requirement`.
:returns: Indication whether the `candidate` is a viable \
solution to the `requirement`.
"""
# NOTE: Only allow pre-release candidates if we want pre-releases
# NOTE: or the req ver was an exact match with the pre-release
# NOTE: version. Another case where we'd want to allow
# NOTE: pre-releases is when there are several user requirements
# NOTE: and one of them is a pre-release that also matches a
# NOTE: transitive dependency of another requirement.
allow_pre_release = self._with_pre_releases or not (
requirement.ver == '*' or
requirement.ver.startswith('<') or
requirement.ver.startswith('>') or
requirement.ver.startswith('!=')
) or self._is_user_requested(candidate)
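# NOTE (editor's comment): each startswith() above likewise assumes
# requirement.ver is a string; this is the expression that fails with
# "'float' object has no attribute 'startswith'" in the traceback from the
# issue above when an unquoted version such as 0.7 arrives as a float.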
if is_pre_release(candidate.ver) and not allow_pre_release:
return False
# NOTE: This is a set of Pipenv-inspired optimizations. Ref:
# https://github.com/sarugaku/passa/blob/2ac00f1/src/passa/models/providers.py#L58-L74
if (
requirement.is_virtual or
candidate.is_virtual or
requirement.ver == '*'
):
return True
return meets_requirements(
version=candidate.ver,
requirements=requirement.ver,
)
def get_dependencies(self, candidate):
# type: (Candidate) -> List[Candidate]
r"""Get direct dependencies of a candidate.
:returns: A collection of requirements that `candidate` \
specifies as its dependencies.
"""
# FIXME: If there's several galaxy servers set, there may be a
# FIXME: situation when the metadata of the same collection
# FIXME: differs. So how do we resolve this case? Priority?
# FIXME: Taking into account a pinned hash? Exploding on
# FIXME: any differences?
# NOTE: The underlying implementation currently uses the first one found
req_map = self._api_proxy.get_collection_dependencies(candidate)
# NOTE: This guard expression MUST perform an early exit only
# NOTE: after the `get_collection_dependencies()` call because
# NOTE: internally it populates the artifact URL of the candidate,
# NOTE: its SHA hash and the Galaxy API token. These are still
# NOTE: necessary with `--no-deps` because even with the disabled
# NOTE: dependency resolution the outer layer will still need to
# NOTE: know how to download and validate the artifact.
#
# NOTE: Virtual candidates should always return dependencies
# NOTE: because they are ephemeral and non-installable.
if not self._with_deps and not candidate.is_virtual:
return []
return [
self._make_req_from_dict({'name': dep_name, 'version': dep_req})
for dep_name, dep_req in req_map.items()
]
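# For illustration (editor's note): req_map is a plain name-to-specifier
# mapping such as {'ansible.netcommon': '>=1.0.0'}; each entry is fed through
# the same dict-based Requirement factory used for requirements files.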
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,404 |
Explicit version results in 'no attribute' error since core 2.11
|
### Summary
Upgraded from Ansible 3.4/core 2.10 to 4.3 and core 2.11. Get an error that says `ERROR! Unexpected Exception, this is probably a bug: 'float' object has no attribute 'startswith'` when running ansible-galaxy with a git repository in it.
Tested against
### Issue Type
Bug Report
### Component Name
galaxy
### Ansible Version
```console
ansible [core 2.11.2]
config file = /var/lib/buildkite-agent/.ansible.cfg
configured module search path = ['/var/lib/buildkite-agent/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible
ansible collection location = /var/lib/buildkite-agent/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.8.0 (default, Feb 25 2021, 22:10:10) [GCC 8.4.0]
jinja version = 2.10
libyaml = False
```
### Configuration
```console
$ ansible-config dump --only-changed
DEFAULT_ASK_PASS(/var/lib/buildkite-agent/.ansible.cfg) = False
DEFAULT_LOCAL_TMP(/var/lib/buildkite-agent/.ansible.cfg) = /tmp/ansible/tmp/ansible-local-10804znovzlh6
```
### OS / Environment
Ubuntu 18.04, with python upgraded to 3.8.0 and ansible installed via pip
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
collections:
# With just the collection name
- community.aws
- name: [email protected]:VoiceThread/ansible-collection-buildkite.git
type: git
version: "0.7"
```
Note that I tested with ansible 4.2 and 4.3, which are all that I could install via pip.
It appears that despite quoting the version 0.7, that string was turned into a float in some portion of the code.
### Expected Results
I expected the AWS community and our custom buildkite collection to be installed.
### Actual Results
```console
buildkite-agent@buildkite-agent-tools3208:~/builds/buildkite-agent-tools3208-1/voicethread-llc/ami-bakery/buildkite_agent_frontend$ansible-galaxy -vvv install -r ./requirements.yml
[DEPRECATION WARNING]: Setting verbosity before the arg sub command is deprecated, set the verbosity after the sub command. This feature will be removed from
ansible-core in version 2.13. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
ansible-galaxy [core 2.11.2]
config file = /var/lib/buildkite-agent/.ansible.cfg
configured module search path = ['/var/lib/buildkite-agent/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible
ansible collection location = /var/lib/buildkite-agent/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible-galaxy
python version = 3.8.0 (default, Feb 25 2021, 22:10:10) [GCC 8.4.0]
jinja version = 2.10
libyaml = False
Using /var/lib/buildkite-agent/.ansible.cfg as config file
Reading requirement file at '/var/lib/buildkite-agent/builds/buildkite-agent-tools3208-1/voicethread-llc/ami-bakery/buildkite_agent_frontend/requirements.yml'
Cloning into '/tmp/ansible/tmp/ansible-local-10652xjwcbw9q/tmphvi5md99/ansible-collection-buildkitek95mgz7r'...
remote: Enumerating objects: 215, done.
remote: Counting objects: 100% (215/215), done.
remote: Compressing objects: 100% (160/160), done.
remote: Total 215 (delta 73), reused 176 (delta 42), pack-reused 0
Receiving objects: 100% (215/215), 28.94 KiB | 14.47 MiB/s, done.
Resolving deltas: 100% (73/73), done.
Your branch is up to date with 'origin/main'.
Starting galaxy collection install process
Process install dependency map
Opened /var/lib/buildkite-agent/.ansible/galaxy_token
ERROR! Unexpected Exception, this is probably a bug: 'float' object has no attribute 'startswith'
the full traceback was:
Traceback (most recent call last):
File "/usr/local/bin/ansible-galaxy", line 135, in <module>
exit_code = cli.run()
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 552, in run
return context.CLIARGS['func']()
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 75, in method_wrapper
return wrapped_method(*args, **kwargs)
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 1186, in execute_install
self._execute_install_collection(
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 1213, in _execute_install_collection
install_collections(
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/galaxy/collection/__init__.py", line 511, in install_collections
dependency_map = _resolve_depenency_map(
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/galaxy/collection/__init__.py", line 1328, in _resolve_depenency_map
return collection_dep_resolver.resolve(
File "/usr/local/lib/python3.8/dist-packages/resolvelib/resolvers.py", line 453, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
File "/usr/local/lib/python3.8/dist-packages/resolvelib/resolvers.py", line 347, in resolve
failure_causes = self._attempt_to_pin_criterion(name, criterion)
File "/usr/local/lib/python3.8/dist-packages/resolvelib/resolvers.py", line 216, in _attempt_to_pin_criterion
satisfied = all(
File "/usr/local/lib/python3.8/dist-packages/resolvelib/resolvers.py", line 217, in <genexpr>
self._p.is_satisfied_by(r, candidate)
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/galaxy/dependency_resolution/providers.py", line 280, in is_satisfied_by
requirement.ver.startswith('<') or
AttributeError: 'float' object has no attribute 'startswith'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75404
|
https://github.com/ansible/ansible/pull/76579
|
6e57c8c0844c44a102dc081b93d5299cae9619e2
|
15ace5a854886cd75894e7214209e4b027034bdb
| 2021-08-04T17:46:33Z |
python
| 2022-01-06T19:05:20Z |
test/integration/targets/ansible-galaxy-collection-scm/tasks/main.yml
|
---
- name: set the temp test directory
set_fact:
galaxy_dir: "{{ remote_tmp_dir }}/galaxy"
- name: Test installing collections from git repositories
environment:
ANSIBLE_COLLECTIONS_PATHS: "{{ galaxy_dir }}/collections"
vars:
cleanup: True
galaxy_dir: "{{ galaxy_dir }}"
block:
- include_tasks: ./setup.yml
- include_tasks: ./requirements.yml
- include_tasks: ./individual_collection_repo.yml
- include_tasks: ./setup_multi_collection_repo.yml
- include_tasks: ./multi_collection_repo_all.yml
- include_tasks: ./scm_dependency.yml
vars:
cleanup: False
- include_tasks: ./reinstalling.yml
- include_tasks: ./multi_collection_repo_individual.yml
- include_tasks: ./setup_recursive_scm_dependency.yml
- include_tasks: ./scm_dependency_deduplication.yml
- include_tasks: ./download.yml
always:
- name: Remove the directories for installing collections and git repositories
file:
path: '{{ item }}'
state: absent
loop:
- "{{ install_path }}"
- "{{ alt_install_path }}"
- "{{ scm_path }}"
- name: remove git
package:
name: git
state: absent
when: git_install is changed
# This gets dragged in as a dependency of git on FreeBSD.
# We need to remove it too when done.
- name: remove python37 if necessary
package:
name: python37
state: absent
when:
- git_install is changed
- ansible_distribution == 'FreeBSD'
- ansible_python.version.major == 2
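# Editor's sketch (hypothetical, not the actual content of the PR's new task
# files): a regression test for the non-string version error could look
# roughly like this, reusing the existing test_repo_path fixture:
#
# - name: try installing a git collection whose galaxy.yml has version 1.0 (a YAML float)
#   command: ansible-galaxy collection install 'git+file://{{ test_repo_path }}/.git'
#   register: invalid_version_result
#   ignore_errors: yes
#
# - assert:
#     that:
#       - invalid_version_result is failed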
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,404 |
Explicit version results in 'no attribute' error since core 2.11
|
### Summary
Upgraded from Ansible 3.4/core 2.10 to 4.3 and core 2.11. Get an error that says `ERROR! Unexpected Exception, this is probably a bug: 'float' object has no attribute 'startswith'` when running ansible-galaxy with a git repository in it.
Tested against
### Issue Type
Bug Report
### Component Name
galaxy
### Ansible Version
```console
ansible [core 2.11.2]
config file = /var/lib/buildkite-agent/.ansible.cfg
configured module search path = ['/var/lib/buildkite-agent/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible
ansible collection location = /var/lib/buildkite-agent/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.8.0 (default, Feb 25 2021, 22:10:10) [GCC 8.4.0]
jinja version = 2.10
libyaml = False
```
### Configuration
```console
$ ansible-config dump --only-changed
DEFAULT_ASK_PASS(/var/lib/buildkite-agent/.ansible.cfg) = False
DEFAULT_LOCAL_TMP(/var/lib/buildkite-agent/.ansible.cfg) = /tmp/ansible/tmp/ansible-local-10804znovzlh6
```
### OS / Environment
Ubuntu 18.04, with python upgraded to 3.8.0 and ansible installed via pip
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
collections:
# With just the collection name
- community.aws
- name: [email protected]:VoiceThread/ansible-collection-buildkite.git
type: git
version: "0.7"
```
Note that I tested with ansible 4.2 and 4.3, which are all that I could install via pip.
It appears that despite quoting the version 0.7, that string was turned into a float in some portion of the code.
### Expected Results
I expected the AWS community and our custom buildkite collection to be installed.
### Actual Results
```console
buildkite-agent@buildkite-agent-tools3208:~/builds/buildkite-agent-tools3208-1/voicethread-llc/ami-bakery/buildkite_agent_frontend$ansible-galaxy -vvv install -r ./requirements.yml
[DEPRECATION WARNING]: Setting verbosity before the arg sub command is deprecated, set the verbosity after the sub command. This feature will be removed from
ansible-core in version 2.13. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
ansible-galaxy [core 2.11.2]
config file = /var/lib/buildkite-agent/.ansible.cfg
configured module search path = ['/var/lib/buildkite-agent/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible
ansible collection location = /var/lib/buildkite-agent/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible-galaxy
python version = 3.8.0 (default, Feb 25 2021, 22:10:10) [GCC 8.4.0]
jinja version = 2.10
libyaml = False
Using /var/lib/buildkite-agent/.ansible.cfg as config file
Reading requirement file at '/var/lib/buildkite-agent/builds/buildkite-agent-tools3208-1/voicethread-llc/ami-bakery/buildkite_agent_frontend/requirements.yml'
Cloning into '/tmp/ansible/tmp/ansible-local-10652xjwcbw9q/tmphvi5md99/ansible-collection-buildkitek95mgz7r'...
remote: Enumerating objects: 215, done.
remote: Counting objects: 100% (215/215), done.
remote: Compressing objects: 100% (160/160), done.
remote: Total 215 (delta 73), reused 176 (delta 42), pack-reused 0
Receiving objects: 100% (215/215), 28.94 KiB | 14.47 MiB/s, done.
Resolving deltas: 100% (73/73), done.
Your branch is up to date with 'origin/main'.
Starting galaxy collection install process
Process install dependency map
Opened /var/lib/buildkite-agent/.ansible/galaxy_token
ERROR! Unexpected Exception, this is probably a bug: 'float' object has no attribute 'startswith'
the full traceback was:
Traceback (most recent call last):
File "/usr/local/bin/ansible-galaxy", line 135, in <module>
exit_code = cli.run()
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 552, in run
return context.CLIARGS['func']()
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 75, in method_wrapper
return wrapped_method(*args, **kwargs)
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 1186, in execute_install
self._execute_install_collection(
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 1213, in _execute_install_collection
install_collections(
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/galaxy/collection/__init__.py", line 511, in install_collections
dependency_map = _resolve_depenency_map(
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/galaxy/collection/__init__.py", line 1328, in _resolve_depenency_map
return collection_dep_resolver.resolve(
File "/usr/local/lib/python3.8/dist-packages/resolvelib/resolvers.py", line 453, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
File "/usr/local/lib/python3.8/dist-packages/resolvelib/resolvers.py", line 347, in resolve
failure_causes = self._attempt_to_pin_criterion(name, criterion)
File "/usr/local/lib/python3.8/dist-packages/resolvelib/resolvers.py", line 216, in _attempt_to_pin_criterion
satisfied = all(
File "/usr/local/lib/python3.8/dist-packages/resolvelib/resolvers.py", line 217, in <genexpr>
self._p.is_satisfied_by(r, candidate)
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/galaxy/dependency_resolution/providers.py", line 280, in is_satisfied_by
requirement.ver.startswith('<') or
AttributeError: 'float' object has no attribute 'startswith'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75404
|
https://github.com/ansible/ansible/pull/76579
|
6e57c8c0844c44a102dc081b93d5299cae9619e2
|
15ace5a854886cd75894e7214209e4b027034bdb
| 2021-08-04T17:46:33Z |
python
| 2022-01-06T19:05:20Z |
test/integration/targets/ansible-galaxy-collection-scm/tasks/setup_collection_bad_version.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,404 |
Explicit version results in 'no attribute' error since core 2.11
|
### Summary
Upgraded from Ansible 3.4/core 2.10 to 4.3 and core 2.11. Get an error that says `ERROR! Unexpected Exception, this is probably a bug: 'float' object has no attribute 'startswith'` when running ansible-galaxy with a git repository in it.
Tested against
### Issue Type
Bug Report
### Component Name
galaxy
### Ansible Version
```console
ansible [core 2.11.2]
config file = /var/lib/buildkite-agent/.ansible.cfg
configured module search path = ['/var/lib/buildkite-agent/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible
ansible collection location = /var/lib/buildkite-agent/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.8.0 (default, Feb 25 2021, 22:10:10) [GCC 8.4.0]
jinja version = 2.10
libyaml = False
```
### Configuration
```console
$ ansible-config dump --only-changed
DEFAULT_ASK_PASS(/var/lib/buildkite-agent/.ansible.cfg) = False
DEFAULT_LOCAL_TMP(/var/lib/buildkite-agent/.ansible.cfg) = /tmp/ansible/tmp/ansible-local-10804znovzlh6
```
### OS / Environment
Ubuntu 18.04, with python upgraded to 3.8.0 and ansible installed via pip
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
collections:
# With just the collection name
- community.aws
- name: [email protected]:VoiceThread/ansible-collection-buildkite.git
type: git
version: "0.7"
```
Note that I tested with ansible 4.2 and 4.3, which are all that I could install via pip.
It appears that despite quoting the version 0.7, that string was turned into a float in some portion of the code.
### Expected Results
I expected the AWS community and our custom buildkite collection to be installed.
### Actual Results
```console
buildkite-agent@buildkite-agent-tools3208:~/builds/buildkite-agent-tools3208-1/voicethread-llc/ami-bakery/buildkite_agent_frontend$ansible-galaxy -vvv install -r ./requirements.yml
[DEPRECATION WARNING]: Setting verbosity before the arg sub command is deprecated, set the verbosity after the sub command. This feature will be removed from
ansible-core in version 2.13. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
ansible-galaxy [core 2.11.2]
config file = /var/lib/buildkite-agent/.ansible.cfg
configured module search path = ['/var/lib/buildkite-agent/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible
ansible collection location = /var/lib/buildkite-agent/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible-galaxy
python version = 3.8.0 (default, Feb 25 2021, 22:10:10) [GCC 8.4.0]
jinja version = 2.10
libyaml = False
Using /var/lib/buildkite-agent/.ansible.cfg as config file
Reading requirement file at '/var/lib/buildkite-agent/builds/buildkite-agent-tools3208-1/voicethread-llc/ami-bakery/buildkite_agent_frontend/requirements.yml'
Cloning into '/tmp/ansible/tmp/ansible-local-10652xjwcbw9q/tmphvi5md99/ansible-collection-buildkitek95mgz7r'...
remote: Enumerating objects: 215, done.
remote: Counting objects: 100% (215/215), done.
remote: Compressing objects: 100% (160/160), done.
remote: Total 215 (delta 73), reused 176 (delta 42), pack-reused 0
Receiving objects: 100% (215/215), 28.94 KiB | 14.47 MiB/s, done.
Resolving deltas: 100% (73/73), done.
Your branch is up to date with 'origin/main'.
Starting galaxy collection install process
Process install dependency map
Opened /var/lib/buildkite-agent/.ansible/galaxy_token
ERROR! Unexpected Exception, this is probably a bug: 'float' object has no attribute 'startswith'
the full traceback was:
Traceback (most recent call last):
File "/usr/local/bin/ansible-galaxy", line 135, in <module>
exit_code = cli.run()
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 552, in run
return context.CLIARGS['func']()
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 75, in method_wrapper
return wrapped_method(*args, **kwargs)
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 1186, in execute_install
self._execute_install_collection(
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 1213, in _execute_install_collection
install_collections(
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/galaxy/collection/__init__.py", line 511, in install_collections
dependency_map = _resolve_depenency_map(
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/galaxy/collection/__init__.py", line 1328, in _resolve_depenency_map
return collection_dep_resolver.resolve(
File "/usr/local/lib/python3.8/dist-packages/resolvelib/resolvers.py", line 453, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
File "/usr/local/lib/python3.8/dist-packages/resolvelib/resolvers.py", line 347, in resolve
failure_causes = self._attempt_to_pin_criterion(name, criterion)
File "/usr/local/lib/python3.8/dist-packages/resolvelib/resolvers.py", line 216, in _attempt_to_pin_criterion
satisfied = all(
File "/usr/local/lib/python3.8/dist-packages/resolvelib/resolvers.py", line 217, in <genexpr>
self._p.is_satisfied_by(r, candidate)
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/galaxy/dependency_resolution/providers.py", line 280, in is_satisfied_by
requirement.ver.startswith('<') or
AttributeError: 'float' object has no attribute 'startswith'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75404
|
https://github.com/ansible/ansible/pull/76579
|
6e57c8c0844c44a102dc081b93d5299cae9619e2
|
15ace5a854886cd75894e7214209e4b027034bdb
| 2021-08-04T17:46:33Z |
python
| 2022-01-06T19:05:20Z |
test/integration/targets/ansible-galaxy-collection-scm/tasks/test_invalid_version.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,404 |
Explicit version results in 'no attribute' error since core 2.11
|
### Summary
Upgraded from Ansible 3.4/core 2.10 to 4.3 and core 2.11. Get an error that says `ERROR! Unexpected Exception, this is probably a bug: 'float' object has no attribute 'startswith'` when running ansible-galaxy with a git repository in it.
Tested against
### Issue Type
Bug Report
### Component Name
galaxy
### Ansible Version
```console
ansible [core 2.11.2]
config file = /var/lib/buildkite-agent/.ansible.cfg
configured module search path = ['/var/lib/buildkite-agent/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible
ansible collection location = /var/lib/buildkite-agent/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.8.0 (default, Feb 25 2021, 22:10:10) [GCC 8.4.0]
jinja version = 2.10
libyaml = False
```
### Configuration
```console
$ ansible-config dump --only-changed
DEFAULT_ASK_PASS(/var/lib/buildkite-agent/.ansible.cfg) = False
DEFAULT_LOCAL_TMP(/var/lib/buildkite-agent/.ansible.cfg) = /tmp/ansible/tmp/ansible-local-10804znovzlh6
```
### OS / Environment
Ubuntu 18.04, with python upgraded to 3.8.0 and ansible installed via pip
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
collections:
# With just the collection name
- community.aws
- name: [email protected]:VoiceThread/ansible-collection-buildkite.git
type: git
version: "0.7"
```
Note that I tested with ansible 4.2 and 4.3, which are all that I could install via pip.
It appears that despite quoting the version 0.7, that string was turned into a float in some portion of the code.
### Expected Results
I expected the AWS community and our custom buildkite collection to be installed.
### Actual Results
```console
buildkite-agent@buildkite-agent-tools3208:~/builds/buildkite-agent-tools3208-1/voicethread-llc/ami-bakery/buildkite_agent_frontend$ansible-galaxy -vvv install -r ./requirements.yml
[DEPRECATION WARNING]: Setting verbosity before the arg sub command is deprecated, set the verbosity after the sub command. This feature will be removed from
ansible-core in version 2.13. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
ansible-galaxy [core 2.11.2]
config file = /var/lib/buildkite-agent/.ansible.cfg
configured module search path = ['/var/lib/buildkite-agent/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible
ansible collection location = /var/lib/buildkite-agent/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible-galaxy
python version = 3.8.0 (default, Feb 25 2021, 22:10:10) [GCC 8.4.0]
jinja version = 2.10
libyaml = False
Using /var/lib/buildkite-agent/.ansible.cfg as config file
Reading requirement file at '/var/lib/buildkite-agent/builds/buildkite-agent-tools3208-1/voicethread-llc/ami-bakery/buildkite_agent_frontend/requirements.yml'
Cloning into '/tmp/ansible/tmp/ansible-local-10652xjwcbw9q/tmphvi5md99/ansible-collection-buildkitek95mgz7r'...
remote: Enumerating objects: 215, done.
remote: Counting objects: 100% (215/215), done.
remote: Compressing objects: 100% (160/160), done.
remote: Total 215 (delta 73), reused 176 (delta 42), pack-reused 0
Receiving objects: 100% (215/215), 28.94 KiB | 14.47 MiB/s, done.
Resolving deltas: 100% (73/73), done.
Your branch is up to date with 'origin/main'.
Starting galaxy collection install process
Process install dependency map
Opened /var/lib/buildkite-agent/.ansible/galaxy_token
ERROR! Unexpected Exception, this is probably a bug: 'float' object has no attribute 'startswith'
the full traceback was:
Traceback (most recent call last):
File "/usr/local/bin/ansible-galaxy", line 135, in <module>
exit_code = cli.run()
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 552, in run
return context.CLIARGS['func']()
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 75, in method_wrapper
return wrapped_method(*args, **kwargs)
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 1186, in execute_install
self._execute_install_collection(
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 1213, in _execute_install_collection
install_collections(
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/galaxy/collection/__init__.py", line 511, in install_collections
dependency_map = _resolve_depenency_map(
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/galaxy/collection/__init__.py", line 1328, in _resolve_depenency_map
return collection_dep_resolver.resolve(
File "/usr/local/lib/python3.8/dist-packages/resolvelib/resolvers.py", line 453, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
File "/usr/local/lib/python3.8/dist-packages/resolvelib/resolvers.py", line 347, in resolve
failure_causes = self._attempt_to_pin_criterion(name, criterion)
File "/usr/local/lib/python3.8/dist-packages/resolvelib/resolvers.py", line 216, in _attempt_to_pin_criterion
satisfied = all(
File "/usr/local/lib/python3.8/dist-packages/resolvelib/resolvers.py", line 217, in <genexpr>
self._p.is_satisfied_by(r, candidate)
File "/var/lib/buildkite-agent/.local/lib/python3.8/site-packages/ansible/galaxy/dependency_resolution/providers.py", line 280, in is_satisfied_by
requirement.ver.startswith('<') or
AttributeError: 'float' object has no attribute 'startswith'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75404
|
https://github.com/ansible/ansible/pull/76579
|
6e57c8c0844c44a102dc081b93d5299cae9619e2
|
15ace5a854886cd75894e7214209e4b027034bdb
| 2021-08-04T17:46:33Z |
python
| 2022-01-06T19:05:20Z |
test/integration/targets/ansible-galaxy-collection-scm/vars/main.yml
|
install_path: "{{ galaxy_dir }}/collections/ansible_collections"
alt_install_path: "{{ galaxy_dir }}/other_collections/ansible_collections"
scm_path: "{{ galaxy_dir }}/development"
test_repo_path: "{{ galaxy_dir }}/development/ansible_test"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,588 |
ansible.builtin.stat returns a property 'version' that is not documented
|
### Summary
The result of the module ansible.builtin.stat contains a variable "version".
It is not clear what that variable means. There is no documentation for this variable.
E.g.
```
"item": "/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.312.b07-1.el8_4.x86_64/jre/bin/java",
"stat": {
"atime": 1640017871.1250868,
...
"path": "/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.312.b07-1.el8_4.x86_64/jre/bin/java",
...
"uid": 0,
"version": "381700746",
...
```
### Issue Type
Documentation Report
### Component Name
lib/ansible/modules/stat.py
### Ansible Version
```console
ansible 2.9.27
config file = /home/bram.mertens/workspace/abxcfg/ansible.cfg
configured module search path = ['/home/bram.mertens/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Sep 12 2021, 04:40:35) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
```
### Configuration
```console
DEFAULT_HOST_LIST(/home/bram.mertens/workspace/abxcfg/ansible.cfg) = ['/home/bram.mertens/workspace/abxcfg/production']
DEFAULT_LOG_PATH(/home/bram.mertens/workspace/abxcfg/ansible.cfg) = /home/bram.mertens/workspace/abxcfg/ansible.log
DEFAULT_MANAGED_STR(/home/bram.mertens/workspace/abxcfg/ansible.cfg) = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host}
DEFAULT_ROLES_PATH(/home/bram.mertens/workspace/abxcfg/ansible.cfg) = ['/etc/ansible/roles', '/usr/share/ansible/roles', '/home/bram.mertens/workspace/abxcfg/roles']
DEFAULT_STDOUT_CALLBACK(/home/bram.mertens/workspace/abxcfg/ansible.cfg) = debug
INTERPRETER_PYTHON(/home/bram.mertens/workspace/abxcfg/ansible.cfg) = auto
```
### OS / Environment
RHEL8
### Additional Information
It is not clear what this variable means.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76588
|
https://github.com/ansible/ansible/pull/76589
|
15ace5a854886cd75894e7214209e4b027034bdb
|
8e0654504fa5c1b5813e4136b3718d2640eb08f2
| 2021-12-20T17:08:24Z |
python
| 2022-01-06T19:16:01Z |
lib/ansible/modules/stat.py
|
#!/usr/bin/python
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: stat
version_added: "1.3"
short_description: Retrieve file or file system status
description:
- Retrieves facts for a file similar to the Linux/Unix 'stat' command.
- For Windows targets, use the M(ansible.windows.win_stat) module instead.
options:
path:
description:
- The full path of the file/object to get the facts of.
type: path
required: true
aliases: [ dest, name ]
follow:
description:
- Whether to follow symlinks.
type: bool
default: no
get_checksum:
description:
- Whether to return a checksum of the file.
type: bool
default: yes
version_added: "1.8"
checksum_algorithm:
description:
- Algorithm to determine checksum of file.
- Will throw an error if the host is unable to use specified algorithm.
- The remote host has to support the hashing method specified, C(md5)
can be unavailable if the host is FIPS-140 compliant.
type: str
choices: [ md5, sha1, sha224, sha256, sha384, sha512 ]
default: sha1
aliases: [ checksum, checksum_algo ]
version_added: "2.0"
get_mime:
description:
- Use file magic and return data about the nature of the file. This uses
the 'file' utility found on most Linux/Unix systems.
- This will add both C(mime_type) and C(charset) fields to the return, if possible.
- In Ansible 2.3 this option changed from I(mime) to I(get_mime) and the default changed to C(yes).
type: bool
default: yes
aliases: [ mime, mime_type, mime-type ]
version_added: "2.1"
get_attributes:
description:
- Get file attributes using lsattr tool if present.
type: bool
default: yes
aliases: [ attr, attributes ]
version_added: "2.3"
extends_documentation_fragment:
- action_common_attributes
attributes:
check_mode:
support: full
diff_mode:
support: none
platform:
platforms: posix
seealso:
- module: ansible.builtin.file
- module: ansible.windows.win_stat
author: Bruce Pennypacker (@bpennypacker)
'''
EXAMPLES = r'''
# Obtain the stats of /etc/foo.conf, and check that the file still belongs
# to 'root'. Fail otherwise.
- name: Get stats of a file
ansible.builtin.stat:
path: /etc/foo.conf
register: st
- name: Fail if the file does not belong to 'root'
ansible.builtin.fail:
msg: "Whoops! file ownership has changed"
when: st.stat.pw_name != 'root'
# Determine if a path exists and is a symlink. Note that if the path does
# not exist, and we test sym.stat.islnk, it will fail with an error. So
# therefore, we must test whether it is defined.
# Run this to understand the structure, the skipped ones do not pass the
# check performed by 'when'
- name: Get stats of the FS object
ansible.builtin.stat:
path: /path/to/something
register: sym
- name: Print a debug message
ansible.builtin.debug:
msg: "islnk isn't defined (path doesn't exist)"
when: sym.stat.islnk is not defined
- name: Print a debug message
ansible.builtin.debug:
msg: "islnk is defined (path must exist)"
when: sym.stat.islnk is defined
- name: Print a debug message
ansible.builtin.debug:
msg: "Path exists and is a symlink"
when: sym.stat.islnk is defined and sym.stat.islnk
- name: Print a debug message
ansible.builtin.debug:
msg: "Path exists and isn't a symlink"
when: sym.stat.islnk is defined and sym.stat.islnk == False
# Determine if a path exists and is a directory. Note that we need to test
# both that p.stat.isdir actually exists, and also that it's set to true.
- name: Get stats of the FS object
ansible.builtin.stat:
path: /path/to/something
register: p
- name: Print a debug message
ansible.builtin.debug:
msg: "Path exists and is a directory"
when: p.stat.isdir is defined and p.stat.isdir
- name: Do not calculate checksum
ansible.builtin.stat:
path: /path/to/myhugefile
get_checksum: no
- name: Use sha256 to calculate checksum
ansible.builtin.stat:
path: /path/to/something
checksum_algorithm: sha256
'''
RETURN = r'''
stat:
description: Dictionary containing all the stat data, some platforms might add additional fields.
returned: success
type: complex
contains:
exists:
description: If the destination path actually exists or not
returned: success
type: bool
sample: True
path:
description: The full path of the file/object to get the facts of
returned: success and if path exists
type: str
sample: '/path/to/file'
mode:
description: Unix permissions of the file in octal representation as a string
returned: success, path exists and user can read stats
type: str
sample: 1755
isdir:
description: Tells you if the path is a directory
returned: success, path exists and user can read stats
type: bool
sample: False
ischr:
description: Tells you if the path is a character device
returned: success, path exists and user can read stats
type: bool
sample: False
isblk:
description: Tells you if the path is a block device
returned: success, path exists and user can read stats
type: bool
sample: False
isreg:
description: Tells you if the path is a regular file
returned: success, path exists and user can read stats
type: bool
sample: True
isfifo:
description: Tells you if the path is a named pipe
returned: success, path exists and user can read stats
type: bool
sample: False
islnk:
description: Tells you if the path is a symbolic link
returned: success, path exists and user can read stats
type: bool
sample: False
issock:
description: Tells you if the path is a unix domain socket
returned: success, path exists and user can read stats
type: bool
sample: False
uid:
description: Numeric id representing the file owner
returned: success, path exists and user can read stats
type: int
sample: 1003
gid:
description: Numeric id representing the group of the owner
returned: success, path exists and user can read stats
type: int
sample: 1003
size:
description: Size in bytes for a plain file, amount of data for some special files
returned: success, path exists and user can read stats
type: int
sample: 203
inode:
description: Inode number of the path
returned: success, path exists and user can read stats
type: int
sample: 12758
dev:
description: Device the inode resides on
returned: success, path exists and user can read stats
type: int
sample: 33
nlink:
description: Number of links to the inode (hard links)
returned: success, path exists and user can read stats
type: int
sample: 1
atime:
description: Time of last access
returned: success, path exists and user can read stats
type: float
sample: 1424348972.575
mtime:
description: Time of last modification
returned: success, path exists and user can read stats
type: float
sample: 1424348972.575
ctime:
description: Time of last metadata update or creation (depends on OS)
returned: success, path exists and user can read stats
type: float
sample: 1424348972.575
wusr:
description: Tells you if the owner has write permission
returned: success, path exists and user can read stats
type: bool
sample: True
rusr:
description: Tells you if the owner has read permission
returned: success, path exists and user can read stats
type: bool
sample: True
xusr:
description: Tells you if the owner has execute permission
returned: success, path exists and user can read stats
type: bool
sample: True
wgrp:
description: Tells you if the owner's group has write permission
returned: success, path exists and user can read stats
type: bool
sample: False
rgrp:
description: Tells you if the owner's group has read permission
returned: success, path exists and user can read stats
type: bool
sample: True
xgrp:
description: Tells you if the owner's group has execute permission
returned: success, path exists and user can read stats
type: bool
sample: True
woth:
description: Tells you if others have write permission
returned: success, path exists and user can read stats
type: bool
sample: False
roth:
description: Tells you if others have read permission
returned: success, path exists and user can read stats
type: bool
sample: True
xoth:
description: Tells you if others have execute permission
returned: success, path exists and user can read stats
type: bool
sample: True
isuid:
description: Tells you if the invoking user's id matches the owner's id
returned: success, path exists and user can read stats
type: bool
sample: False
isgid:
description: Tells you if the invoking user's group id matches the owner's group id
returned: success, path exists and user can read stats
type: bool
sample: False
lnk_source:
description: Target of the symlink normalized for the remote filesystem
returned: success, path exists and user can read stats and the path is a symbolic link
type: str
sample: /home/foobar/21102015-1445431274-908472971
lnk_target:
description: Target of the symlink. Note that relative paths remain relative
returned: success, path exists and user can read stats and the path is a symbolic link
type: str
sample: ../foobar/21102015-1445431274-908472971
version_added: 2.4
md5:
description: md5 hash of the file; this will be removed in Ansible 2.9 in
favor of the checksum return value
returned: success, path exists and user can read stats and path
supports hashing and md5 is supported
type: str
sample: f88fa92d8cf2eeecf4c0a50ccc96d0c0
checksum:
description: hash of the file
returned: success, path exists, user can read stats, path supports
hashing and supplied checksum algorithm is available
type: str
sample: 50ba294cdf28c0d5bcde25708df53346825a429f
pw_name:
description: User name of owner
returned: success, path exists and user can read stats and installed python supports it
type: str
sample: httpd
gr_name:
description: Group name of owner
returned: success, path exists and user can read stats and installed python supports it
type: str
sample: www-data
mimetype:
description: file magic data or mime-type
returned: success, path exists and user can read stats and
installed python supports it and the I(mime) option was true;
will be C(unknown) on error.
type: str
sample: application/pdf; charset=binary
charset:
description: file character set or encoding
returned: success, path exists and user can read stats and
installed python supports it and the I(mime) option was true;
will be C(unknown) on error.
type: str
sample: us-ascii
readable:
description: Tells you if the invoking user has the right to read the path
returned: success, path exists and user can read the path
type: bool
sample: False
version_added: 2.2
writeable:
description: Tells you if the invoking user has the right to write the path
returned: success, path exists and user can write the path
type: bool
sample: False
version_added: 2.2
executable:
description: Tells you if the invoking user has execute permission on the path
returned: success, path exists and user can execute the path
type: bool
sample: False
version_added: 2.2
attributes:
description: list of file attributes
returned: success, path exists and user can execute the path
type: list
sample: [ immutable, extent ]
version_added: 2.3
'''
import errno
import grp
import os
import pwd
import stat
# import module snippets
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_bytes
def format_output(module, path, st):
mode = st.st_mode
# back to ansible
output = dict(
exists=True,
path=path,
mode="%04o" % stat.S_IMODE(mode),
isdir=stat.S_ISDIR(mode),
ischr=stat.S_ISCHR(mode),
isblk=stat.S_ISBLK(mode),
isreg=stat.S_ISREG(mode),
isfifo=stat.S_ISFIFO(mode),
islnk=stat.S_ISLNK(mode),
issock=stat.S_ISSOCK(mode),
uid=st.st_uid,
gid=st.st_gid,
size=st.st_size,
inode=st.st_ino,
dev=st.st_dev,
nlink=st.st_nlink,
atime=st.st_atime,
mtime=st.st_mtime,
ctime=st.st_ctime,
wusr=bool(mode & stat.S_IWUSR),
rusr=bool(mode & stat.S_IRUSR),
xusr=bool(mode & stat.S_IXUSR),
wgrp=bool(mode & stat.S_IWGRP),
rgrp=bool(mode & stat.S_IRGRP),
xgrp=bool(mode & stat.S_IXGRP),
woth=bool(mode & stat.S_IWOTH),
roth=bool(mode & stat.S_IROTH),
xoth=bool(mode & stat.S_IXOTH),
isuid=bool(mode & stat.S_ISUID),
isgid=bool(mode & stat.S_ISGID),
)
# Platform dependent flags:
for other in [
# Some Linux
('st_blocks', 'blocks'),
('st_blksize', 'block_size'),
('st_rdev', 'device_type'),
('st_flags', 'flags'),
# Some Berkley based
('st_gen', 'generation'),
('st_birthtime', 'birthtime'),
# RISCOS
('st_ftype', 'file_type'),
('st_attrs', 'attrs'),
('st_obtype', 'object_type'),
# macOS
('st_rsize', 'real_size'),
('st_creator', 'creator'),
('st_type', 'file_type'),
]:
if hasattr(st, other[0]):
output[other[1]] = getattr(st, other[0])
return output
def main():
module = AnsibleModule(
argument_spec=dict(
path=dict(type='path', required=True, aliases=['dest', 'name']),
follow=dict(type='bool', default=False),
get_md5=dict(type='bool', default=False),
get_checksum=dict(type='bool', default=True),
get_mime=dict(type='bool', default=True, aliases=['mime', 'mime_type', 'mime-type']),
get_attributes=dict(type='bool', default=True, aliases=['attr', 'attributes']),
checksum_algorithm=dict(type='str', default='sha1',
choices=['md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512'],
aliases=['checksum', 'checksum_algo']),
),
supports_check_mode=True,
)
path = module.params.get('path')
b_path = to_bytes(path, errors='surrogate_or_strict')
follow = module.params.get('follow')
get_mime = module.params.get('get_mime')
get_attr = module.params.get('get_attributes')
get_checksum = module.params.get('get_checksum')
checksum_algorithm = module.params.get('checksum_algorithm')
# NOTE: undocumented option since 2.9 to be removed at a later date if possible (3.0+)
# no real reason for keeping other than fear we may break older content.
get_md5 = module.params.get('get_md5')
# main stat data
try:
if follow:
st = os.stat(b_path)
else:
st = os.lstat(b_path)
except OSError as e:
if e.errno == errno.ENOENT:
output = {'exists': False}
module.exit_json(changed=False, stat=output)
module.fail_json(msg=e.strerror)
# process base results
output = format_output(module, path, st)
# resolved permissions
for perm in [('readable', os.R_OK), ('writeable', os.W_OK), ('executable', os.X_OK)]:
output[perm[0]] = os.access(b_path, perm[1])
# symlink info
if output.get('islnk'):
output['lnk_source'] = os.path.realpath(b_path)
output['lnk_target'] = os.readlink(b_path)
try: # user data
pw = pwd.getpwuid(st.st_uid)
output['pw_name'] = pw.pw_name
except (TypeError, KeyError):
pass
try: # group data
grp_info = grp.getgrgid(st.st_gid)
output['gr_name'] = grp_info.gr_name
except (KeyError, ValueError, OverflowError):
pass
# checksums
if output.get('isreg') and output.get('readable'):
# NOTE: see above about get_md5
if get_md5:
# Will fail on FIPS-140 compliant systems
try:
output['md5'] = module.md5(b_path)
except ValueError:
output['md5'] = None
if get_checksum:
output['checksum'] = module.digest_from_file(b_path, checksum_algorithm)
# try to get mime data if requested
if get_mime:
output['mimetype'] = output['charset'] = 'unknown'
mimecmd = module.get_bin_path('file')
if mimecmd:
mimecmd = [mimecmd, '--mime-type', '--mime-encoding', b_path]
try:
rc, out, err = module.run_command(mimecmd)
if rc == 0:
mimetype, charset = out.rsplit(':', 1)[1].split(';')
output['mimetype'] = mimetype.strip()
output['charset'] = charset.split('=')[1].strip()
except Exception:
pass
# try to get attr data
if get_attr:
output['version'] = None
output['attributes'] = []
output['attr_flags'] = ''
out = module.get_file_attributes(b_path)
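# NOTE: get_file_attributes() can also return a 'version' key (the file's
# version/generation number gathered alongside the attributes, as printed
# by e.g. `lsattr -v`); the loop below copies it verbatim, which is how the
# initially undocumented 'version' field ends up in stat results (see
# issue #76588).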
for x in ('version', 'attributes', 'attr_flags'):
if x in out:
output[x] = out[x]
module.exit_json(changed=False, stat=output)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,588 |
ansible.builtin.stat returns a property 'version' that is not documented
|
### Summary
The result of the module ansible.builtin.stat contains a variable "version".
It is not clear what that variable means. There is no documentation for this variable.
E.g.
```
"item": "/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.312.b07-1.el8_4.x86_64/jre/bin/java",
"stat": {
"atime": 1640017871.1250868,
...
"path": "/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.312.b07-1.el8_4.x86_64/jre/bin/java",
...
"uid": 0,
"version": "381700746",
...
```
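For context (my own reading, not from the module docs): the value appears to be the file's version/generation number, gathered together with the file attributes. Outside Ansible a comparable number can be seen with something like the following (illustrative only; the path is a placeholder):
```python
# Illustrative: print the version/generation column reported by `lsattr -v`,
# which appears to match the undocumented 'version' in stat results.
import subprocess

out = subprocess.check_output(['lsattr', '-v', '/path/to/file'], text=True)
print(out.split()[0])  # first column is the version/generation number
```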
### Issue Type
Documentation Report
### Component Name
lib/ansible/modules/stat.py
### Ansible Version
```console
ansible 2.9.27
config file = /home/bram.mertens/workspace/abxcfg/ansible.cfg
configured module search path = ['/home/bram.mertens/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Sep 12 2021, 04:40:35) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
```
### Configuration
```console
DEFAULT_HOST_LIST(/home/bram.mertens/workspace/abxcfg/ansible.cfg) = ['/home/bram.mertens/workspace/abxcfg/production']
DEFAULT_LOG_PATH(/home/bram.mertens/workspace/abxcfg/ansible.cfg) = /home/bram.mertens/workspace/abxcfg/ansible.log
DEFAULT_MANAGED_STR(/home/bram.mertens/workspace/abxcfg/ansible.cfg) = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host}
DEFAULT_ROLES_PATH(/home/bram.mertens/workspace/abxcfg/ansible.cfg) = ['/etc/ansible/roles', '/usr/share/ansible/roles', '/home/bram.mertens/workspace/abxcfg/roles']
DEFAULT_STDOUT_CALLBACK(/home/bram.mertens/workspace/abxcfg/ansible.cfg) = debug
INTERPRETER_PYTHON(/home/bram.mertens/workspace/abxcfg/ansible.cfg) = auto
```
### OS / Environment
RHEL8
### Additional Information
It is not clear what this variable means.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76588
|
https://github.com/ansible/ansible/pull/76589
|
15ace5a854886cd75894e7214209e4b027034bdb
|
8e0654504fa5c1b5813e4136b3718d2640eb08f2
| 2021-12-20T17:08:24Z |
python
| 2022-01-06T19:16:01Z |
test/integration/targets/stat/meta/main.yml
|
dependencies:
- prepare_tests
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,588 |
ansible.builtin.stat returns a property 'version' that is not documented
|
### Summary
The result of the module ansible.builtin.stat contains a variable "version".
It is not clear what that variable means. There is no documentation for this variable.
E.g.
```
"item": "/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.312.b07-1.el8_4.x86_64/jre/bin/java",
"stat": {
"atime": 1640017871.1250868,
...
"path": "/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.312.b07-1.el8_4.x86_64/jre/bin/java",
...
"uid": 0,
"version": "381700746",
...
```
### Issue Type
Documentation Report
### Component Name
lib/ansible/modules/stat.py
### Ansible Version
```console
ansible 2.9.27
config file = /home/bram.mertens/workspace/abxcfg/ansible.cfg
configured module search path = ['/home/bram.mertens/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Sep 12 2021, 04:40:35) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
```
### Configuration
```console
DEFAULT_HOST_LIST(/home/bram.mertens/workspace/abxcfg/ansible.cfg) = ['/home/bram.mertens/workspace/abxcfg/production']
DEFAULT_LOG_PATH(/home/bram.mertens/workspace/abxcfg/ansible.cfg) = /home/bram.mertens/workspace/abxcfg/ansible.log
DEFAULT_MANAGED_STR(/home/bram.mertens/workspace/abxcfg/ansible.cfg) = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host}
DEFAULT_ROLES_PATH(/home/bram.mertens/workspace/abxcfg/ansible.cfg) = ['/etc/ansible/roles', '/usr/share/ansible/roles', '/home/bram.mertens/workspace/abxcfg/roles']
DEFAULT_STDOUT_CALLBACK(/home/bram.mertens/workspace/abxcfg/ansible.cfg) = debug
INTERPRETER_PYTHON(/home/bram.mertens/workspace/abxcfg/ansible.cfg) = auto
```
### OS / Environment
RHEL8
### Additional Information
It is not clear what this variable means.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76588
|
https://github.com/ansible/ansible/pull/76589
|
15ace5a854886cd75894e7214209e4b027034bdb
|
8e0654504fa5c1b5813e4136b3718d2640eb08f2
| 2021-12-20T17:08:24Z |
python
| 2022-01-06T19:16:01Z |
test/integration/targets/stat/tasks/main.yml
|
# test code for the stat module
# (c) 2014, James Tanner <[email protected]>
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
- name: make a new file
copy: dest={{output_dir}}/foo.txt mode=0644 content="hello world"
- name: check stat of file
stat: path={{output_dir}}/foo.txt
register: stat_result
- debug: var=stat_result
- assert:
that:
- "'changed' in stat_result"
- "stat_result.changed == false"
- "'stat' in stat_result"
- "'atime' in stat_result.stat"
- "'ctime' in stat_result.stat"
- "'dev' in stat_result.stat"
- "'exists' in stat_result.stat"
- "'gid' in stat_result.stat"
- "'inode' in stat_result.stat"
- "'isblk' in stat_result.stat"
- "'ischr' in stat_result.stat"
- "'isdir' in stat_result.stat"
- "'isfifo' in stat_result.stat"
- "'isgid' in stat_result.stat"
- "'isreg' in stat_result.stat"
- "'issock' in stat_result.stat"
- "'isuid' in stat_result.stat"
- "'checksum' in stat_result.stat"
- "stat_result.stat.checksum == '2aae6c35c94fcfb415dbe95f408b9ce91ee846ed'"
- "'mode' in stat_result.stat"
- "'mtime' in stat_result.stat"
- "'nlink' in stat_result.stat"
- "'pw_name' in stat_result.stat"
- "'rgrp' in stat_result.stat"
- "'roth' in stat_result.stat"
- "'rusr' in stat_result.stat"
- "'size' in stat_result.stat"
- "'uid' in stat_result.stat"
- "'wgrp' in stat_result.stat"
- "'woth' in stat_result.stat"
- "'wusr' in stat_result.stat"
- "'xgrp' in stat_result.stat"
- "'xoth' in stat_result.stat"
- "'xusr' in stat_result.stat"
- name: make a symlink
file:
src: "{{ output_dir }}/foo.txt"
path: "{{ output_dir }}/foo-link"
state: link
- name: check stat of a symlink with follow off
stat:
path: "{{ output_dir }}/foo-link"
register: stat_result
- debug: var=stat_result
- assert:
that:
- "'changed' in stat_result"
- "stat_result.changed == false"
- "'stat' in stat_result"
- "'atime' in stat_result.stat"
- "'ctime' in stat_result.stat"
- "'dev' in stat_result.stat"
- "'exists' in stat_result.stat"
- "'gid' in stat_result.stat"
- "'inode' in stat_result.stat"
- "'isblk' in stat_result.stat"
- "'ischr' in stat_result.stat"
- "'isdir' in stat_result.stat"
- "'isfifo' in stat_result.stat"
- "'isgid' in stat_result.stat"
- "'isreg' in stat_result.stat"
- "'issock' in stat_result.stat"
- "'isuid' in stat_result.stat"
- "'islnk' in stat_result.stat"
- "'mode' in stat_result.stat"
- "'mtime' in stat_result.stat"
- "'nlink' in stat_result.stat"
- "'pw_name' in stat_result.stat"
- "'rgrp' in stat_result.stat"
- "'roth' in stat_result.stat"
- "'rusr' in stat_result.stat"
- "'size' in stat_result.stat"
- "'uid' in stat_result.stat"
- "'wgrp' in stat_result.stat"
- "'woth' in stat_result.stat"
- "'wusr' in stat_result.stat"
- "'xgrp' in stat_result.stat"
- "'xoth' in stat_result.stat"
- "'xusr' in stat_result.stat"
- name: check stat of a symlink with follow on
stat:
path: "{{ output_dir }}/foo-link"
follow: True
register: stat_result
- debug: var=stat_result
- assert:
that:
- "'changed' in stat_result"
- "stat_result.changed == false"
- "'stat' in stat_result"
- "'atime' in stat_result.stat"
- "'ctime' in stat_result.stat"
- "'dev' in stat_result.stat"
- "'exists' in stat_result.stat"
- "'gid' in stat_result.stat"
- "'inode' in stat_result.stat"
- "'isblk' in stat_result.stat"
- "'ischr' in stat_result.stat"
- "'isdir' in stat_result.stat"
- "'isfifo' in stat_result.stat"
- "'isgid' in stat_result.stat"
- "'isreg' in stat_result.stat"
- "'issock' in stat_result.stat"
- "'isuid' in stat_result.stat"
- "'checksum' in stat_result.stat"
- "stat_result.stat.checksum == '2aae6c35c94fcfb415dbe95f408b9ce91ee846ed'"
- "'mode' in stat_result.stat"
- "'mtime' in stat_result.stat"
- "'nlink' in stat_result.stat"
- "'pw_name' in stat_result.stat"
- "'rgrp' in stat_result.stat"
- "'roth' in stat_result.stat"
- "'rusr' in stat_result.stat"
- "'size' in stat_result.stat"
- "'uid' in stat_result.stat"
- "'wgrp' in stat_result.stat"
- "'woth' in stat_result.stat"
- "'wusr' in stat_result.stat"
- "'xgrp' in stat_result.stat"
- "'xoth' in stat_result.stat"
- "'xusr' in stat_result.stat"
- name: make a new file with colon in filename
copy:
dest: "{{ output_dir }}/foo:bar.txt"
mode: '0644'
content: "hello world"
- name: check stat of a file with colon in name
stat:
path: "{{ output_dir }}/foo:bar.txt"
follow: True
register: stat_result
- debug:
var: stat_result
- assert:
that:
- "'changed' in stat_result"
- "stat_result.changed == false"
- "stat_result.stat.mimetype == 'text/plain'"
- "stat_result.stat.charset == 'us-ascii'"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,538 |
systemd module not idempotent with aliased service and systemd 247 on Debian 11 (bullseye)
|
### Summary
Disabling the `openbsd-inetd` service on Debian 11 (bullseye) works, but keeps on reporting 'changed' incorrectly. The reason seems to be that `systemctl is-enabled openbsd-inetd` returns `alias` instead of the expected `disabled`.
### Issue Type
Bug Report
### Component Name
systemd
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.4]
config file = /home/ansible/ansible.cfg
configured module search path = ['/home/ansible/library']
ansible python module location = /usr/local/share/venv_projects/ansible/lib64/python3.8/site-packages/ansible
ansible collection location = /home/ansible/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/share/venv_projects/ansible/bin/ansible
python version = 3.8.6 (default, Oct 27 2020, 09:13:12) [GCC 9.3.1 20200408 (Red Hat 9.3.1-2)]
jinja version = 3.0.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
Target OS runs Debian 11 (bullseye):
```
# cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
NAME="Debian GNU/Linux"
VERSION_ID="11"
VERSION="11 (bullseye)"
VERSION_CODENAME=bullseye
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
```
Systemd version:
```
# systemctl --version
systemd 247 (247.3-6)
+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +ZSTD +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=unified
```
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
Test playbook
```yaml (paste below)
---
- hosts: all
tasks:
- name: Disable services
systemd:
name: openbsd-inetd
enabled: no
```
### Expected Results
After running this twice, or when running on a system with `openbsd-inetd` disabled, I would expect this to return `ok`.
### Actual Results
Instead, it continues to return `changed`:
```
TASK [Disable services] *******************************************************************************************************************************************************************************************************************
[..]
changed: [IP] => {
"changed": true,
"enabled": false,
"invocation": {
"module_args": {
"daemon_reexec": false,
"daemon_reload": false,
"enabled": false,
"force": null,
"masked": null,
"name": "openbsd-inetd",
"no_block": false,
"scope": "system",
"state": null
}
},
"name": "openbsd-inetd",
"status": {
"ActiveEnterTimestampMonotonic": "0",
"ActiveExitTimestampMonotonic": "0",
"ActiveState": "inactive",
"After": "sysinit.target network-online.target system.slice basic.target systemd-journald.socket",
"AllowIsolate": "no",
"AllowedCPUs": "",
"AllowedMemoryNodes": "",
"AmbientCapabilities": "",
"AssertResult": "no",
"AssertTimestampMonotonic": "0",
"Before": "shutdown.target",
"BlockIOAccounting": "no",
"BlockIOWeight": "[not set]",
"CPUAccounting": "yes",
"CPUAffinity": "",
"CPUAffinityFromNUMA": "no",
"CPUQuotaPerSecUSec": "infinity",
"CPUQuotaPeriodUSec": "infinity",
"CPUSchedulingPolicy": "0",
"CPUSchedulingPriority": "0",
"CPUSchedulingResetOnFork": "no",
"CPUShares": "[not set]",
"CPUUsageNSec": "[not set]",
"CPUWeight": "[not set]",
"CacheDirectoryMode": "0755",
"CanFreeze": "yes",
"CanIsolate": "no",
"CanReload": "yes",
"CanStart": "yes",
"CanStop": "yes",
"CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf cap_checkpoint_restore",
"CleanResult": "success",
"CollectMode": "inactive",
"ConditionResult": "no",
"ConditionTimestampMonotonic": "0",
"ConfigurationDirectoryMode": "0755",
"Conflicts": "shutdown.target",
"ControlPID": "0",
"CoredumpFilter": "0x33",
"DefaultDependencies": "yes",
"DefaultMemoryLow": "0",
"DefaultMemoryMin": "0",
"Delegate": "no",
"Description": "Internet superserver",
"DevicePolicy": "auto",
"Documentation": "\"man:inetd(8)\"",
"DynamicUser": "no",
"EffectiveCPUs": "",
"EffectiveMemoryNodes": "",
"ExecMainCode": "0",
"ExecMainExitTimestampMonotonic": "0",
"ExecMainPID": "0",
"ExecMainStartTimestampMonotonic": "0",
"ExecMainStatus": "0",
"ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }",
"ExecReloadEx": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }",
"ExecStart": "{ path=/usr/sbin/inetd ; argv[]=/usr/sbin/inetd ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }",
"ExecStartEx": "{ path=/usr/sbin/inetd ; argv[]=/usr/sbin/inetd ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }",
"FailureAction": "none",
"FileDescriptorStoreMax": "0",
"FinalKillSignal": "9",
"FragmentPath": "/lib/systemd/system/inetd.service",
"FreezerState": "running",
"GID": "[not set]",
"GuessMainPID": "yes",
"IOAccounting": "no",
"IOReadBytes": "18446744073709551615",
"IOReadOperations": "18446744073709551615",
"IOSchedulingClass": "0",
"IOSchedulingPriority": "0",
"IOWeight": "[not set]",
"IOWriteBytes": "18446744073709551615",
"IOWriteOperations": "18446744073709551615",
"IPAccounting": "no",
"IPEgressBytes": "[no data]",
"IPEgressPackets": "[no data]",
"IPIngressBytes": "[no data]",
"IPIngressPackets": "[no data]",
"Id": "inetd.service",
"IgnoreOnIsolate": "no",
"IgnoreSIGPIPE": "yes",
"InactiveEnterTimestampMonotonic": "0",
"InactiveExitTimestampMonotonic": "0",
"JobRunningTimeoutUSec": "infinity",
"JobTimeoutAction": "none",
"JobTimeoutUSec": "infinity",
"KeyringMode": "private",
"KillMode": "process",
"KillSignal": "15",
"LimitAS": "infinity",
"LimitASSoft": "infinity",
"LimitCORE": "infinity",
"LimitCORESoft": "0",
"LimitCPU": "infinity",
"LimitCPUSoft": "infinity",
"LimitDATA": "infinity",
"LimitDATASoft": "infinity",
"LimitFSIZE": "infinity",
"LimitFSIZESoft": "infinity",
"LimitLOCKS": "infinity",
"LimitLOCKSSoft": "infinity",
"LimitMEMLOCK": "65536",
"LimitMEMLOCKSoft": "65536",
"LimitMSGQUEUE": "819200",
"LimitMSGQUEUESoft": "819200",
"LimitNICE": "0",
"LimitNICESoft": "0",
"LimitNOFILE": "524288",
"LimitNOFILESoft": "1024",
"LimitNPROC": "3775",
"LimitNPROCSoft": "3775",
"LimitRSS": "infinity",
"LimitRSSSoft": "infinity",
"LimitRTPRIO": "0",
"LimitRTPRIOSoft": "0",
"LimitRTTIME": "infinity",
"LimitRTTIMESoft": "infinity",
"LimitSIGPENDING": "3775",
"LimitSIGPENDINGSoft": "3775",
"LimitSTACK": "infinity",
"LimitSTACKSoft": "8388608",
"LoadState": "loaded",
"LockPersonality": "no",
"LogLevelMax": "-1",
"LogRateLimitBurst": "0",
"LogRateLimitIntervalUSec": "0",
"LogsDirectoryMode": "0755",
"MainPID": "0",
"ManagedOOMMemoryPressure": "auto",
"ManagedOOMMemoryPressureLimitPercent": "0%",
"ManagedOOMSwap": "auto",
"MemoryAccounting": "yes",
"MemoryCurrent": "[not set]",
"MemoryDenyWriteExecute": "no",
"MemoryHigh": "infinity",
"MemoryLimit": "infinity",
"MemoryLow": "0",
"MemoryMax": "infinity",
"MemoryMin": "0",
"MemorySwapMax": "infinity",
"MountAPIVFS": "no",
"MountFlags": "",
"NFileDescriptorStore": "0",
"NRestarts": "0",
"NUMAMask": "",
"NUMAPolicy": "n/a",
"Names": "inetd.service openbsd-inetd.service",
"NeedDaemonReload": "no",
"Nice": "0",
"NoNewPrivileges": "no",
"NonBlocking": "no",
"NotifyAccess": "main",
"OOMPolicy": "stop",
"OOMScoreAdjust": "0",
"OnFailureJobMode": "replace",
"Perpetual": "no",
"PrivateDevices": "no",
"PrivateMounts": "no",
"PrivateNetwork": "no",
"PrivateTmp": "no",
"PrivateUsers": "no",
"ProcSubset": "all",
"ProtectClock": "no",
"ProtectControlGroups": "no",
"ProtectHome": "no",
"ProtectHostname": "no",
"ProtectKernelLogs": "no",
"ProtectKernelModules": "no",
"ProtectKernelTunables": "no",
"ProtectProc": "default",
"ProtectSystem": "no",
"RefuseManualStart": "no",
"RefuseManualStop": "no",
"ReloadResult": "success",
"RemainAfterExit": "no",
"RemoveIPC": "no",
"Requires": "system.slice sysinit.target",
"Restart": "on-failure",
"RestartKillSignal": "15",
"RestartUSec": "100ms",
"RestrictNamespaces": "no",
"RestrictRealtime": "no",
"RestrictSUIDSGID": "no",
"Result": "success",
"RootDirectoryStartOnly": "no",
"RootHashSignature": "",
"RuntimeDirectoryMode": "0755",
"RuntimeDirectoryPreserve": "no",
"RuntimeMaxUSec": "infinity",
"SameProcessGroup": "no",
"SecureBits": "0",
"SendSIGHUP": "no",
"SendSIGKILL": "yes",
"Slice": "system.slice",
"StandardError": "inherit",
"StandardInput": "null",
"StandardInputData": "",
"StandardOutput": "journal",
"StartLimitAction": "none",
"StartLimitBurst": "5",
"StartLimitIntervalUSec": "10s",
"StartupBlockIOWeight": "[not set]",
"StartupCPUShares": "[not set]",
"StartupCPUWeight": "[not set]",
"StartupIOWeight": "[not set]",
"StateChangeTimestampMonotonic": "0",
"StateDirectoryMode": "0755",
"StatusErrno": "0",
"StopWhenUnneeded": "no",
"SubState": "dead",
"SuccessAction": "none",
"SyslogFacility": "3",
"SyslogLevel": "6",
"SyslogLevelPrefix": "yes",
"SyslogPriority": "30",
"SystemCallErrorNumber": "2147483646",
"TTYReset": "no",
"TTYVHangup": "no",
"TTYVTDisallocate": "no",
"TasksAccounting": "yes",
"TasksCurrent": "[not set]",
"TasksMax": "1132",
"TimeoutAbortUSec": "1min 30s",
"TimeoutCleanUSec": "infinity",
"TimeoutStartFailureMode": "terminate",
"TimeoutStartUSec": "1min 30s",
"TimeoutStopFailureMode": "terminate",
"TimeoutStopUSec": "1min 30s",
"TimerSlackNSec": "50000",
"Transient": "no",
"Type": "notify",
"UID": "[not set]",
"UMask": "0022",
"UnitFilePreset": "enabled",
"UnitFileState": "disabled",
"UtmpMode": "init",
"Wants": "network-online.target",
"WatchdogSignal": "6",
"WatchdogTimestampMonotonic": "0",
"WatchdogUSec": "infinity"
}
}
```
On the system it is clear that it has been disabled:
```
# systemctl status openbsd-inetd
● inetd.service - Internet superserver
Loaded: loaded (/lib/systemd/system/inetd.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:inetd(8)
Aug 20 07:11:24 testing-fc18-20210819-163020-0 systemd[1]: Starting Internet superserver...
Aug 20 07:11:24 testing-fc18-20210819-163020-0 systemd[1]: Started Internet superserver.
Aug 20 07:12:05 testing-fc18-20210819-163020-0 systemd[1]: Stopping Internet superserver...
Aug 20 07:12:05 testing-fc18-20210819-163020-0 systemd[1]: inetd.service: Succeeded.
Aug 20 07:12:05 testing-fc18-20210819-163020-0 systemd[1]: Stopped Internet superserver.
```
Looking at the `systemd.py` code, I believe the problem is that `is-enabled` returns `alias`:
```
# systemctl is-enabled openbsd-inetd
alias
```
The alias is `inetd`:
```
# ls -l /lib/systemd/system/openbsd-inetd.service
lrwxrwxrwx 1 root root 13 Aug 25 2020 /lib/systemd/system/openbsd-inetd.service -> inetd.service
```
When targeting `inetd` instead of `openbsd-inetd` in the test playbook above, things work as expected.
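One possible direction for a fix (a sketch under assumptions: it reuses the module's existing `module`, `systemctl` and `result` names, and the real handling is whatever the linked PR settled on):
```python
# Sketch only, not the actual patch: when `is-enabled` answers "alias",
# re-check the canonical unit that systemd reports as Id (here inetd.service).
rc, out, err = module.run_command("%s is-enabled '%s'" % (systemctl, unit))
if out.strip() == 'alias':
    # result['status'] was filled earlier from `systemctl show <unit>`;
    # its 'Id' holds the canonical unit name behind the alias.
    canonical = result['status'].get('Id', unit)
    rc, out, err = module.run_command("%s is-enabled '%s'" % (systemctl, canonical))
enabled = rc == 0 and out.strip() != 'indirect'  # simplified decision
```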
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75538
|
https://github.com/ansible/ansible/pull/76608
|
4d380dcbaa73a98fec9916abf738a958baf363ea
|
ffd0343670aeb597ed5c015322bcd098c6df25b9
| 2021-08-20T09:40:13Z |
python
| 2022-01-06T20:10:29Z |
changelogs/fragments/75538-systemd-alias-check.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,538 |
systemd module not idempotent with aliased service and systemd 247 on Debian 11 (bullseye)
|
### Summary
Disabling the `openbsd-inetd` service on Debian 11 (bullseye) works, but keeps on reporting 'changed' incorrectly. The reason seems to be that `systemctl is-enabled openbsd-inetd` returns `alias` instead of the expected `disabled`.
### Issue Type
Bug Report
### Component Name
systemd
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.4]
config file = /home/ansible/ansible.cfg
configured module search path = ['/home/ansible/library']
ansible python module location = /usr/local/share/venv_projects/ansible/lib64/python3.8/site-packages/ansible
ansible collection location = /home/ansible/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/share/venv_projects/ansible/bin/ansible
python version = 3.8.6 (default, Oct 27 2020, 09:13:12) [GCC 9.3.1 20200408 (Red Hat 9.3.1-2)]
jinja version = 3.0.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
Target OS runs Debian 11 (bullseye):
```
# cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
NAME="Debian GNU/Linux"
VERSION_ID="11"
VERSION="11 (bullseye)"
VERSION_CODENAME=bullseye
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
```
Systemd version:
```
# systemctl --version
systemd 247 (247.3-6)
+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +ZSTD +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=unified
```
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
Test playbook
```yaml (paste below)
---
- hosts: all
tasks:
- name: Disable services
systemd:
name: openbsd-inetd
enabled: no
```
### Expected Results
After running this twice, or when running on a system with `openbsd-inetd` disabled, I would expect this to return `ok`.
### Actual Results
Instead, it continues to return `changed`:
```
TASK [Disable services] *******************************************************************************************************************************************************************************************************************
[..]
changed: [IP] => {
"changed": true,
"enabled": false,
"invocation": {
"module_args": {
"daemon_reexec": false,
"daemon_reload": false,
"enabled": false,
"force": null,
"masked": null,
"name": "openbsd-inetd",
"no_block": false,
"scope": "system",
"state": null
}
},
"name": "openbsd-inetd",
"status": {
"ActiveEnterTimestampMonotonic": "0",
"ActiveExitTimestampMonotonic": "0",
"ActiveState": "inactive",
"After": "sysinit.target network-online.target system.slice basic.target systemd-journald.socket",
"AllowIsolate": "no",
"AllowedCPUs": "",
"AllowedMemoryNodes": "",
"AmbientCapabilities": "",
"AssertResult": "no",
"AssertTimestampMonotonic": "0",
"Before": "shutdown.target",
"BlockIOAccounting": "no",
"BlockIOWeight": "[not set]",
"CPUAccounting": "yes",
"CPUAffinity": "",
"CPUAffinityFromNUMA": "no",
"CPUQuotaPerSecUSec": "infinity",
"CPUQuotaPeriodUSec": "infinity",
"CPUSchedulingPolicy": "0",
"CPUSchedulingPriority": "0",
"CPUSchedulingResetOnFork": "no",
"CPUShares": "[not set]",
"CPUUsageNSec": "[not set]",
"CPUWeight": "[not set]",
"CacheDirectoryMode": "0755",
"CanFreeze": "yes",
"CanIsolate": "no",
"CanReload": "yes",
"CanStart": "yes",
"CanStop": "yes",
"CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf cap_checkpoint_restore",
"CleanResult": "success",
"CollectMode": "inactive",
"ConditionResult": "no",
"ConditionTimestampMonotonic": "0",
"ConfigurationDirectoryMode": "0755",
"Conflicts": "shutdown.target",
"ControlPID": "0",
"CoredumpFilter": "0x33",
"DefaultDependencies": "yes",
"DefaultMemoryLow": "0",
"DefaultMemoryMin": "0",
"Delegate": "no",
"Description": "Internet superserver",
"DevicePolicy": "auto",
"Documentation": "\"man:inetd(8)\"",
"DynamicUser": "no",
"EffectiveCPUs": "",
"EffectiveMemoryNodes": "",
"ExecMainCode": "0",
"ExecMainExitTimestampMonotonic": "0",
"ExecMainPID": "0",
"ExecMainStartTimestampMonotonic": "0",
"ExecMainStatus": "0",
"ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }",
"ExecReloadEx": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }",
"ExecStart": "{ path=/usr/sbin/inetd ; argv[]=/usr/sbin/inetd ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }",
"ExecStartEx": "{ path=/usr/sbin/inetd ; argv[]=/usr/sbin/inetd ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }",
"FailureAction": "none",
"FileDescriptorStoreMax": "0",
"FinalKillSignal": "9",
"FragmentPath": "/lib/systemd/system/inetd.service",
"FreezerState": "running",
"GID": "[not set]",
"GuessMainPID": "yes",
"IOAccounting": "no",
"IOReadBytes": "18446744073709551615",
"IOReadOperations": "18446744073709551615",
"IOSchedulingClass": "0",
"IOSchedulingPriority": "0",
"IOWeight": "[not set]",
"IOWriteBytes": "18446744073709551615",
"IOWriteOperations": "18446744073709551615",
"IPAccounting": "no",
"IPEgressBytes": "[no data]",
"IPEgressPackets": "[no data]",
"IPIngressBytes": "[no data]",
"IPIngressPackets": "[no data]",
"Id": "inetd.service",
"IgnoreOnIsolate": "no",
"IgnoreSIGPIPE": "yes",
"InactiveEnterTimestampMonotonic": "0",
"InactiveExitTimestampMonotonic": "0",
"JobRunningTimeoutUSec": "infinity",
"JobTimeoutAction": "none",
"JobTimeoutUSec": "infinity",
"KeyringMode": "private",
"KillMode": "process",
"KillSignal": "15",
"LimitAS": "infinity",
"LimitASSoft": "infinity",
"LimitCORE": "infinity",
"LimitCORESoft": "0",
"LimitCPU": "infinity",
"LimitCPUSoft": "infinity",
"LimitDATA": "infinity",
"LimitDATASoft": "infinity",
"LimitFSIZE": "infinity",
"LimitFSIZESoft": "infinity",
"LimitLOCKS": "infinity",
"LimitLOCKSSoft": "infinity",
"LimitMEMLOCK": "65536",
"LimitMEMLOCKSoft": "65536",
"LimitMSGQUEUE": "819200",
"LimitMSGQUEUESoft": "819200",
"LimitNICE": "0",
"LimitNICESoft": "0",
"LimitNOFILE": "524288",
"LimitNOFILESoft": "1024",
"LimitNPROC": "3775",
"LimitNPROCSoft": "3775",
"LimitRSS": "infinity",
"LimitRSSSoft": "infinity",
"LimitRTPRIO": "0",
"LimitRTPRIOSoft": "0",
"LimitRTTIME": "infinity",
"LimitRTTIMESoft": "infinity",
"LimitSIGPENDING": "3775",
"LimitSIGPENDINGSoft": "3775",
"LimitSTACK": "infinity",
"LimitSTACKSoft": "8388608",
"LoadState": "loaded",
"LockPersonality": "no",
"LogLevelMax": "-1",
"LogRateLimitBurst": "0",
"LogRateLimitIntervalUSec": "0",
"LogsDirectoryMode": "0755",
"MainPID": "0",
"ManagedOOMMemoryPressure": "auto",
"ManagedOOMMemoryPressureLimitPercent": "0%",
"ManagedOOMSwap": "auto",
"MemoryAccounting": "yes",
"MemoryCurrent": "[not set]",
"MemoryDenyWriteExecute": "no",
"MemoryHigh": "infinity",
"MemoryLimit": "infinity",
"MemoryLow": "0",
"MemoryMax": "infinity",
"MemoryMin": "0",
"MemorySwapMax": "infinity",
"MountAPIVFS": "no",
"MountFlags": "",
"NFileDescriptorStore": "0",
"NRestarts": "0",
"NUMAMask": "",
"NUMAPolicy": "n/a",
"Names": "inetd.service openbsd-inetd.service",
"NeedDaemonReload": "no",
"Nice": "0",
"NoNewPrivileges": "no",
"NonBlocking": "no",
"NotifyAccess": "main",
"OOMPolicy": "stop",
"OOMScoreAdjust": "0",
"OnFailureJobMode": "replace",
"Perpetual": "no",
"PrivateDevices": "no",
"PrivateMounts": "no",
"PrivateNetwork": "no",
"PrivateTmp": "no",
"PrivateUsers": "no",
"ProcSubset": "all",
"ProtectClock": "no",
"ProtectControlGroups": "no",
"ProtectHome": "no",
"ProtectHostname": "no",
"ProtectKernelLogs": "no",
"ProtectKernelModules": "no",
"ProtectKernelTunables": "no",
"ProtectProc": "default",
"ProtectSystem": "no",
"RefuseManualStart": "no",
"RefuseManualStop": "no",
"ReloadResult": "success",
"RemainAfterExit": "no",
"RemoveIPC": "no",
"Requires": "system.slice sysinit.target",
"Restart": "on-failure",
"RestartKillSignal": "15",
"RestartUSec": "100ms",
"RestrictNamespaces": "no",
"RestrictRealtime": "no",
"RestrictSUIDSGID": "no",
"Result": "success",
"RootDirectoryStartOnly": "no",
"RootHashSignature": "",
"RuntimeDirectoryMode": "0755",
"RuntimeDirectoryPreserve": "no",
"RuntimeMaxUSec": "infinity",
"SameProcessGroup": "no",
"SecureBits": "0",
"SendSIGHUP": "no",
"SendSIGKILL": "yes",
"Slice": "system.slice",
"StandardError": "inherit",
"StandardInput": "null",
"StandardInputData": "",
"StandardOutput": "journal",
"StartLimitAction": "none",
"StartLimitBurst": "5",
"StartLimitIntervalUSec": "10s",
"StartupBlockIOWeight": "[not set]",
"StartupCPUShares": "[not set]",
"StartupCPUWeight": "[not set]",
"StartupIOWeight": "[not set]",
"StateChangeTimestampMonotonic": "0",
"StateDirectoryMode": "0755",
"StatusErrno": "0",
"StopWhenUnneeded": "no",
"SubState": "dead",
"SuccessAction": "none",
"SyslogFacility": "3",
"SyslogLevel": "6",
"SyslogLevelPrefix": "yes",
"SyslogPriority": "30",
"SystemCallErrorNumber": "2147483646",
"TTYReset": "no",
"TTYVHangup": "no",
"TTYVTDisallocate": "no",
"TasksAccounting": "yes",
"TasksCurrent": "[not set]",
"TasksMax": "1132",
"TimeoutAbortUSec": "1min 30s",
"TimeoutCleanUSec": "infinity",
"TimeoutStartFailureMode": "terminate",
"TimeoutStartUSec": "1min 30s",
"TimeoutStopFailureMode": "terminate",
"TimeoutStopUSec": "1min 30s",
"TimerSlackNSec": "50000",
"Transient": "no",
"Type": "notify",
"UID": "[not set]",
"UMask": "0022",
"UnitFilePreset": "enabled",
"UnitFileState": "disabled",
"UtmpMode": "init",
"Wants": "network-online.target",
"WatchdogSignal": "6",
"WatchdogTimestampMonotonic": "0",
"WatchdogUSec": "infinity"
}
}
```
On the system it is clear that it has been disabled:
```
# systemctl status openbsd-inetd
● inetd.service - Internet superserver
Loaded: loaded (/lib/systemd/system/inetd.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:inetd(8)
Aug 20 07:11:24 testing-fc18-20210819-163020-0 systemd[1]: Starting Internet superserver...
Aug 20 07:11:24 testing-fc18-20210819-163020-0 systemd[1]: Started Internet superserver.
Aug 20 07:12:05 testing-fc18-20210819-163020-0 systemd[1]: Stopping Internet superserver...
Aug 20 07:12:05 testing-fc18-20210819-163020-0 systemd[1]: inetd.service: Succeeded.
Aug 20 07:12:05 testing-fc18-20210819-163020-0 systemd[1]: Stopped Internet superserver.
```
Looking at the `systemd.py` code, I believe the problem is that `is-enabled` returns `alias`:
```
# systemctl is-enabled openbsd-inetd
alias
```
The alias is `inetd`:
```
# ls -l /lib/systemd/system/openbsd-inetd.service
lrwxrwxrwx 1 root root 13 Aug 25 2020 /lib/systemd/system/openbsd-inetd.service -> inetd.service
```
When targeting `inetd` instead of `openbsd-inetd` in the test playbook above, things work as expected.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75538
|
https://github.com/ansible/ansible/pull/76608
|
4d380dcbaa73a98fec9916abf738a958baf363ea
|
ffd0343670aeb597ed5c015322bcd098c6df25b9
| 2021-08-20T09:40:13Z |
python
| 2022-01-06T20:10:29Z |
lib/ansible/modules/systemd.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2016, Brian Coca <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
module: systemd
author:
- Ansible Core Team
version_added: "2.2"
short_description: Manage systemd units
description:
- Controls systemd units (services, timers, and so on) on remote hosts.
options:
name:
description:
- Name of the unit. This parameter takes the name of exactly one unit to work with.
- When no extension is given, a C(.service) suffix is implied, as systemd itself does.
- When using in a chroot environment you always need to specify the name of the unit with the extension. For example, C(crond.service).
type: str
aliases: [ service, unit ]
state:
description:
- C(started)/C(stopped) are idempotent actions that will not run commands unless necessary.
C(restarted) will always bounce the unit. C(reloaded) will always reload.
type: str
choices: [ reloaded, restarted, started, stopped ]
enabled:
description:
- Whether the unit should start on boot. B(At least one of state and enabled are required.)
type: bool
force:
description:
- Whether to override existing symlinks.
type: bool
version_added: 2.6
masked:
description:
- Whether the unit should be masked or not, a masked unit is impossible to start.
type: bool
daemon_reload:
description:
- Run daemon-reload before doing any other operations, to make sure systemd has read any changes.
- When set to C(true), runs daemon-reload even if the module does not start or stop anything.
type: bool
default: no
aliases: [ daemon-reload ]
daemon_reexec:
description:
- Run the daemon_reexec command before doing any other operations; the systemd manager will serialize the manager state.
type: bool
default: no
aliases: [ daemon-reexec ]
version_added: "2.8"
scope:
description:
- Run systemctl within a given service manager scope, either as the default system scope C(system),
the current user's scope C(user), or the scope of all users C(global).
- "For systemd to work with 'user', the executing user must have its own instance of dbus started and accessible (systemd requirement)."
- "The user dbus process is normally started during normal login, but not during the run of Ansible tasks.
Otherwise you will probably get a 'Failed to connect to bus: no such file or directory' error."
- The user must have access, normally given via setting the C(XDG_RUNTIME_DIR) variable, see example below.
type: str
choices: [ system, user, global ]
default: system
version_added: "2.7"
no_block:
description:
- Do not synchronously wait for the requested operation to finish.
Enqueued job will continue without Ansible blocking on its completion.
type: bool
default: no
version_added: "2.3"
extends_documentation_fragment: action_common_attributes
attributes:
check_mode:
support: full
diff_mode:
support: none
platform:
platforms: posix
notes:
- Since 2.4, one of the following options is required C(state), C(enabled), C(masked), C(daemon_reload), (C(daemon_reexec) since 2.8),
and all except C(daemon_reload) and (C(daemon_reexec) since 2.8) also require C(name).
- Before 2.4 you always required C(name).
- Globs are not supported in name, e.g. C(postgres*.service).
- The service names might vary by specific OS/distribution
requirements:
- A system managed by systemd.
'''
EXAMPLES = '''
- name: Make sure a service unit is running
ansible.builtin.systemd:
state: started
name: httpd
- name: Stop service cron on debian, if running
ansible.builtin.systemd:
name: cron
state: stopped
- name: Restart service cron on centos, in all cases, also issue daemon-reload to pick up config changes
ansible.builtin.systemd:
state: restarted
daemon_reload: yes
name: crond
- name: Reload service httpd, in all cases
ansible.builtin.systemd:
name: httpd.service
state: reloaded
- name: Enable service httpd and ensure it is not masked
ansible.builtin.systemd:
name: httpd
enabled: yes
masked: no
- name: Enable a timer unit for dnf-automatic
ansible.builtin.systemd:
name: dnf-automatic.timer
state: started
enabled: yes
- name: Just force systemd to reread configs (2.4 and above)
ansible.builtin.systemd:
daemon_reload: yes
- name: Just force systemd to re-execute itself (2.8 and above)
ansible.builtin.systemd:
daemon_reexec: yes
- name: Run a user service when XDG_RUNTIME_DIR is not set on remote login
ansible.builtin.systemd:
name: myservice
state: started
scope: user
environment:
XDG_RUNTIME_DIR: "/run/user/{{ myuid }}"
'''
RETURN = '''
status:
description: A dictionary with the key=value pairs returned from C(systemctl show).
returned: success
type: complex
sample: {
"ActiveEnterTimestamp": "Sun 2016-05-15 18:28:49 EDT",
"ActiveEnterTimestampMonotonic": "8135942",
"ActiveExitTimestampMonotonic": "0",
"ActiveState": "active",
"After": "auditd.service systemd-user-sessions.service time-sync.target systemd-journald.socket basic.target system.slice",
"AllowIsolate": "no",
"Before": "shutdown.target multi-user.target",
"BlockIOAccounting": "no",
"BlockIOWeight": "1000",
"CPUAccounting": "no",
"CPUSchedulingPolicy": "0",
"CPUSchedulingPriority": "0",
"CPUSchedulingResetOnFork": "no",
"CPUShares": "1024",
"CanIsolate": "no",
"CanReload": "yes",
"CanStart": "yes",
"CanStop": "yes",
"CapabilityBoundingSet": "18446744073709551615",
"ConditionResult": "yes",
"ConditionTimestamp": "Sun 2016-05-15 18:28:49 EDT",
"ConditionTimestampMonotonic": "7902742",
"Conflicts": "shutdown.target",
"ControlGroup": "/system.slice/crond.service",
"ControlPID": "0",
"DefaultDependencies": "yes",
"Delegate": "no",
"Description": "Command Scheduler",
"DevicePolicy": "auto",
"EnvironmentFile": "/etc/sysconfig/crond (ignore_errors=no)",
"ExecMainCode": "0",
"ExecMainExitTimestampMonotonic": "0",
"ExecMainPID": "595",
"ExecMainStartTimestamp": "Sun 2016-05-15 18:28:49 EDT",
"ExecMainStartTimestampMonotonic": "8134990",
"ExecMainStatus": "0",
"ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }",
"ExecStart": "{ path=/usr/sbin/crond ; argv[]=/usr/sbin/crond -n $CRONDARGS ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }",
"FragmentPath": "/usr/lib/systemd/system/crond.service",
"GuessMainPID": "yes",
"IOScheduling": "0",
"Id": "crond.service",
"IgnoreOnIsolate": "no",
"IgnoreOnSnapshot": "no",
"IgnoreSIGPIPE": "yes",
"InactiveEnterTimestampMonotonic": "0",
"InactiveExitTimestamp": "Sun 2016-05-15 18:28:49 EDT",
"InactiveExitTimestampMonotonic": "8135942",
"JobTimeoutUSec": "0",
"KillMode": "process",
"KillSignal": "15",
"LimitAS": "18446744073709551615",
"LimitCORE": "18446744073709551615",
"LimitCPU": "18446744073709551615",
"LimitDATA": "18446744073709551615",
"LimitFSIZE": "18446744073709551615",
"LimitLOCKS": "18446744073709551615",
"LimitMEMLOCK": "65536",
"LimitMSGQUEUE": "819200",
"LimitNICE": "0",
"LimitNOFILE": "4096",
"LimitNPROC": "3902",
"LimitRSS": "18446744073709551615",
"LimitRTPRIO": "0",
"LimitRTTIME": "18446744073709551615",
"LimitSIGPENDING": "3902",
"LimitSTACK": "18446744073709551615",
"LoadState": "loaded",
"MainPID": "595",
"MemoryAccounting": "no",
"MemoryLimit": "18446744073709551615",
"MountFlags": "0",
"Names": "crond.service",
"NeedDaemonReload": "no",
"Nice": "0",
"NoNewPrivileges": "no",
"NonBlocking": "no",
"NotifyAccess": "none",
"OOMScoreAdjust": "0",
"OnFailureIsolate": "no",
"PermissionsStartOnly": "no",
"PrivateNetwork": "no",
"PrivateTmp": "no",
"RefuseManualStart": "no",
"RefuseManualStop": "no",
"RemainAfterExit": "no",
"Requires": "basic.target",
"Restart": "no",
"RestartUSec": "100ms",
"Result": "success",
"RootDirectoryStartOnly": "no",
"SameProcessGroup": "no",
"SecureBits": "0",
"SendSIGHUP": "no",
"SendSIGKILL": "yes",
"Slice": "system.slice",
"StandardError": "inherit",
"StandardInput": "null",
"StandardOutput": "journal",
"StartLimitAction": "none",
"StartLimitBurst": "5",
"StartLimitInterval": "10000000",
"StatusErrno": "0",
"StopWhenUnneeded": "no",
"SubState": "running",
"SyslogLevelPrefix": "yes",
"SyslogPriority": "30",
"TTYReset": "no",
"TTYVHangup": "no",
"TTYVTDisallocate": "no",
"TimeoutStartUSec": "1min 30s",
"TimeoutStopUSec": "1min 30s",
"TimerSlackNSec": "50000",
"Transient": "no",
"Type": "simple",
"UMask": "0022",
"UnitFileState": "enabled",
"WantedBy": "multi-user.target",
"Wants": "system.slice",
"WatchdogTimestampMonotonic": "0",
"WatchdogUSec": "0",
}
''' # NOQA
import os
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.facts.system.chroot import is_chroot
from ansible.module_utils.service import sysv_exists, sysv_is_enabled, fail_if_missing
from ansible.module_utils._text import to_native
def is_running_service(service_status):
return service_status['ActiveState'] in set(['active', 'activating'])
def is_deactivating_service(service_status):
return service_status['ActiveState'] in set(['deactivating'])
def request_was_ignored(out):
return '=' not in out and ('ignoring request' in out or 'ignoring command' in out)
def parse_systemctl_show(lines):
# The output of 'systemctl show' can contain values that span multiple lines. At first glance it
# appears that such values are always surrounded by {}, so the previous version of this code
# assumed that any value starting with { was a multi-line value; it would then consume lines
# until it saw a line that ended with }. However, it is possible to have a single-line value
# that starts with { but does not end with } (this could happen in the value for Description=,
# for example), and the previous version of this code would then consume all remaining lines as
# part of that value. Cryptically, this would lead to Ansible reporting that the service file
# couldn't be found.
#
# To avoid this issue, the following code only accepts multi-line values for keys whose names
# start with Exec (e.g., ExecStart=), since these are the only keys whose values are known to
# span multiple lines.
parsed = {}
multival = []
k = None
for line in lines:
if k is None:
if '=' in line:
k, v = line.split('=', 1)
if k.startswith('Exec') and v.lstrip().startswith('{'):
if not v.rstrip().endswith('}'):
multival.append(v)
continue
parsed[k] = v.strip()
k = None
else:
multival.append(line)
if line.rstrip().endswith('}'):
parsed[k] = '\n'.join(multival).strip()
multival = []
k = None
return parsed
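# Illustrative example of the rule above (only Exec* values may span lines):
#   Id=crond.service
#   ExecStart={ path=/usr/sbin/crond ; argv[]=/usr/sbin/crond -n $CRONDARGS ;
#   ... ; status=0/0 }
#   Description=Command {Scheduler
# parses into single-line values for Id and Description, while the two
# ExecStart lines are joined into one '{ ... }' value.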
# ===========================================
# Main control flow
def main():
# initialize
module = AnsibleModule(
argument_spec=dict(
name=dict(type='str', aliases=['service', 'unit']),
state=dict(type='str', choices=['reloaded', 'restarted', 'started', 'stopped']),
enabled=dict(type='bool'),
force=dict(type='bool'),
masked=dict(type='bool'),
daemon_reload=dict(type='bool', default=False, aliases=['daemon-reload']),
daemon_reexec=dict(type='bool', default=False, aliases=['daemon-reexec']),
scope=dict(type='str', default='system', choices=['system', 'user', 'global']),
no_block=dict(type='bool', default=False),
),
supports_check_mode=True,
required_one_of=[['state', 'enabled', 'masked', 'daemon_reload', 'daemon_reexec']],
required_by=dict(
state=('name', ),
enabled=('name', ),
masked=('name', ),
),
)
unit = module.params['name']
if unit is not None:
for globpattern in (r"*", r"?", r"["):
if globpattern in unit:
module.fail_json(msg="This module does not currently support using glob patterns, found '%s' in service name: %s" % (globpattern, unit))
systemctl = module.get_bin_path('systemctl', True)
if os.getenv('XDG_RUNTIME_DIR') is None:
os.environ['XDG_RUNTIME_DIR'] = '/run/user/%s' % os.geteuid()
''' Set CLI options depending on params '''
# if scope is 'system' or None, we can ignore as there is no extra switch.
# The other choices match the corresponding switch
if module.params['scope'] != 'system':
systemctl += " --%s" % module.params['scope']
if module.params['no_block']:
systemctl += " --no-block"
if module.params['force']:
systemctl += " --force"
rc = 0
out = err = ''
result = dict(
name=unit,
changed=False,
status=dict(),
)
# Run daemon-reload first, if requested
if module.params['daemon_reload'] and not module.check_mode:
(rc, out, err) = module.run_command("%s daemon-reload" % (systemctl))
if rc != 0:
module.fail_json(msg='failure %d during daemon-reload: %s' % (rc, err))
# Run daemon-reexec
if module.params['daemon_reexec'] and not module.check_mode:
(rc, out, err) = module.run_command("%s daemon-reexec" % (systemctl))
if rc != 0:
module.fail_json(msg='failure %d during daemon-reexec: %s' % (rc, err))
if unit:
found = False
is_initd = sysv_exists(unit)
is_systemd = False
# check service data, cannot error out on rc as it changes across versions, assume not found
(rc, out, err) = module.run_command("%s show '%s'" % (systemctl, unit))
if rc == 0 and not (request_was_ignored(out) or request_was_ignored(err)):
# load return of systemctl show into dictionary for easy access and return
if out:
result['status'] = parse_systemctl_show(to_native(out).split('\n'))
is_systemd = 'LoadState' in result['status'] and result['status']['LoadState'] != 'not-found'
is_masked = 'LoadState' in result['status'] and result['status']['LoadState'] == 'masked'
# Check for loading error
if is_systemd and not is_masked and 'LoadError' in result['status']:
module.fail_json(msg="Error loading unit file '%s': %s" % (unit, result['status']['LoadError']))
# Workaround for https://github.com/ansible/ansible/issues/71528
elif err and rc == 1 and 'Failed to parse bus message' in err:
result['status'] = parse_systemctl_show(to_native(out).split('\n'))
unit_base, sep, suffix = unit.partition('@')
unit_search = '{unit_base}{sep}'.format(unit_base=unit_base, sep=sep)
(rc, out, err) = module.run_command("{systemctl} list-unit-files '{unit_search}*'".format(systemctl=systemctl, unit_search=unit_search))
is_systemd = unit_search in out
(rc, out, err) = module.run_command("{systemctl} is-active '{unit}'".format(systemctl=systemctl, unit=unit))
result['status']['ActiveState'] = out.rstrip('\n')
else:
# list taken from man systemctl(1) for systemd 244
valid_enabled_states = [
"enabled",
"enabled-runtime",
"linked",
"linked-runtime",
"masked",
"masked-runtime",
"static",
"indirect",
"disabled",
"generated",
"transient"]
(rc, out, err) = module.run_command("%s is-enabled '%s'" % (systemctl, unit))
if out.strip() in valid_enabled_states:
is_systemd = True
else:
# fallback list-unit-files as show does not work on some systems (chroot)
# not used as primary as it skips some services (like those using init.d) and requires .service/etc notation
(rc, out, err) = module.run_command("%s list-unit-files '%s'" % (systemctl, unit))
if rc == 0:
is_systemd = True
else:
# Check for systemctl command
module.run_command(systemctl, check_rc=True)
# Does service exist?
found = is_systemd or is_initd
if is_initd and not is_systemd:
module.warn('The service (%s) is actually an init script but the system is managed by systemd' % unit)
# mask/unmask the service, if requested, can operate on services before they are installed
if module.params['masked'] is not None:
# state is not masked unless systemd affirms otherwise
(rc, out, err) = module.run_command("%s is-enabled '%s'" % (systemctl, unit))
masked = out.strip() == "masked"
if masked != module.params['masked']:
result['changed'] = True
if module.params['masked']:
action = 'mask'
else:
action = 'unmask'
if not module.check_mode:
(rc, out, err) = module.run_command("%s %s '%s'" % (systemctl, action, unit))
if rc != 0:
# some versions of systemd CAN mask/unmask non-existing services; we only fail on a missing unit if they don't
fail_if_missing(module, found, unit, msg='host')
# Enable/disable service startup at boot if requested
if module.params['enabled'] is not None:
if module.params['enabled']:
action = 'enable'
else:
action = 'disable'
fail_if_missing(module, found, unit, msg='host')
# do we need to enable the service?
enabled = False
(rc, out, err) = module.run_command("%s is-enabled '%s' -l" % (systemctl, unit))
# check systemctl result or if it is a init script
if rc == 0:
enabled = True
# Check if service is indirect and if out contains exactly 1 line of string 'indirect' it's disabled
if out.splitlines() == ["indirect"]:
enabled = False
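# NOTE (issue #75538): newer systemd (e.g. 247 on Debian 11) can also print
# 'alias' here with rc == 0 for a unit that is a symlink to another unit;
# taking that as "enabled" makes disabling non-idempotent. Resolving the
# canonical unit name (the 'Id' from `systemctl show`) before this check is
# one possible fix.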
elif rc == 1:
# if not a user or global user service and both init script and unit file exist stdout should have enabled/disabled, otherwise use rc entries
if module.params['scope'] == 'system' and \
is_initd and \
not out.strip().endswith('disabled') and \
sysv_is_enabled(unit):
enabled = True
# default to current state
result['enabled'] = enabled
# Change enable/disable if needed
if enabled != module.params['enabled']:
result['changed'] = True
if not module.check_mode:
(rc, out, err) = module.run_command("%s %s '%s'" % (systemctl, action, unit))
if rc != 0:
module.fail_json(msg="Unable to %s service %s: %s" % (action, unit, out + err))
result['enabled'] = not enabled
# set service state if requested
if module.params['state'] is not None:
fail_if_missing(module, found, unit, msg="host")
# default to desired state
result['state'] = module.params['state']
# What is current service state?
if 'ActiveState' in result['status']:
action = None
if module.params['state'] == 'started':
if not is_running_service(result['status']):
action = 'start'
elif module.params['state'] == 'stopped':
if is_running_service(result['status']) or is_deactivating_service(result['status']):
action = 'stop'
else:
if not is_running_service(result['status']):
action = 'start'
else:
action = module.params['state'][:-2] # remove 'ed' from restarted/reloaded
result['state'] = 'started'
if action:
result['changed'] = True
if not module.check_mode:
(rc, out, err) = module.run_command("%s %s '%s'" % (systemctl, action, unit))
if rc != 0:
module.fail_json(msg="Unable to %s service %s: %s" % (action, unit, err))
# check for chroot
elif is_chroot(module) or os.environ.get('SYSTEMD_OFFLINE') == '1':
module.warn("Target is a chroot or systemd is offline. This can lead to false positives or prevent the init system tools from working.")
else:
# this should not happen?
module.fail_json(msg="Service is in unknown state", status=result['status'])
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,610 |
Templating empty template file results in file with string 'None'
|
### Summary
When templating something with the `template` module that results in a blank / empty output, the resulting file contains the string `None`.
I noticed when I templated a config file
```
{# comment #}
{% if expression %}
... something ...
{% endif %}
```
and the resulting file contained the four bytes of `None` in the cases where the expression evaluated to `false`.
### Issue Type
Bug Report
### Component Name
template
### Ansible Version
```console
devel
```
### Configuration
```console
-
```
### OS / Environment
-
### Steps to Reproduce
Playbook:
```yaml
- hosts: localhost
tasks:
- template:
src: test-template.j2
dest: test-template.txt
```
test-template.j2:
```
{# test #}
```
Or simply an empty file. Adding more Jinja2 statements that do not contribute anything to the output also yields the same result.
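A quick way to verify (a hypothetical ad-hoc invocation, reusing the file names above):
```console
ansible localhost -m ansible.builtin.template -a "src=test-template.j2 dest=test-template.txt"
hexdump -C test-template.txt   # shows the four bytes 4e 6f 6e 65 ('None')
```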
### Expected Results
test-template.txt is blank
### Actual Results
```console
test-template.txt contains the string `None`.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76610
|
https://github.com/ansible/ansible/pull/76634
|
b984dd9c59dd43d50f5aafa91cfde29405f2fbd9
|
094a0746b31f7ce9879dfa6fb2972930eb1cbab2
| 2021-12-26T12:18:06Z |
python
| 2022-01-07T16:13:34Z |
changelogs/fragments/76610-empty-template-none.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,610 |
Templating empty template file results in file with string 'None'
|
### Summary
When templating something with the `template` module that results in a blank / empty output, the resulting file contains the string `None`.
I noticed when I templated a config file
```
{# comment #}
{% if expression %}
... something ...
{% endif %}
```
and the resulting file contained the four bytes of `None` in the cases where the expression evaluated to `false`.
### Issue Type
Bug Report
### Component Name
template
### Ansible Version
```console
devel
```
### Configuration
```console
-
```
### OS / Environment
-
### Steps to Reproduce
Playbook:
```yaml
- hosts: localhost
tasks:
- template:
src: test-template.j2
dest: test-template.txt
```
test-template.j2:
```
{# test #}
```
Or simply an empty file. Adding more Jinja2 statements that do not contribute anything to the output also yields the same result.
### Expected Results
test-template.txt is blank
### Actual Results
```console
test-template.txt contains the string `None`.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76610
|
https://github.com/ansible/ansible/pull/76634
|
b984dd9c59dd43d50f5aafa91cfde29405f2fbd9
|
094a0746b31f7ce9879dfa6fb2972930eb1cbab2
| 2021-12-26T12:18:06Z |
python
| 2022-01-07T16:13:34Z |
lib/ansible/template/native_helpers.py
|
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import ast
from itertools import islice, chain
from jinja2.runtime import StrictUndefined
from ansible.module_utils._text import to_text
from ansible.module_utils.common.collections import is_sequence, Mapping
from ansible.module_utils.six import string_types
from ansible.parsing.yaml.objects import AnsibleVaultEncryptedUnicode
from ansible.utils.native_jinja import NativeJinjaText
from ansible.utils.unsafe_proxy import wrap_var
_JSON_MAP = {
"true": True,
"false": False,
"null": None,
}
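# Json2Python rewrites the JSON literals above to their Python equivalents so
# that JSON-looking template output can be parsed by ast.literal_eval below.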
class Json2Python(ast.NodeTransformer):
def visit_Name(self, node):
if node.id not in _JSON_MAP:
return node
return ast.Constant(value=_JSON_MAP[node.id])
def _fail_on_undefined(data):
"""Recursively find an undefined value in a nested data structure
and properly raise the undefined exception.
"""
if isinstance(data, Mapping):
for value in data.values():
_fail_on_undefined(value)
elif is_sequence(data):
for item in data:
_fail_on_undefined(item)
else:
if isinstance(data, StrictUndefined):
# To actually raise the undefined exception we need to
# access the undefined object otherwise the exception would
# be raised on the next access which might not be properly
# handled.
# See https://github.com/ansible/ansible/issues/52158
# and StrictUndefined implementation in upstream Jinja2.
str(data)
return data
def ansible_concat(nodes, convert_data, variable_start_string):
head = list(islice(nodes, 2))
if not head:
return None
if len(head) == 1:
out = _fail_on_undefined(head[0])
if isinstance(out, NativeJinjaText):
return out
out = to_text(out)
else:
out = ''.join([to_text(_fail_on_undefined(v)) for v in chain(head, nodes)])
if not convert_data:
return out
# if this looks like a dictionary, list or bool, convert it to such
do_eval = (
(
out.startswith(('{', '[')) and
not out.startswith(variable_start_string)
) or
out in ('True', 'False')
)
if do_eval:
unsafe = hasattr(out, '__UNSAFE__')
try:
out = ast.literal_eval(
ast.fix_missing_locations(
Json2Python().visit(
ast.parse(out, mode='eval')
)
)
)
except (ValueError, SyntaxError, MemoryError):
pass
else:
if unsafe:
out = wrap_var(out)
return out
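# Illustration of the behaviour reported in #76610 (comment added for clarity,
# not part of the original file): with no nodes at all, the early `return None`
# above is hit, and callers that stringify the result end up writing the
# literal text 'None'.
#
#   >>> ansible_concat(iter([]), convert_data=False, variable_start_string='{{')
#   # -> None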
def ansible_native_concat(nodes):
"""Return a native Python type from the list of compiled nodes. If the
result is a single node, its value is returned. Otherwise, the nodes are
concatenated as strings. If the result can be parsed with
:func:`ast.literal_eval`, the parsed value is returned. Otherwise, the
string is returned.
https://github.com/pallets/jinja/blob/master/src/jinja2/nativetypes.py
"""
head = list(islice(nodes, 2))
if not head:
return None
if len(head) == 1:
out = _fail_on_undefined(head[0])
# TODO send unvaulted data to literal_eval?
if isinstance(out, AnsibleVaultEncryptedUnicode):
return out.data
if isinstance(out, NativeJinjaText):
# Sometimes (e.g. ``| string``) we need to mark variables
# in a special way so that they remain strings and are not
# passed into literal_eval.
# See:
# https://github.com/ansible/ansible/issues/70831
# https://github.com/pallets/jinja/issues/1200
# https://github.com/ansible/ansible/issues/70831#issuecomment-664190894
return out
# short-circuit literal_eval for anything other than strings
if not isinstance(out, string_types):
return out
else:
out = ''.join([to_text(_fail_on_undefined(v)) for v in chain(head, nodes)])
try:
return ast.literal_eval(
# In Python 3.10+ ast.literal_eval removes leading spaces/tabs
# from the given string. For backwards compatibility we need to
# parse the string ourselves without removing leading spaces/tabs.
ast.parse(out, mode='eval')
)
except (ValueError, SyntaxError, MemoryError):
return out
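# A hedged sketch (not part of the original file) of one way a caller could
# guard against the empty-template case from #76610; the actual change made
# in PR 76634 may differ in detail:
#
#   res = ansible_concat(nodes, convert_data, variable_start_string)
#   if res is None:
#       res = ''  # render an empty template as an empty string, not 'None'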
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,610 |
Templating empty template file results in file with string 'None'
|
### Summary
When templating something with the `template` module that results in a blank / empty output, the resulting file contains the string `None`.
I noticed when I templated a config file
```
{# comment #}
{% if expression %}
... something ...
{% endif %}
```
and the resulting file contained the four bytes of `None` in the cases where the expression evaluated to `false`.
### Issue Type
Bug Report
### Component Name
template
### Ansible Version
```console
devel
```
### Configuration
```console
-
```
### OS / Environment
-
### Steps to Reproduce
Playbook:
```yaml
- hosts: localhost
tasks:
- template:
src: test-template.j2
dest: test-template.txt
```
test-template.j2:
```
{# test #}
```
Or simply an empty file. Adding more Jinja2 statements that do not contribute anything to the output also yields the same result.
### Expected Results
test-template.txt is blank
### Actual Results
```console
test-template.txt contains the string `None`.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76610
|
https://github.com/ansible/ansible/pull/76634
|
b984dd9c59dd43d50f5aafa91cfde29405f2fbd9
|
094a0746b31f7ce9879dfa6fb2972930eb1cbab2
| 2021-12-26T12:18:06Z |
python
| 2022-01-07T16:13:34Z |
test/integration/targets/template/tasks/main.yml
|
# test code for the template module
# (c) 2014, Michael DeHaan <[email protected]>
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
- set_fact:
output_dir: "{{ lookup('env', 'OUTPUT_DIR') }}"
- name: show python interpreter
debug:
msg: "{{ ansible_python['executable'] }}"
- name: show jinja2 version
debug:
msg: "{{ lookup('pipe', '{{ ansible_python[\"executable\"] }} -c \"import jinja2; print(jinja2.__version__)\"') }}"
- name: get default group
shell: id -gn
register: group
- name: fill in a basic template
template: src=foo.j2 dest={{output_dir}}/foo.templated mode=0644
register: template_result
- assert:
that:
- "'changed' in template_result"
- "'dest' in template_result"
- "'group' in template_result"
- "'gid' in template_result"
- "'md5sum' in template_result"
- "'checksum' in template_result"
- "'owner' in template_result"
- "'size' in template_result"
- "'src' in template_result"
- "'state' in template_result"
- "'uid' in template_result"
- name: verify that the file was marked as changed
assert:
that:
- "template_result.changed == true"
# Basic template with non-ascii names
- name: Check that non-ascii source and dest work
template:
src: 'café.j2'
dest: '{{ output_dir }}/café.txt'
register: template_results
- name: Check that the resulting file exists
stat:
path: '{{ output_dir }}/café.txt'
register: stat_results
- name: Check that template created the right file
assert:
that:
- 'template_results is changed'
- 'stat_results.stat["exists"]'
# test for import with context on jinja-2.9 See https://github.com/ansible/ansible/issues/20494
- name: fill in a template using import with context ala issue 20494
template: src=import_with_context.j2 dest={{output_dir}}/import_with_context.templated mode=0644
register: template_result
- name: copy known good import_with_context.expected into place
copy: src=import_with_context.expected dest={{output_dir}}/import_with_context.expected
- name: compare templated file to known good import_with_context
shell: diff -uw {{output_dir}}/import_with_context.templated {{output_dir}}/import_with_context.expected
register: diff_result
- name: verify templated import_with_context matches known good
assert:
that:
- 'diff_result.stdout == ""'
- "diff_result.rc == 0"
# test for nested include https://github.com/ansible/ansible/issues/34886
- name: test if parent variables are defined in nested include
template: src=for_loop.j2 dest={{output_dir}}/for_loop.templated mode=0644
- name: save templated output
shell: "cat {{output_dir}}/for_loop.templated"
register: for_loop_out
- debug: var=for_loop_out
- name: verify variables got templated
assert:
that:
- '"foo" in for_loop_out.stdout'
- '"bar" in for_loop_out.stdout'
- '"bam" in for_loop_out.stdout'
# test for 'import as' on jinja-2.9 See https://github.com/ansible/ansible/issues/20494
- name: fill in a template using import as ala fails2 case in issue 20494
template: src=import_as.j2 dest={{output_dir}}/import_as.templated mode=0644
register: import_as_template_result
- name: copy known good import_as.expected into place
copy: src=import_as.expected dest={{output_dir}}/import_as.expected
- name: compare templated file to known good import_as
shell: diff -uw {{output_dir}}/import_as.templated {{output_dir}}/import_as.expected
register: import_as_diff_result
- name: verify templated import_as matches known good
assert:
that:
- 'import_as_diff_result.stdout == ""'
- "import_as_diff_result.rc == 0"
# test for 'import as with context' on jinja-2.9 See https://github.com/ansible/ansible/issues/20494
- name: fill in a template using import as with context ala fails2 case in issue 20494
template: src=import_as_with_context.j2 dest={{output_dir}}/import_as_with_context.templated mode=0644
register: import_as_with_context_template_result
- name: copy known good import_as_with_context.expected into place
copy: src=import_as_with_context.expected dest={{output_dir}}/import_as_with_context.expected
- name: compare templated file to known good import_as_with_context
shell: diff -uw {{output_dir}}/import_as_with_context.templated {{output_dir}}/import_as_with_context.expected
register: import_as_with_context_diff_result
- name: verify templated import_as_with_context matches known good
assert:
that:
- 'import_as_with_context_diff_result.stdout == ""'
- "import_as_with_context_diff_result.rc == 0"
# VERIFY comment_start_string and comment_end_string
- name: Render a template with "comment_start_string" set to [#
template:
src: custom_comment_string.j2
dest: "{{output_dir}}/custom_comment_string.templated"
comment_start_string: "[#"
comment_end_string: "#]"
register: custom_comment_string_result
- name: Get checksum of known good custom_comment_string.expected
stat:
path: "{{role_path}}/files/custom_comment_string.expected"
register: custom_comment_string_good
- name: Verify templated custom_comment_string matches known good using checksum
assert:
that:
- "custom_comment_string_result.checksum == custom_comment_string_good.stat.checksum"
# VERIFY trim_blocks
- name: Render a template with "trim_blocks" set to False
template:
src: trim_blocks.j2
dest: "{{output_dir}}/trim_blocks_false.templated"
trim_blocks: False
register: trim_blocks_false_result
- name: Get checksum of known good trim_blocks_false.expected
stat:
path: "{{role_path}}/files/trim_blocks_false.expected"
register: trim_blocks_false_good
- name: Verify templated trim_blocks_false matches known good using checksum
assert:
that:
- "trim_blocks_false_result.checksum == trim_blocks_false_good.stat.checksum"
- name: Render a template with "trim_blocks" set to True
template:
src: trim_blocks.j2
dest: "{{output_dir}}/trim_blocks_true.templated"
trim_blocks: True
register: trim_blocks_true_result
- name: Get checksum of known good trim_blocks_true.expected
stat:
path: "{{role_path}}/files/trim_blocks_true.expected"
register: trim_blocks_true_good
- name: Verify templated trim_blocks_true matches known good using checksum
assert:
that:
- "trim_blocks_true_result.checksum == trim_blocks_true_good.stat.checksum"
# VERIFY lstrip_blocks
- name: Render a template with "lstrip_blocks" set to False
template:
src: lstrip_blocks.j2
dest: "{{output_dir}}/lstrip_blocks_false.templated"
lstrip_blocks: False
register: lstrip_blocks_false_result
- name: Get checksum of known good lstrip_blocks_false.expected
stat:
path: "{{role_path}}/files/lstrip_blocks_false.expected"
register: lstrip_blocks_false_good
- name: Verify templated lstrip_blocks_false matches known good using checksum
assert:
that:
- "lstrip_blocks_false_result.checksum == lstrip_blocks_false_good.stat.checksum"
- name: Render a template with "lstrip_blocks" set to True
template:
src: lstrip_blocks.j2
dest: "{{output_dir}}/lstrip_blocks_true.templated"
lstrip_blocks: True
register: lstrip_blocks_true_result
ignore_errors: True
- name: Get checksum of known good lstrip_blocks_true.expected
stat:
path: "{{role_path}}/files/lstrip_blocks_true.expected"
register: lstrip_blocks_true_good
- name: Verify templated lstrip_blocks_true matches known good using checksum
assert:
that:
- "lstrip_blocks_true_result.checksum == lstrip_blocks_true_good.stat.checksum"
# VERIFY CONTENTS
- name: check what python version ansible is running on
command: "{{ ansible_python.executable }} -c 'import distutils.sysconfig ; print(distutils.sysconfig.get_python_version())'"
register: pyver
delegate_to: localhost
- name: copy known good into place
copy: src=foo.txt dest={{output_dir}}/foo.txt
- name: compare templated file to known good
shell: diff -uw {{output_dir}}/foo.templated {{output_dir}}/foo.txt
register: diff_result
- name: verify templated file matches known good
assert:
that:
- 'diff_result.stdout == ""'
- "diff_result.rc == 0"
# VERIFY MODE
- name: set file mode
file: path={{output_dir}}/foo.templated mode=0644
register: file_result
- name: ensure file mode did not change
assert:
that:
- "file_result.changed != True"
# VERIFY dest as a directory does not break file attributes
# Note: expanduser is needed to go down the particular codepath that was broken before
- name: setup directory for test
file: state=directory dest={{output_dir | expanduser}}/template-dir mode=0755 owner=nobody group={{ group.stdout }}
- name: set file mode when the destination is a directory
template: src=foo.j2 dest={{output_dir | expanduser}}/template-dir/ mode=0600 owner=root group={{ group.stdout }}
- name: set file mode when the destination is a directory
template: src=foo.j2 dest={{output_dir | expanduser}}/template-dir/ mode=0600 owner=root group={{ group.stdout }}
register: file_result
- name: check that the file has the correct attributes
stat: path={{output_dir | expanduser}}/template-dir/foo.j2
register: file_attrs
- assert:
that:
- "file_attrs.stat.uid == 0"
- "file_attrs.stat.pw_name == 'root'"
- "file_attrs.stat.mode == '0600'"
- name: check that the containing directory did not change attributes
stat: path={{output_dir | expanduser}}/template-dir/
register: dir_attrs
- assert:
that:
- "dir_attrs.stat.uid != 0"
- "dir_attrs.stat.pw_name == 'nobody'"
- "dir_attrs.stat.mode == '0755'"
- name: Check that template to a directory where the directory does not end with a / is allowed
template: src=foo.j2 dest={{output_dir | expanduser}}/template-dir mode=0600 owner=root group={{ group.stdout }}
- name: make a symlink to the templated file
file:
path: '{{ output_dir }}/foo.symlink'
src: '{{ output_dir }}/foo.templated'
state: link
- name: check that templating the symlink results in the file being templated
template:
src: foo.j2
dest: '{{output_dir}}/foo.symlink'
mode: 0600
follow: True
register: template_result
- assert:
that:
- "template_result.changed == True"
- name: check that the file has the correct attributes
stat: path={{output_dir | expanduser}}/template-dir/foo.j2
register: file_attrs
- assert:
that:
- "file_attrs.stat.mode == '0600'"
- name: check that templating the symlink again makes no changes
template:
src: foo.j2
dest: '{{output_dir}}/foo.symlink'
mode: 0600
follow: True
register: template_result
- assert:
that:
- "template_result.changed == False"
# Test strange filenames
- name: Create a temp dir for filename tests
file:
state: directory
dest: '{{ output_dir }}/filename-tests'
- name: create a file with an unusual filename
template:
src: foo.j2
dest: "{{ output_dir }}/filename-tests/foo t'e~m\\plated"
register: template_result
- assert:
that:
- "template_result.changed == True"
- name: check that the unusual filename was created
command: "ls {{ output_dir }}/filename-tests/"
register: unusual_results
- assert:
that:
- "\"foo t'e~m\\plated\" in unusual_results.stdout_lines"
- "{{unusual_results.stdout_lines| length}} == 1"
- name: check that the unusual filename can be checked for changes
template:
src: foo.j2
dest: "{{ output_dir }}/filename-tests/foo t'e~m\\plated"
register: template_result
- assert:
that:
- "template_result.changed == False"
# check_mode
- name: fill in a basic template in check mode
template: src=short.j2 dest={{output_dir}}/short.templated
register: template_result
check_mode: True
- name: check file exists
stat: path={{output_dir}}/short.templated
register: templated
- name: verify that the file was marked as changed in check mode but was not created
assert:
that:
- "not templated.stat.exists"
- "template_result is changed"
- name: fill in a basic template
template: src=short.j2 dest={{output_dir}}/short.templated
- name: fill in a basic template in check mode
template: src=short.j2 dest={{output_dir}}/short.templated
register: template_result
check_mode: True
- name: verify that the file was marked as not changed in check mode
assert:
that:
- "template_result is not changed"
- "'templated_var_loaded' in lookup('file', output_dir + '/short.templated')"
- name: change var for the template
set_fact:
templated_var: "changed"
- name: fill in a basic template with changed var in check mode
template: src=short.j2 dest={{output_dir}}/short.templated
register: template_result
check_mode: True
- name: verify that the file was marked as changed in check mode but the content was not changed
assert:
that:
- "'templated_var_loaded' in lookup('file', output_dir + '/short.templated')"
- "template_result is changed"
# Create a template using a child template, to ensure that variables
# are passed properly from the parent to subtemplate context (issue #20063)
- name: test parent and subtemplate creation of context
template: src=parent.j2 dest={{output_dir}}/parent_and_subtemplate.templated
register: template_result
- stat: path={{output_dir}}/parent_and_subtemplate.templated
- name: verify that the parent and subtemplate creation worked
assert:
that:
- "template_result is changed"
#
# template module can overwrite a file that's been hard linked
# https://github.com/ansible/ansible/issues/10834
#
- name: ensure test dir is absent
file:
path: '{{ output_dir | expanduser }}/hlink_dir'
state: absent
- name: create test dir
file:
path: '{{ output_dir | expanduser }}/hlink_dir'
state: directory
- name: template out test file to system 1
template:
src: foo.j2
dest: '{{ output_dir | expanduser }}/hlink_dir/test_file'
- name: make hard link
file:
src: '{{ output_dir | expanduser }}/hlink_dir/test_file'
dest: '{{ output_dir | expanduser }}/hlink_dir/test_file_hlink'
state: hard
- name: template out test file to system 2
template:
src: foo.j2
dest: '{{ output_dir | expanduser }}/hlink_dir/test_file'
register: hlink_result
- name: check that the files are still hardlinked
stat:
path: '{{ output_dir | expanduser }}/hlink_dir/test_file'
register: orig_file
- name: check that the files are still hardlinked
stat:
path: '{{ output_dir | expanduser }}/hlink_dir/test_file_hlink'
register: hlink_file
# We've done nothing at this point to update the content of the file so it should still be hardlinked
- assert:
that:
- "hlink_result.changed == False"
- "orig_file.stat.inode == hlink_file.stat.inode"
- name: change var for the template
set_fact:
templated_var: "templated_var_loaded"
# UNIX TEMPLATE
- name: fill in a basic template (Unix)
template:
src: foo2.j2
dest: '{{ output_dir }}/foo.unix.templated'
register: template_result
- name: verify that the file was marked as changed (Unix)
assert:
that:
- 'template_result is changed'
- name: fill in a basic template again (Unix)
template:
src: foo2.j2
dest: '{{ output_dir }}/foo.unix.templated'
register: template_result2
- name: verify that the template was not changed (Unix)
assert:
that:
- 'template_result2 is not changed'
# VERIFY UNIX CONTENTS
- name: copy known good into place (Unix)
copy:
src: foo.unix.txt
dest: '{{ output_dir }}/foo.unix.txt'
- name: Dump templated file (Unix)
command: hexdump -C {{ output_dir }}/foo.unix.templated
- name: Dump expected file (Unix)
command: hexdump -C {{ output_dir }}/foo.unix.txt
- name: compare templated file to known good (Unix)
command: diff -u {{ output_dir }}/foo.unix.templated {{ output_dir }}/foo.unix.txt
register: diff_result
- name: verify templated file matches known good (Unix)
assert:
that:
- 'diff_result.stdout == ""'
- "diff_result.rc == 0"
# DOS TEMPLATE
- name: fill in a basic template (DOS)
template:
src: foo2.j2
dest: '{{ output_dir }}/foo.dos.templated'
newline_sequence: '\r\n'
register: template_result
- name: verify that the file was marked as changed (DOS)
assert:
that:
- 'template_result is changed'
- name: fill in a basic template again (DOS)
template:
src: foo2.j2
dest: '{{ output_dir }}/foo.dos.templated'
newline_sequence: '\r\n'
register: template_result2
- name: verify that the template was not changed (DOS)
assert:
that:
- 'template_result2 is not changed'
# VERIFY DOS CONTENTS
- name: copy known good into place (DOS)
copy:
src: foo.dos.txt
dest: '{{ output_dir }}/foo.dos.txt'
- name: Dump templated file (DOS)
command: hexdump -C {{ output_dir }}/foo.dos.templated
- name: Dump expected file (DOS)
command: hexdump -C {{ output_dir }}/foo.dos.txt
- name: compare templated file to known good (DOS)
command: diff -u {{ output_dir }}/foo.dos.templated {{ output_dir }}/foo.dos.txt
register: diff_result
- name: verify templated file matches known good (DOS)
assert:
that:
- 'diff_result.stdout == ""'
- "diff_result.rc == 0"
# VERIFY UNIX CONTENTS (re-check the Unix outputs after the DOS run)
- name: copy known good into place (Unix)
copy:
src: foo.unix.txt
dest: '{{ output_dir }}/foo.unix.txt'
- name: Dump templated file (Unix)
command: hexdump -C {{ output_dir }}/foo.unix.templated
- name: Dump expected file (Unix)
command: hexdump -C {{ output_dir }}/foo.unix.txt
- name: compare templated file to known good (Unix)
command: diff -u {{ output_dir }}/foo.unix.templated {{ output_dir }}/foo.unix.txt
register: diff_result
- name: verify templated file matches known good (Unix)
assert:
that:
- 'diff_result.stdout == ""'
- "diff_result.rc == 0"
# Check that mode=preserve works with template
- name: Create a template which has strange permissions
copy:
content: !unsafe '{{ ansible_managed }}\n'
dest: '{{ output_dir }}/foo-template.j2'
mode: 0547
delegate_to: localhost
- name: Use template with mode=preserve
template:
src: '{{ output_dir }}/foo-template.j2'
dest: '{{ output_dir }}/foo-templated.txt'
mode: 'preserve'
register: template_results
- name: Get permissions from the templated file
stat:
path: '{{ output_dir }}/foo-templated.txt'
register: stat_results
- name: Check that the resulting file has the correct permissions
assert:
that:
- 'template_results is changed'
- 'template_results.mode == "0547"'
- 'stat_results.stat["mode"] == "0547"'
# Test output_encoding
- name: Prepare the list of encodings we want to check, including empty string for defaults
set_fact:
template_encoding_1252_encodings: ['', 'utf-8', 'windows-1252']
- name: Copy known good encoding_1252_*.expected into place
copy:
src: 'encoding_1252_{{ item | default("utf-8", true) }}.expected'
dest: '{{ output_dir }}/encoding_1252_{{ item }}.expected'
loop: '{{ template_encoding_1252_encodings }}'
- name: Generate the encoding_1252_* files from templates using various encoding combinations
template:
src: 'encoding_1252.j2'
dest: '{{ output_dir }}/encoding_1252_{{ item }}.txt'
output_encoding: '{{ item }}'
loop: '{{ template_encoding_1252_encodings }}'
- name: Compare the encoding_1252_* templated files to known good
command: diff -u {{ output_dir }}/encoding_1252_{{ item }}.expected {{ output_dir }}/encoding_1252_{{ item }}.txt
register: encoding_1252_diff_result
loop: '{{ template_encoding_1252_encodings }}'
- name: Check that nested undefined values return Undefined
vars:
dict_var:
bar: {}
list_var:
- foo: {}
assert:
that:
- dict_var is defined
- dict_var.bar is defined
- dict_var.bar.baz is not defined
- dict_var.bar.baz | default('DEFAULT') == 'DEFAULT'
- dict_var.bar.baz.abc is not defined
- dict_var.bar.baz.abc | default('DEFAULT') == 'DEFAULT'
- dict_var.baz is not defined
- dict_var.baz.abc is not defined
- dict_var.baz.abc | default('DEFAULT') == 'DEFAULT'
- list_var.0 is defined
- list_var.1 is not defined
- list_var.0.foo is defined
- list_var.0.foo.bar is not defined
- list_var.0.foo.bar | default('DEFAULT') == 'DEFAULT'
- list_var.1.foo is not defined
- list_var.1.foo | default('DEFAULT') == 'DEFAULT'
- dict_var is defined
- dict_var['bar'] is defined
- dict_var['bar']['baz'] is not defined
- dict_var['bar']['baz'] | default('DEFAULT') == 'DEFAULT'
- dict_var['bar']['baz']['abc'] is not defined
- dict_var['bar']['baz']['abc'] | default('DEFAULT') == 'DEFAULT'
- dict_var['baz'] is not defined
- dict_var['baz']['abc'] is not defined
- dict_var['baz']['abc'] | default('DEFAULT') == 'DEFAULT'
- list_var[0] is defined
- list_var[1] is not defined
- list_var[0]['foo'] is defined
- list_var[0]['foo']['bar'] is not defined
- list_var[0]['foo']['bar'] | default('DEFAULT') == 'DEFAULT'
- list_var[1]['foo'] is not defined
- list_var[1]['foo'] | default('DEFAULT') == 'DEFAULT'
- dict_var['bar'].baz is not defined
- dict_var['bar'].baz | default('DEFAULT') == 'DEFAULT'
- template:
src: template_destpath_test.j2
dest: "{{ output_dir }}/template_destpath.templated"
- copy:
content: "{{ output_dir}}/template_destpath.templated\n"
dest: "{{ output_dir }}/template_destpath.expected"
- name: compare templated file to known good template_destpath
shell: diff -uw {{output_dir}}/template_destpath.templated {{output_dir}}/template_destpath.expected
register: diff_result
- name: verify templated template_destpath matches known good
assert:
that:
- 'diff_result.stdout == ""'
- "diff_result.rc == 0"
- debug:
msg: "{{ 'x' in y }}"
ignore_errors: yes
register: error
- name: check that the proper error message is emitted when the `in` operator is used
assert:
that: "\"'y' is undefined\" in error.msg"
- template:
src: template_import_macro_globals.j2
dest: "{{ output_dir }}/template_import_macro_globals.templated"
- command: "cat {{ output_dir }}/template_import_macro_globals.templated"
register: out
- assert:
that:
- out.stdout == "bar=lookedup_bar"
# aliases file requires root for template tests so this should be safe
- import_tasks: backup_test.yml
- name: test STRING_TYPE_FILTERS
copy:
content: "{{ a_dict | to_nice_json(indent=(indent_value|int))}}\n"
dest: "{{ output_dir }}/string_type_filters.templated"
vars:
a_dict:
foo: bar
foobar: 1
indent_value: 2
- name: copy known good string_type_filters.expected into place
copy:
src: string_type_filters.expected
dest: "{{ output_dir }}/string_type_filters.expected"
- command: "diff {{ output_dir }}/string_type_filters.templated {{ output_dir}}/string_type_filters.expected"
register: out
- assert:
that:
- out.rc == 0
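# A sketch of a regression check for issue #76610; the fixture name mirrors the
# empty_template.j2 added by the fix, but the task wording here is illustrative.
- name: check that an empty/comment-only template renders an empty file
  template:
    src: empty_template.j2
    dest: "{{ output_dir }}/empty_template.templated"
- stat:
    path: "{{ output_dir }}/empty_template.templated"
  register: empty_templated
- assert:
    that:
      - empty_templated.stat.size == 0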
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,610 |
Templating empty template file results in file with string 'None'
|
### Summary
When templating something with the `template` module that results in a blank / empty output, the resulting file contains the string `None`.
I noticed when I templated a config file
```
{# comment #}
{% if expression %}
... something ...
{% endif %}
```
and the resulting file contained the four bytes of `None` in the cases where the expression evaluated to `false`.
### Issue Type
Bug Report
### Component Name
template
### Ansible Version
```console
devel
```
### Configuration
```console
-
```
### OS / Environment
-
### Steps to Reproduce
Playbook:
```yaml
- hosts: localhost
tasks:
- template:
src: test-template.j2
dest: test-template.txt
```
test-template.j2:
```
{# test #}
```
Or simply an empty file. Adding more Jinja2 statements that do not contribute anything to the output also yields the same result.
### Expected Results
test-template.txt is blank
### Actual Results
```console
test-template.txt contains the string `None`.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76610
|
https://github.com/ansible/ansible/pull/76634
|
b984dd9c59dd43d50f5aafa91cfde29405f2fbd9
|
094a0746b31f7ce9879dfa6fb2972930eb1cbab2
| 2021-12-26T12:18:06Z |
python
| 2022-01-07T16:13:34Z |
test/integration/targets/template/templates/empty_template.j2
|