Dataset schema (one record per file updated by the fixing PR):

- status: string (1 distinct value)
- repo_name: string (31 distinct values)
- repo_url: string (31 distinct values)
- issue_id: int64 (1 to 104k)
- title: string (4 to 369 chars)
- body: string (0 to 254k chars)
- issue_url: string (37 to 56 chars)
- pull_url: string (37 to 54 chars)
- before_fix_sha: string (40 chars)
- after_fix_sha: string (40 chars)
- report_datetime: timestamp[us, tz=UTC]
- language: string (5 distinct values)
- commit_datetime: timestamp[us, tz=UTC]
- updated_file: string (4 to 188 chars)
- file_content: string (0 to 5.12M chars)
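The records below all follow this schema. As a minimal sketch of how such a dump might be consumed, assuming the rows are available as a JSON-lines export (the filename is hypothetical, not part of the dump):

```python
# Minimal sketch: load the records with the Hugging Face `datasets` library.
# "issue_fix_records.jsonl" is an assumed export of this dump.
from datasets import load_dataset

ds = load_dataset("json", data_files="issue_fix_records.jsonl", split="train")

# Each record pairs one issue with one file touched by the fixing PR.
for row in ds:
    print(row["issue_id"], row["status"], row["updated_file"])
```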
status: closed
repo_name: ansible/ansible
repo_url: https://github.com/ansible/ansible
issue_id: 81013
title: Handlers from dependencies are not deduplicated in ansible-core 2.15.0
body:
### Summary
Handlers included from other roles via role dependencies are not deduplicated properly in ansible-core 2.15.0. This behaviour is specific to 2.15.0 and doesn't seem to be documented anywhere.
### Issue Type
Bug Report
### Component Name
ansible-core
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.0]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /tmp/ansible-test/lib64/python3.11/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /tmp/ansible-test/bin/ansible
python version = 3.11.2 (main, Apr 5 2023, 11:57:00) [GCC 8.5.0 20210514 (Red Hat 8.5.0-18)] (/tmp/ansible-test/bin/python3.11)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
AlmaLinux 8
### Steps to Reproduce
1. Create a "global" role with a handler:
```
# roles/global/handlers/main.yml
- name: a global handler
debug:
msg: "a global handler has been triggered"
listen: "global handler"
```
2. Create two roles with tasks that notify this handler, define the "global" role as a dependency in both roles:
```
# roles/role1/meta/main.yml
dependencies:
- global
```
```
# roles/role1/tasks/main.yml
- name: role1/task1
debug:
changed_when: true
notify: "global handler"
```
```
# roles/role2/meta/main.yml
dependencies:
- global
```
```
# roles/role2/tasks/main.yml
- name: role2/task1
debug:
changed_when: true
notify: "global handler"
```
3. Create a playbook:
```
# playbook.yml
- hosts: localhost
roles:
- role1
- role2
```
4. Resulting file tree:
```
.
├── playbook.yml
└── roles
    ├── global
    │   └── handlers
    │       └── main.yml
    ├── role1
    │   ├── meta
    │   │   └── main.yml
    │   └── tasks
    │       └── main.yml
    └── role2
        ├── meta
        │   └── main.yml
        └── tasks
            └── main.yml
```
5. Run the playbook with Ansible 2.15, verify that the handler has been invoked twice.
### Expected Results
Ansible 2.14 deduplicates the handler:
```
$ ansible-playbook --version
ansible-playbook [core 2.14.6]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /tmp/ansible-test/lib64/python3.11/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /tmp/ansible-test/bin/ansible-playbook
python version = 3.11.2 (main, Apr 5 2023, 11:57:00) [GCC 8.5.0 20210514 (Red Hat 8.5.0-18)] (/tmp/ansible-test/bin/python3.11)
jinja version = 3.1.2
libyaml = True
```
```
$ ansible-playbook playbook.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [localhost] ****************************************************************************************************************************************************************************
TASK [Gathering Facts] **********************************************************************************************************************************************************************
ok: [localhost]
TASK [role1 : role1/task1] ******************************************************************************************************************************************************************
changed: [localhost] => {
"msg": "Hello world!"
}
TASK [role2 : role2/task1] ******************************************************************************************************************************************************************
changed: [localhost] => {
"msg": "Hello world!"
}
RUNNING HANDLER [global : a global handler] *************************************************************************************************************************************************
ok: [localhost] => {
"msg": "a global handler has been triggered"
}
PLAY RECAP **********************************************************************************************************************************************************************************
localhost : ok=4 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
Ansible 2.15 runs this handler twice (see actual results).
### Actual Results
```console
$ ansible-playbook playbook.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [localhost] ****************************************************************************************************************************************************************************
TASK [Gathering Facts] **********************************************************************************************************************************************************************
ok: [localhost]
TASK [role1 : role1/task1] ******************************************************************************************************************************************************************
changed: [localhost] => {
"msg": "Hello world!"
}
TASK [role2 : role2/task1] ******************************************************************************************************************************************************************
changed: [localhost] => {
"msg": "Hello world!"
}
RUNNING HANDLER [global : a global handler] *************************************************************************************************************************************************
ok: [localhost] => {
"msg": "a global handler has been triggered"
}
RUNNING HANDLER [global : a global handler] *************************************************************************************************************************************************
ok: [localhost] => {
"msg": "a global handler has been triggered"
}
PLAY RECAP **********************************************************************************************************************************************************************************
localhost : ok=5 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
issue_url: https://github.com/ansible/ansible/issues/81013
pull_url: https://github.com/ansible/ansible/pull/81358
before_fix_sha: bd3ffbe10903125993d7d68fa8cfd687124a241f
after_fix_sha: 0cba3b7504c1aabe0fc3773e3ff3ac024edeb308
report_datetime: 2023-06-09T11:47:43Z
language: python
commit_datetime: 2023-08-15T13:03:56Z
updated_file: lib/ansible/plugins/strategy/__init__.py
file_content:
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import cmd
import functools
import os
import pprint
import queue
import sys
import threading
import time
import typing as t
from collections import deque
from multiprocessing import Lock
from jinja2.exceptions import UndefinedError
from ansible import constants as C
from ansible import context
from ansible.errors import AnsibleError, AnsibleFileNotFound, AnsibleUndefinedVariable, AnsibleParserError
from ansible.executor import action_write_locks
from ansible.executor.play_iterator import IteratingStates, PlayIterator
from ansible.executor.process.worker import WorkerProcess
from ansible.executor.task_result import TaskResult
from ansible.executor.task_queue_manager import CallbackSend, DisplaySend, PromptSend
from ansible.module_utils.six import string_types
from ansible.module_utils.common.text.converters import to_text
from ansible.module_utils.connection import Connection, ConnectionError
from ansible.playbook.conditional import Conditional
from ansible.playbook.handler import Handler
from ansible.playbook.helpers import load_list_of_blocks
from ansible.playbook.task import Task
from ansible.playbook.task_include import TaskInclude
from ansible.plugins import loader as plugin_loader
from ansible.template import Templar
from ansible.utils.display import Display
from ansible.utils.fqcn import add_internal_fqcns
from ansible.utils.unsafe_proxy import wrap_var
from ansible.utils.vars import combine_vars, isidentifier
from ansible.vars.clean import strip_internal_keys, module_response_deepcopy
display = Display()
__all__ = ['StrategyBase']
# Entries in this list are matched either exactly or as a prefix of the
# fact name; regexes are not accepted
ALWAYS_DELEGATE_FACT_PREFIXES = frozenset((
'discovered_interpreter_',
))
class StrategySentinel:
pass
_sentinel = StrategySentinel()
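# cleanup() pushes _sentinel onto the final queue to tell the results thread
# (results_thread_main) to shut down.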
def post_process_whens(result, task, templar, task_vars):
cond = None
if task.changed_when:
with templar.set_temporary_context(available_variables=task_vars):
cond = Conditional(loader=templar._loader)
cond.when = task.changed_when
result['changed'] = cond.evaluate_conditional(templar, templar.available_variables)
if task.failed_when:
with templar.set_temporary_context(available_variables=task_vars):
if cond is None:
cond = Conditional(loader=templar._loader)
cond.when = task.failed_when
failed_when_result = cond.evaluate_conditional(templar, templar.available_variables)
result['failed_when_result'] = result['failed'] = failed_when_result
def _get_item_vars(result, task):
item_vars = {}
if task.loop or task.loop_with:
loop_var = result.get('ansible_loop_var', 'item')
index_var = result.get('ansible_index_var')
if loop_var in result:
item_vars[loop_var] = result[loop_var]
if index_var and index_var in result:
item_vars[index_var] = result[index_var]
if '_ansible_item_label' in result:
item_vars['_ansible_item_label'] = result['_ansible_item_label']
if 'ansible_loop' in result:
item_vars['ansible_loop'] = result['ansible_loop']
return item_vars
def results_thread_main(strategy):
while True:
try:
result = strategy._final_q.get()
if isinstance(result, StrategySentinel):
break
elif isinstance(result, DisplaySend):
dmethod = getattr(display, result.method)
dmethod(*result.args, **result.kwargs)
elif isinstance(result, CallbackSend):
for arg in result.args:
if isinstance(arg, TaskResult):
strategy.normalize_task_result(arg)
break
strategy._tqm.send_callback(result.method_name, *result.args, **result.kwargs)
elif isinstance(result, TaskResult):
strategy.normalize_task_result(result)
with strategy._results_lock:
strategy._results.append(result)
elif isinstance(result, PromptSend):
try:
value = display.prompt_until(
result.prompt,
private=result.private,
seconds=result.seconds,
complete_input=result.complete_input,
interrupt_input=result.interrupt_input,
)
except AnsibleError as e:
value = e
except BaseException as e:
# relay unexpected errors so bugs in display are reported and don't cause workers to hang
try:
raise AnsibleError(f"{e}") from e
except AnsibleError as e:
value = e
strategy._workers[result.worker_id].worker_queue.put(value)
else:
display.warning('Received an invalid object (%s) in the result queue: %r' % (type(result), result))
except (IOError, EOFError):
break
except queue.Empty:
pass
def debug_closure(func):
"""Closure to wrap ``StrategyBase._process_pending_results`` and invoke the task debugger"""
@functools.wraps(func)
def inner(self, iterator, one_pass=False, max_passes=None):
status_to_stats_map = (
('is_failed', 'failures'),
('is_unreachable', 'dark'),
('is_changed', 'changed'),
('is_skipped', 'skipped'),
)
# We don't know the host yet, copy the previous states, for lookup after we process new results
prev_host_states = iterator.host_states.copy()
results = func(self, iterator, one_pass=one_pass, max_passes=max_passes)
_processed_results = []
for result in results:
task = result._task
host = result._host
_queued_task_args = self._queued_task_cache.pop((host.name, task._uuid), None)
task_vars = _queued_task_args['task_vars']
play_context = _queued_task_args['play_context']
# Try to grab the previous host state, if it doesn't exist use get_host_state to generate an empty state
try:
prev_host_state = prev_host_states[host.name]
except KeyError:
prev_host_state = iterator.get_host_state(host)
while result.needs_debugger(globally_enabled=self.debugger_active):
next_action = NextAction()
dbg = Debugger(task, host, task_vars, play_context, result, next_action)
dbg.cmdloop()
if next_action.result == NextAction.REDO:
# rollback host state
self._tqm.clear_failed_hosts()
if task.run_once and iterator._play.strategy in add_internal_fqcns(('linear',)) and result.is_failed():
for host_name, state in prev_host_states.items():
if host_name == host.name:
continue
iterator.set_state_for_host(host_name, state)
iterator._play._removed_hosts.remove(host_name)
iterator.set_state_for_host(host.name, prev_host_state)
for method, what in status_to_stats_map:
if getattr(result, method)():
self._tqm._stats.decrement(what, host.name)
self._tqm._stats.decrement('ok', host.name)
# redo
self._queue_task(host, task, task_vars, play_context)
_processed_results.extend(debug_closure(func)(self, iterator, one_pass))
break
elif next_action.result == NextAction.CONTINUE:
_processed_results.append(result)
break
elif next_action.result == NextAction.EXIT:
# Matches KeyboardInterrupt from bin/ansible
sys.exit(99)
else:
_processed_results.append(result)
return _processed_results
return inner
class StrategyBase:
'''
This is the base class for strategy plugins, which contains some common
code useful to all strategies like running handlers, cleanup actions, etc.
'''
# by default, strategies should support throttling but we allow individual
# strategies to disable this and either forego supporting it or managing
# the throttling internally (as `free` does)
ALLOW_BASE_THROTTLING = True
def __init__(self, tqm):
self._tqm = tqm
self._inventory = tqm.get_inventory()
self._workers = tqm._workers
self._variable_manager = tqm.get_variable_manager()
self._loader = tqm.get_loader()
self._final_q = tqm._final_q
self._step = context.CLIARGS.get('step', False)
self._diff = context.CLIARGS.get('diff', False)
# the task cache is a dictionary of tuples of (host.name, task._uuid)
# used to find the original task object of in-flight tasks and to store
# the task args/vars and play context info used to queue the task.
self._queued_task_cache = {}
# Backwards compat: self._display isn't really needed, just import the global display and use that.
self._display = display
# internal counters
self._pending_results = 0
self._cur_worker = 0
# this dictionary is used to keep track of hosts that have
# outstanding tasks still in queue
self._blocked_hosts = dict()
self._results = deque()
self._results_lock = threading.Condition(threading.Lock())
self._worker_queues = dict()
# create the result processing thread for reading results in the background
self._results_thread = threading.Thread(target=results_thread_main, args=(self,))
self._results_thread.daemon = True
self._results_thread.start()
# holds the list of active (persistent) connections to be shutdown at
# play completion
self._active_connections = dict()
# Caches for get_host calls, to avoid calling excessively
# These values should be set at the top of the ``run`` method of each
# strategy plugin. Use ``_set_hosts_cache`` to set these values
self._hosts_cache = []
self._hosts_cache_all = []
self.debugger_active = C.ENABLE_TASK_DEBUGGER
def _set_hosts_cache(self, play, refresh=True):
"""Responsible for setting _hosts_cache and _hosts_cache_all
See comment in ``__init__`` for the purpose of these caches
"""
if not refresh and all((self._hosts_cache, self._hosts_cache_all)):
return
if not play.finalized and Templar(None).is_template(play.hosts):
_pattern = 'all'
else:
_pattern = play.hosts or 'all'
self._hosts_cache_all = [h.name for h in self._inventory.get_hosts(pattern=_pattern, ignore_restrictions=True)]
self._hosts_cache = [h.name for h in self._inventory.get_hosts(play.hosts, order=play.order)]
def cleanup(self):
# close active persistent connections
for sock in self._active_connections.values():
try:
conn = Connection(sock)
conn.reset()
except ConnectionError as e:
# most likely socket is already closed
display.debug("got an error while closing persistent connection: %s" % e)
self._final_q.put(_sentinel)
self._results_thread.join()
def run(self, iterator, play_context, result=0):
# execute one more pass through the iterator without peeking, to
# make sure that all of the hosts are advanced to their final task.
# This should be safe, as everything should be IteratingStates.COMPLETE by
# this point, though the strategy may not advance the hosts itself.
for host in self._hosts_cache:
if host not in self._tqm._unreachable_hosts:
try:
iterator.get_next_task_for_host(self._inventory.hosts[host])
except KeyError:
iterator.get_next_task_for_host(self._inventory.get_host(host))
        # return the appropriate code, depending on the status of the hosts after the run
if not isinstance(result, bool) and result != self._tqm.RUN_OK:
return result
elif len(self._tqm._unreachable_hosts.keys()) > 0:
return self._tqm.RUN_UNREACHABLE_HOSTS
elif len(iterator.get_failed_hosts()) > 0:
return self._tqm.RUN_FAILED_HOSTS
else:
return self._tqm.RUN_OK
def get_hosts_remaining(self, play):
self._set_hosts_cache(play, refresh=False)
ignore = set(self._tqm._failed_hosts).union(self._tqm._unreachable_hosts)
return [host for host in self._hosts_cache if host not in ignore]
def get_failed_hosts(self, play):
self._set_hosts_cache(play, refresh=False)
return [host for host in self._hosts_cache if host in self._tqm._failed_hosts]
def add_tqm_variables(self, vars, play):
'''
Base class method to add extra variables/information to the list of task
vars sent through the executor engine regarding the task queue manager state.
'''
vars['ansible_current_hosts'] = self.get_hosts_remaining(play)
vars['ansible_failed_hosts'] = self.get_failed_hosts(play)
def _queue_task(self, host, task, task_vars, play_context):
''' handles queueing the task up to be sent to a worker '''
display.debug("entering _queue_task() for %s/%s" % (host.name, task.action))
# Add a write lock for tasks.
# Maybe this should be added somewhere further up the call stack but
# this is the earliest in the code where we have task (1) extracted
# into its own variable and (2) there's only a single code path
# leading to the module being run. This is called by two
# functions: linear.py::run(), and
# free.py::run() so we'd have to add to both to do it there.
# The next common higher level is __init__.py::run() and that has
# tasks inside of play_iterator so we'd have to extract them to do it
# there.
if task.action not in action_write_locks.action_write_locks:
display.debug('Creating lock for %s' % task.action)
action_write_locks.action_write_locks[task.action] = Lock()
# create a templar and template things we need later for the queuing process
templar = Templar(loader=self._loader, variables=task_vars)
try:
throttle = int(templar.template(task.throttle))
except Exception as e:
raise AnsibleError("Failed to convert the throttle value to an integer.", obj=task._ds, orig_exc=e)
# and then queue the new task
try:
            # Determine the "rewind point" of the worker list: the index past
            # which we wrap back around to the first worker. Normally that is
            # simply the length of the workers list (as determined by the
            # forks or serial setting), however a task/block/play may
            # "throttle" that limit down.
rewind_point = len(self._workers)
if throttle > 0 and self.ALLOW_BASE_THROTTLING:
if task.run_once:
display.debug("Ignoring 'throttle' as 'run_once' is also set for '%s'" % task.get_name())
else:
if throttle <= rewind_point:
display.debug("task: %s, throttle: %d" % (task.get_name(), throttle))
rewind_point = throttle
queued = False
starting_worker = self._cur_worker
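            # Cycle through the worker slots looking for a free one (a slot
            # whose process is unset or no longer alive). If we wrap all the
            # way back to the starting slot without queuing anything, sleep
            # briefly so the loop does not busy-spin while every worker is occupied.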
while True:
if self._cur_worker >= rewind_point:
self._cur_worker = 0
worker_prc = self._workers[self._cur_worker]
if worker_prc is None or not worker_prc.is_alive():
self._queued_task_cache[(host.name, task._uuid)] = {
'host': host,
'task': task,
'task_vars': task_vars,
'play_context': play_context
}
# Pass WorkerProcess its strategy worker number so it can send an identifier along with intra-task requests
worker_prc = WorkerProcess(
self._final_q, task_vars, host, task, play_context, self._loader, self._variable_manager, plugin_loader, self._cur_worker,
)
self._workers[self._cur_worker] = worker_prc
self._tqm.send_callback('v2_runner_on_start', host, task)
worker_prc.start()
display.debug("worker is %d (out of %d available)" % (self._cur_worker + 1, len(self._workers)))
queued = True
self._cur_worker += 1
if self._cur_worker >= rewind_point:
self._cur_worker = 0
if queued:
break
elif self._cur_worker == starting_worker:
time.sleep(0.0001)
self._pending_results += 1
except (EOFError, IOError, AssertionError) as e:
# most likely an abort
display.debug("got an error while queuing: %s" % e)
return
display.debug("exiting _queue_task() for %s/%s" % (host.name, task.action))
def get_task_hosts(self, iterator, task_host, task):
if task.run_once:
host_list = [host for host in self._hosts_cache if host not in self._tqm._unreachable_hosts]
else:
host_list = [task_host.name]
return host_list
def get_delegated_hosts(self, result, task):
host_name = result.get('_ansible_delegated_vars', {}).get('ansible_delegated_host', None)
return [host_name or task.delegate_to]
def _set_always_delegated_facts(self, result, task):
"""Sets host facts for ``delegate_to`` hosts for facts that should
always be delegated
This operation mutates ``result`` to remove the always delegated facts
See ``ALWAYS_DELEGATE_FACT_PREFIXES``
"""
if task.delegate_to is None:
return
facts = result['ansible_facts']
always_keys = set()
_add = always_keys.add
for fact_key in facts:
for always_key in ALWAYS_DELEGATE_FACT_PREFIXES:
if fact_key.startswith(always_key):
_add(fact_key)
if always_keys:
_pop = facts.pop
always_facts = {
'ansible_facts': dict((k, _pop(k)) for k in list(facts) if k in always_keys)
}
host_list = self.get_delegated_hosts(result, task)
_set_host_facts = self._variable_manager.set_host_facts
for target_host in host_list:
_set_host_facts(target_host, always_facts)
def normalize_task_result(self, task_result):
"""Normalize a TaskResult to reference actual Host and Task objects
when only given the ``Host.name``, or the ``Task._uuid``
Only the ``Host.name`` and ``Task._uuid`` are commonly sent back from
the ``TaskExecutor`` or ``WorkerProcess`` due to performance concerns
Mutates the original object
"""
if isinstance(task_result._host, string_types):
# If the value is a string, it is ``Host.name``
task_result._host = self._inventory.get_host(to_text(task_result._host))
if isinstance(task_result._task, string_types):
# If the value is a string, it is ``Task._uuid``
queue_cache_entry = (task_result._host.name, task_result._task)
try:
found_task = self._queued_task_cache[queue_cache_entry]['task']
except KeyError:
# This should only happen due to an implicit task created by the
# TaskExecutor, restrict this behavior to the explicit use case
# of an implicit async_status task
if task_result._task_fields.get('action') != 'async_status':
raise
original_task = Task()
else:
original_task = found_task.copy(exclude_parent=True, exclude_tasks=True)
original_task._parent = found_task._parent
original_task.from_attrs(task_result._task_fields)
task_result._task = original_task
return task_result
def search_handlers_by_notification(self, notification: str, iterator: PlayIterator) -> t.Generator[Handler, None, None]:
templar = Templar(None)
# iterate in reversed order since last handler loaded with the same name wins
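        # First pass: match the notification against handler names. The break
        # below stops at the first match which, together with the reversed
        # iteration, makes the last-loaded handler with a given name win.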
for handler in (h for b in reversed(iterator._play.handlers) for h in b.block if h.name):
if not handler.cached_name:
if templar.is_template(handler.name):
templar.available_variables = self._variable_manager.get_vars(
play=iterator._play,
task=handler,
_hosts=self._hosts_cache,
_hosts_all=self._hosts_cache_all
)
try:
handler.name = templar.template(handler.name)
except (UndefinedError, AnsibleUndefinedVariable) as e:
# We skip this handler due to the fact that it may be using
# a variable in the name that was conditionally included via
# set_fact or some other method, and we don't want to error
# out unnecessarily
if not handler.listen:
display.warning(
"Handler '%s' is unusable because it has no listen topics and "
"the name could not be templated (host-specific variables are "
"not supported in handler names). The error: %s" % (handler.name, to_text(e))
)
continue
handler.cached_name = True
# first we check with the full result of get_name(), which may
# include the role name (if the handler is from a role). If that
# is not found, we resort to the simple name field, which doesn't
# have anything extra added to it.
if notification in {
handler.name,
handler.get_name(include_role_fqcn=False),
handler.get_name(include_role_fqcn=True),
}:
yield handler
break
templar.available_variables = {}
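        # Second pass: match the notification against each handler's `listen`
        # topics. Unlike the name-based pass above there is no break here, so
        # every listening handler is yielded -- including separate copies of a
        # handler pulled in repeatedly via role dependencies, which is the
        # duplicate-run behaviour this issue reports.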
for handler in (h for b in iterator._play.handlers for h in b.block):
if listeners := handler.listen:
if notification in handler.get_validated_value(
'listen',
handler.fattributes.get('listen'),
listeners,
templar,
):
yield handler
@debug_closure
def _process_pending_results(self, iterator, one_pass=False, max_passes=None):
'''
Reads results off the final queue and takes appropriate action
based on the result (executing callbacks, updating state, etc.).
'''
ret_results = []
handler_templar = Templar(self._loader)
cur_pass = 0
while True:
try:
self._results_lock.acquire()
task_result = self._results.popleft()
except IndexError:
break
finally:
self._results_lock.release()
original_host = task_result._host
original_task = task_result._task
# all host status messages contain 2 entries: (msg, task_result)
role_ran = False
if task_result.is_failed():
role_ran = True
ignore_errors = original_task.ignore_errors
if not ignore_errors:
# save the current state before failing it for later inspection
state_when_failed = iterator.get_state_for_host(original_host.name)
display.debug("marking %s as failed" % original_host.name)
if original_task.run_once:
# if we're using run_once, we have to fail every host here
for h in self._inventory.get_hosts(iterator._play.hosts):
if h.name not in self._tqm._unreachable_hosts:
iterator.mark_host_failed(h)
else:
iterator.mark_host_failed(original_host)
state, dummy = iterator.get_next_task_for_host(original_host, peek=True)
if iterator.is_failed(original_host) and state and state.run_state == IteratingStates.COMPLETE:
self._tqm._failed_hosts[original_host.name] = True
# if we're iterating on the rescue portion of a block then
# we save the failed task in a special var for use
# within the rescue/always
if iterator.is_any_block_rescuing(state_when_failed):
self._tqm._stats.increment('rescued', original_host.name)
iterator._play._removed_hosts.remove(original_host.name)
self._variable_manager.set_nonpersistent_facts(
original_host.name,
dict(
ansible_failed_task=wrap_var(original_task.serialize()),
ansible_failed_result=task_result._result,
),
)
else:
self._tqm._stats.increment('failures', original_host.name)
else:
self._tqm._stats.increment('ok', original_host.name)
self._tqm._stats.increment('ignored', original_host.name)
if 'changed' in task_result._result and task_result._result['changed']:
self._tqm._stats.increment('changed', original_host.name)
self._tqm.send_callback('v2_runner_on_failed', task_result, ignore_errors=ignore_errors)
elif task_result.is_unreachable():
ignore_unreachable = original_task.ignore_unreachable
if not ignore_unreachable:
self._tqm._unreachable_hosts[original_host.name] = True
iterator._play._removed_hosts.append(original_host.name)
self._tqm._stats.increment('dark', original_host.name)
else:
self._tqm._stats.increment('ok', original_host.name)
self._tqm._stats.increment('ignored', original_host.name)
self._tqm.send_callback('v2_runner_on_unreachable', task_result)
elif task_result.is_skipped():
self._tqm._stats.increment('skipped', original_host.name)
self._tqm.send_callback('v2_runner_on_skipped', task_result)
else:
role_ran = True
if original_task.loop:
# this task had a loop, and has more than one result, so
# loop over all of them instead of a single result
result_items = task_result._result.get('results', [])
else:
result_items = [task_result._result]
for result_item in result_items:
if '_ansible_notify' in result_item and task_result.is_changed():
# only ensure that notified handlers exist, if so save the notifications for when
                        # handlers are actually flushed so the last defined handlers are executed,
# otherwise depending on the setting either error or warn
host_state = iterator.get_state_for_host(original_host.name)
for notification in result_item['_ansible_notify']:
for handler in self.search_handlers_by_notification(notification, iterator):
if host_state.run_state == IteratingStates.HANDLERS:
# we're currently iterating handlers, so we need to expand this now
if handler.notify_host(original_host):
# NOTE even with notifications deduplicated this can still happen in case of handlers being
# notified multiple times using different names, like role name or fqcn
self._tqm.send_callback('v2_playbook_on_notify', handler, original_host)
else:
iterator.add_notification(original_host.name, notification)
display.vv(f"Notification for handler {notification} has been saved.")
break
else:
msg = (
f"The requested handler '{notification}' was not found in either the main handlers"
" list nor in the listening handlers list"
)
if C.ERROR_ON_MISSING_HANDLER:
raise AnsibleError(msg)
else:
display.warning(msg)
if 'add_host' in result_item:
# this task added a new host (add_host module)
new_host_info = result_item.get('add_host', dict())
self._inventory.add_dynamic_host(new_host_info, result_item)
# ensure host is available for subsequent plays
if result_item.get('changed') and new_host_info['host_name'] not in self._hosts_cache_all:
self._hosts_cache_all.append(new_host_info['host_name'])
elif 'add_group' in result_item:
# this task added a new group (group_by module)
self._inventory.add_dynamic_group(original_host, result_item)
if 'add_host' in result_item or 'add_group' in result_item:
item_vars = _get_item_vars(result_item, original_task)
found_task_vars = self._queued_task_cache.get((original_host.name, task_result._task._uuid))['task_vars']
if item_vars:
all_task_vars = combine_vars(found_task_vars, item_vars)
else:
all_task_vars = found_task_vars
all_task_vars[original_task.register] = wrap_var(result_item)
post_process_whens(result_item, original_task, handler_templar, all_task_vars)
if original_task.loop or original_task.loop_with:
new_item_result = TaskResult(
task_result._host,
task_result._task,
result_item,
task_result._task_fields,
)
self._tqm.send_callback('v2_runner_item_on_ok', new_item_result)
if result_item.get('changed', False):
task_result._result['changed'] = True
if result_item.get('failed', False):
task_result._result['failed'] = True
if 'ansible_facts' in result_item and original_task.action not in C._ACTION_DEBUG:
# if delegated fact and we are delegating facts, we need to change target host for them
if original_task.delegate_to is not None and original_task.delegate_facts:
host_list = self.get_delegated_hosts(result_item, original_task)
else:
# Set facts that should always be on the delegated hosts
self._set_always_delegated_facts(result_item, original_task)
host_list = self.get_task_hosts(iterator, original_host, original_task)
if original_task.action in C._ACTION_INCLUDE_VARS:
for (var_name, var_value) in result_item['ansible_facts'].items():
                        # find the host we're actually referring to here, which may
# be a host that is not really in inventory at all
for target_host in host_list:
self._variable_manager.set_host_variable(target_host, var_name, var_value)
else:
cacheable = result_item.pop('_ansible_facts_cacheable', False)
for target_host in host_list:
# so set_fact is a misnomer but 'cacheable = true' was meant to create an 'actual fact'
# to avoid issues with precedence and confusion with set_fact normal operation,
# we set BOTH fact and nonpersistent_facts (aka hostvar)
# when fact is retrieved from cache in subsequent operations it will have the lower precedence,
# but for playbook setting it the 'higher' precedence is kept
is_set_fact = original_task.action in C._ACTION_SET_FACT
if not is_set_fact or cacheable:
self._variable_manager.set_host_facts(target_host, result_item['ansible_facts'].copy())
if is_set_fact:
self._variable_manager.set_nonpersistent_facts(target_host, result_item['ansible_facts'].copy())
if 'ansible_stats' in result_item and 'data' in result_item['ansible_stats'] and result_item['ansible_stats']['data']:
if 'per_host' not in result_item['ansible_stats'] or result_item['ansible_stats']['per_host']:
host_list = self.get_task_hosts(iterator, original_host, original_task)
else:
host_list = [None]
data = result_item['ansible_stats']['data']
aggregate = 'aggregate' in result_item['ansible_stats'] and result_item['ansible_stats']['aggregate']
for myhost in host_list:
for k in data.keys():
if aggregate:
self._tqm._stats.update_custom_stats(k, data[k], myhost)
else:
self._tqm._stats.set_custom_stats(k, data[k], myhost)
if 'diff' in task_result._result:
if self._diff or getattr(original_task, 'diff', False):
self._tqm.send_callback('v2_on_file_diff', task_result)
if not isinstance(original_task, TaskInclude):
self._tqm._stats.increment('ok', original_host.name)
if 'changed' in task_result._result and task_result._result['changed']:
self._tqm._stats.increment('changed', original_host.name)
# finally, send the ok for this task
self._tqm.send_callback('v2_runner_on_ok', task_result)
# register final results
if original_task.register:
if not isidentifier(original_task.register):
raise AnsibleError("Invalid variable name in 'register' specified: '%s'" % original_task.register)
host_list = self.get_task_hosts(iterator, original_host, original_task)
clean_copy = strip_internal_keys(module_response_deepcopy(task_result._result))
if 'invocation' in clean_copy:
del clean_copy['invocation']
for target_host in host_list:
self._variable_manager.set_nonpersistent_facts(target_host, {original_task.register: clean_copy})
self._pending_results -= 1
if original_host.name in self._blocked_hosts:
del self._blocked_hosts[original_host.name]
# If this is a role task, mark the parent role as being run (if
# the task was ok or failed, but not skipped or unreachable)
if original_task._role is not None and role_ran: # TODO: and original_task.action not in C._ACTION_INCLUDE_ROLE:?
# lookup the role in the role cache to make sure we're dealing
# with the correct object and mark it as executed
role_obj = self._get_cached_role(original_task, iterator._play)
role_obj._had_task_run[original_host.name] = True
ret_results.append(task_result)
if isinstance(original_task, Handler):
for handler in (h for b in iterator._play.handlers for h in b.block if h._uuid == original_task._uuid):
handler.remove_host(original_host)
if one_pass or max_passes is not None and (cur_pass + 1) >= max_passes:
break
cur_pass += 1
return ret_results
def _wait_on_pending_results(self, iterator):
'''
Wait for the shared counter to drop to zero, using a short sleep
between checks to ensure we don't spin lock
'''
ret_results = []
display.debug("waiting for pending results...")
while self._pending_results > 0 and not self._tqm._terminated:
if self._tqm.has_dead_workers():
raise AnsibleError("A worker was found in a dead state")
results = self._process_pending_results(iterator)
ret_results.extend(results)
if self._pending_results > 0:
time.sleep(C.DEFAULT_INTERNAL_POLL_INTERVAL)
display.debug("no more pending results, returning what we have")
return ret_results
def _copy_included_file(self, included_file):
'''
A proven safe and performant way to create a copy of an included file
'''
ti_copy = included_file._task.copy(exclude_parent=True)
ti_copy._parent = included_file._task._parent
temp_vars = ti_copy.vars | included_file._vars
ti_copy.vars = temp_vars
return ti_copy
def _load_included_file(self, included_file, iterator, is_handler=False):
'''
Loads an included YAML file of tasks, applying the optional set of variables.
Raises AnsibleError exception in case of a failure during including a file,
in such case the caller is responsible for marking the host(s) as failed
using PlayIterator.mark_host_failed().
'''
display.debug("loading included file: %s" % included_file._filename)
try:
data = self._loader.load_from_file(included_file._filename)
if data is None:
return []
elif not isinstance(data, list):
raise AnsibleError("included task files must contain a list of tasks")
ti_copy = self._copy_included_file(included_file)
block_list = load_list_of_blocks(
data,
play=iterator._play,
parent_block=ti_copy.build_parent_block(),
role=included_file._task._role,
use_handlers=is_handler,
loader=self._loader,
variable_manager=self._variable_manager,
)
# since we skip incrementing the stats when the task result is
# first processed, we do so now for each host in the list
for host in included_file._hosts:
self._tqm._stats.increment('ok', host.name)
except AnsibleParserError:
raise
except AnsibleError as e:
if isinstance(e, AnsibleFileNotFound):
reason = "Could not find or access '%s' on the Ansible Controller." % to_text(e.file_name)
else:
reason = to_text(e)
for r in included_file._results:
r._result['failed'] = True
for host in included_file._hosts:
tr = TaskResult(host=host, task=included_file._task, return_data=dict(failed=True, reason=reason))
self._tqm._stats.increment('failures', host.name)
self._tqm.send_callback('v2_runner_on_failed', tr)
raise AnsibleError(reason) from e
# finally, send the callback and return the list of blocks loaded
self._tqm.send_callback('v2_playbook_on_include', included_file)
display.debug("done processing included file")
return block_list
def _take_step(self, task, host=None):
ret = False
msg = u'Perform task: %s ' % task
if host:
msg += u'on %s ' % host
msg += u'(N)o/(y)es/(c)ontinue: '
resp = display.prompt(msg)
if resp.lower() in ['y', 'yes']:
display.debug("User ran task")
ret = True
elif resp.lower() in ['c', 'continue']:
display.debug("User ran task and canceled step mode")
self._step = False
ret = True
else:
display.debug("User skipped task")
display.banner(msg)
return ret
def _cond_not_supported_warn(self, task_name):
display.warning("%s task does not support when conditional" % task_name)
def _execute_meta(self, task, play_context, iterator, target_host):
# meta tasks store their args in the _raw_params field of args,
# since they do not use k=v pairs, so get that
meta_action = task.args.get('_raw_params')
def _evaluate_conditional(h):
all_vars = self._variable_manager.get_vars(play=iterator._play, host=h, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
templar = Templar(loader=self._loader, variables=all_vars)
return task.evaluate_conditional(templar, all_vars)
skipped = False
msg = meta_action
skip_reason = '%s conditional evaluated to False' % meta_action
if isinstance(task, Handler):
self._tqm.send_callback('v2_playbook_on_handler_task_start', task)
else:
self._tqm.send_callback('v2_playbook_on_task_start', task, is_conditional=False)
# These don't support "when" conditionals
if meta_action in ('noop', 'refresh_inventory', 'reset_connection') and task.when:
self._cond_not_supported_warn(meta_action)
if meta_action == 'noop':
msg = "noop"
elif meta_action == 'flush_handlers':
if _evaluate_conditional(target_host):
host_state = iterator.get_state_for_host(target_host.name)
# actually notify proper handlers based on all notifications up to this point
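                # Expand each saved notification into concrete handler objects,
                # notify them, then clear the notification so it cannot fire
                # again on a later flush.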
for notification in list(host_state.handler_notifications):
for handler in self.search_handlers_by_notification(notification, iterator):
if handler.notify_host(target_host):
# NOTE even with notifications deduplicated this can still happen in case of handlers being
# notified multiple times using different names, like role name or fqcn
self._tqm.send_callback('v2_playbook_on_notify', handler, target_host)
iterator.clear_notification(target_host.name, notification)
if host_state.run_state == IteratingStates.HANDLERS:
raise AnsibleError('flush_handlers cannot be used as a handler')
if target_host.name not in self._tqm._unreachable_hosts:
host_state.pre_flushing_run_state = host_state.run_state
host_state.run_state = IteratingStates.HANDLERS
msg = "triggered running handlers for %s" % target_host.name
else:
skipped = True
skip_reason += ', not running handlers for %s' % target_host.name
elif meta_action == 'refresh_inventory':
self._inventory.refresh_inventory()
self._set_hosts_cache(iterator._play)
msg = "inventory successfully refreshed"
elif meta_action == 'clear_facts':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
hostname = host.get_name()
self._variable_manager.clear_facts(hostname)
msg = "facts cleared"
else:
skipped = True
skip_reason += ', not clearing facts and fact cache for %s' % target_host.name
elif meta_action == 'clear_host_errors':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
self._tqm._failed_hosts.pop(host.name, False)
self._tqm._unreachable_hosts.pop(host.name, False)
iterator.clear_host_errors(host)
msg = "cleared host errors"
else:
skipped = True
skip_reason += ', not clearing host error state for %s' % target_host.name
elif meta_action == 'end_batch':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
if host.name not in self._tqm._unreachable_hosts:
iterator.set_run_state_for_host(host.name, IteratingStates.COMPLETE)
msg = "ending batch"
else:
skipped = True
skip_reason += ', continuing current batch'
elif meta_action == 'end_play':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
if host.name not in self._tqm._unreachable_hosts:
iterator.set_run_state_for_host(host.name, IteratingStates.COMPLETE)
# end_play is used in PlaybookExecutor/TQM to indicate that
# the whole play is supposed to be ended as opposed to just a batch
iterator.end_play = True
msg = "ending play"
else:
skipped = True
skip_reason += ', continuing play'
elif meta_action == 'end_host':
if _evaluate_conditional(target_host):
iterator.set_run_state_for_host(target_host.name, IteratingStates.COMPLETE)
iterator._play._removed_hosts.append(target_host.name)
msg = "ending play for %s" % target_host.name
else:
skipped = True
skip_reason += ", continuing execution for %s" % target_host.name
# TODO: Nix msg here? Left for historical reasons, but skip_reason exists now.
msg = "end_host conditional evaluated to false, continuing execution for %s" % target_host.name
elif meta_action == 'role_complete':
# Allow users to use this in a play as reported in https://github.com/ansible/ansible/issues/22286?
# How would this work with allow_duplicates??
if task.implicit:
role_obj = self._get_cached_role(task, iterator._play)
role_obj._completed[target_host.name] = True
msg = 'role_complete for %s' % target_host.name
elif meta_action == 'reset_connection':
all_vars = self._variable_manager.get_vars(play=iterator._play, host=target_host, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
templar = Templar(loader=self._loader, variables=all_vars)
# apply the given task's information to the connection info,
# which may override some fields already set by the play or
# the options specified on the command line
play_context = play_context.set_task_and_variable_override(task=task, variables=all_vars, templar=templar)
# fields set from the play/task may be based on variables, so we have to
# do the same kind of post validation step on it here before we use it.
play_context.post_validate(templar=templar)
# now that the play context is finalized, if the remote_addr is not set
# default to using the host's address field as the remote address
if not play_context.remote_addr:
play_context.remote_addr = target_host.address
# We also add "magic" variables back into the variables dict to make sure
# a certain subset of variables exist. This 'mostly' works here cause meta
# disregards the loop, but should not really use play_context at all
play_context.update_vars(all_vars)
if target_host in self._active_connections:
connection = Connection(self._active_connections[target_host])
del self._active_connections[target_host]
else:
connection = plugin_loader.connection_loader.get(play_context.connection, play_context, os.devnull)
connection.set_options(task_keys=task.dump_attrs(), var_options=all_vars)
play_context.set_attributes_from_plugin(connection)
if connection:
try:
connection.reset()
msg = 'reset connection'
except ConnectionError as e:
# most likely socket is already closed
display.debug("got an error while closing persistent connection: %s" % e)
else:
msg = 'no connection, nothing to reset'
else:
raise AnsibleError("invalid meta action requested: %s" % meta_action, obj=task._ds)
result = {'msg': msg}
if skipped:
result['skipped'] = True
result['skip_reason'] = skip_reason
else:
result['changed'] = False
if not task.implicit:
header = skip_reason if skipped else msg
display.vv(f"META: {header}")
if isinstance(task, Handler):
task.remove_host(target_host)
res = TaskResult(target_host, task, result)
if skipped:
self._tqm.send_callback('v2_runner_on_skipped', res)
return [res]
def _get_cached_role(self, task, play):
role_path = task._role.get_role_path()
role_cache = play.role_cache[role_path]
try:
idx = role_cache.index(task._role)
return role_cache[idx]
except ValueError:
raise AnsibleError(f'Cannot locate {task._role.get_name()} in role cache')
def get_hosts_left(self, iterator):
''' returns list of available hosts for this iterator by filtering out unreachables '''
hosts_left = []
for host in self._hosts_cache:
if host not in self._tqm._unreachable_hosts:
try:
hosts_left.append(self._inventory.hosts[host])
except KeyError:
hosts_left.append(self._inventory.get_host(host))
return hosts_left
def update_active_connections(self, results):
''' updates the current active persistent connections '''
for r in results:
if 'args' in r._task_fields:
socket_path = r._task_fields['args'].get('_ansible_socket')
if socket_path:
if r._host not in self._active_connections:
self._active_connections[r._host] = socket_path
class NextAction(object):
""" The next action after an interpreter's exit. """
REDO = 1
CONTINUE = 2
EXIT = 3
def __init__(self, result=EXIT):
self.result = result
class Debugger(cmd.Cmd):
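    # Interactive per-result task debugger built on cmd.Cmd; instantiated by
    # debug_closure() whenever a task result requests the debugger.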
prompt_continuous = '> ' # multiple lines
def __init__(self, task, host, task_vars, play_context, result, next_action):
# cmd.Cmd is old-style class
cmd.Cmd.__init__(self)
self.prompt = '[%s] %s (debug)> ' % (host, task)
self.intro = None
self.scope = {}
self.scope['task'] = task
self.scope['task_vars'] = task_vars
self.scope['host'] = host
self.scope['play_context'] = play_context
self.scope['result'] = result
self.next_action = next_action
def cmdloop(self):
try:
cmd.Cmd.cmdloop(self)
except KeyboardInterrupt:
pass
do_h = cmd.Cmd.do_help
def do_EOF(self, args):
"""Quit"""
return self.do_quit(args)
def do_quit(self, args):
"""Quit"""
display.display('User interrupted execution')
self.next_action.result = NextAction.EXIT
return True
do_q = do_quit
def do_continue(self, args):
"""Continue to next result"""
self.next_action.result = NextAction.CONTINUE
return True
do_c = do_continue
def do_redo(self, args):
"""Schedule task for re-execution. The re-execution may not be the next result"""
self.next_action.result = NextAction.REDO
return True
do_r = do_redo
def do_update_task(self, args):
"""Recreate the task from ``task._ds``, and template with updated ``task_vars``"""
templar = Templar(None, variables=self.scope['task_vars'])
task = self.scope['task']
task = task.load_data(task._ds)
task.post_validate(templar)
self.scope['task'] = task
do_u = do_update_task
def evaluate(self, args):
try:
return eval(args, globals(), self.scope)
except Exception:
t, v = sys.exc_info()[:2]
if isinstance(t, str):
exc_type_name = t
else:
exc_type_name = t.__name__
display.display('***%s:%s' % (exc_type_name, repr(v)))
raise
def do_pprint(self, args):
"""Pretty Print"""
try:
result = self.evaluate(args)
display.display(pprint.pformat(result))
except Exception:
pass
do_p = do_pprint
def execute(self, args):
try:
code = compile(args + '\n', '<stdin>', 'single')
exec(code, globals(), self.scope)
except Exception:
t, v = sys.exc_info()[:2]
if isinstance(t, str):
exc_type_name = t
else:
exc_type_name = t.__name__
display.display('***%s:%s' % (exc_type_name, repr(v)))
raise
def default(self, line):
try:
self.execute(line)
except Exception:
pass
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,013 |
Handlers from dependencies are not deduplicated in ansible-core 2.15.0
|
### Summary
Handlers included from other roles via role dependencies are not deduplicated properly in ansible-core 2.15.0. This behaviour is specific to 2.15.0 and doesn't seem to be documented anywhere.
### Issue Type
Bug Report
### Component Name
ansible-core
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.0]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /tmp/ansible-test/lib64/python3.11/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /tmp/ansible-test/bin/ansible
python version = 3.11.2 (main, Apr 5 2023, 11:57:00) [GCC 8.5.0 20210514 (Red Hat 8.5.0-18)] (/tmp/ansible-test/bin/python3.11)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
AlmaLinux 8
### Steps to Reproduce
1. Create a "global" role with a handler:
```
# roles/global/handlers/main.yml
- name: a global handler
debug:
msg: "a global handler has been triggered"
listen: "global handler"
```
2. Create two roles with tasks that notify this handler, define the "global" role as a dependency in both roles:
```
# roles/role1/meta/main.yml
dependencies:
- global
```
```
# roles/role1/tasks/main.yml
- name: role1/task1
debug:
changed_when: true
notify: "global handler"
```
```
# roles/role2/meta/main.yml
dependencies:
- global
```
```
# roles/role2/tasks/main.yml
- name: role2/task1
debug:
changed_when: true
notify: "global handler"
```
3. Create a playbook:
```
# playbook.yml
- hosts: localhost
roles:
- role1
- role2
```
4. Resulting file tree:
```
.
βββ playbook.yml
βββ roles
βββ global
βΒ Β βββ handlers
βΒ Β βββ main.yml
βββ role1
βΒ Β βββ meta
βΒ Β βΒ Β βββ main.yml
βΒ Β βββ tasks
βΒ Β βββ main.yml
βββ role2
βββ meta
βΒ Β βββ main.yml
βββ tasks
βββ main.yml
```
5. Run the playbook with Ansible 2.15, verify that the handler has been invoked twice.
### Expected Results
Ansible 2.14 deduplicates the handler:
```
$ ansible-playbook --version
ansible-playbook [core 2.14.6]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /tmp/ansible-test/lib64/python3.11/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /tmp/ansible-test/bin/ansible-playbook
python version = 3.11.2 (main, Apr 5 2023, 11:57:00) [GCC 8.5.0 20210514 (Red Hat 8.5.0-18)] (/tmp/ansible-test/bin/python3.11)
jinja version = 3.1.2
libyaml = True
```
```
$ ansible-playbook playbook.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [localhost] ****************************************************************************************************************************************************************************
TASK [Gathering Facts] **********************************************************************************************************************************************************************
ok: [localhost]
TASK [role1 : role1/task1] ******************************************************************************************************************************************************************
changed: [localhost] => {
"msg": "Hello world!"
}
TASK [role2 : role2/task1] ******************************************************************************************************************************************************************
changed: [localhost] => {
"msg": "Hello world!"
}
RUNNING HANDLER [global : a global handler] *************************************************************************************************************************************************
ok: [localhost] => {
"msg": "a global handler has been triggered"
}
PLAY RECAP **********************************************************************************************************************************************************************************
localhost : ok=4 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
Ansible 2.15 runs this handler twice (see actual results).
### Actual Results
```console
$ ansible-playbook playbook.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [localhost] ****************************************************************************************************************************************************************************
TASK [Gathering Facts] **********************************************************************************************************************************************************************
ok: [localhost]
TASK [role1 : role1/task1] ******************************************************************************************************************************************************************
changed: [localhost] => {
"msg": "Hello world!"
}
TASK [role2 : role2/task1] ******************************************************************************************************************************************************************
changed: [localhost] => {
"msg": "Hello world!"
}
RUNNING HANDLER [global : a global handler] *************************************************************************************************************************************************
ok: [localhost] => {
"msg": "a global handler has been triggered"
}
RUNNING HANDLER [global : a global handler] *************************************************************************************************************************************************
ok: [localhost] => {
"msg": "a global handler has been triggered"
}
PLAY RECAP **********************************************************************************************************************************************************************************
localhost : ok=5 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81013
|
https://github.com/ansible/ansible/pull/81358
|
bd3ffbe10903125993d7d68fa8cfd687124a241f
|
0cba3b7504c1aabe0fc3773e3ff3ac024edeb308
| 2023-06-09T11:47:43Z |
python
| 2023-08-15T13:03:56Z |
test/integration/targets/handlers/roles/test_listen_role_dedup_global/handlers/main.yml
| |
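The file_content field above is empty. Judging by the reproduction steps in this issue, the fixture presumably mirrors the repro's "global" role handler; a plausible sketch (inferred from the issue body, not the verbatim repo file):
```yaml
# inferred fixture for roles/test_listen_role_dedup_global/handlers/main.yml
- name: a global handler
  debug:
    msg: "a global handler has been triggered"
  listen: "global handler"
```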
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,013 |
Handlers from dependencies are not deduplicated in ansible-core 2.15.0
|
|
https://github.com/ansible/ansible/issues/81013
|
https://github.com/ansible/ansible/pull/81358
|
bd3ffbe10903125993d7d68fa8cfd687124a241f
|
0cba3b7504c1aabe0fc3773e3ff3ac024edeb308
| 2023-06-09T11:47:43Z |
python
| 2023-08-15T13:03:56Z |
test/integration/targets/handlers/roles/test_listen_role_dedup_role1/meta/main.yml
| |
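Another empty file_content field. A plausible sketch of the role1 meta file, with the dependency name inferred from the updated_file paths in this dump:
```yaml
# inferred fixture for roles/test_listen_role_dedup_role1/meta/main.yml
dependencies:
  - test_listen_role_dedup_global
```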
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,013 |
Handlers from dependencies are not deduplicated in ansible-core 2.15.0
|
|
https://github.com/ansible/ansible/issues/81013
|
https://github.com/ansible/ansible/pull/81358
|
bd3ffbe10903125993d7d68fa8cfd687124a241f
|
0cba3b7504c1aabe0fc3773e3ff3ac024edeb308
| 2023-06-09T11:47:43Z |
python
| 2023-08-15T13:03:56Z |
test/integration/targets/handlers/roles/test_listen_role_dedup_role1/tasks/main.yml
| |
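Empty here as well; presumably this mirrors the repro's role1 task, e.g.:
```yaml
# inferred fixture for roles/test_listen_role_dedup_role1/tasks/main.yml
- name: role1/task1
  debug:
  changed_when: true
  notify: "global handler"
```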
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,013 |
Handlers from dependencies are not deduplicated in ansible-core 2.15.0
|
|
https://github.com/ansible/ansible/issues/81013
|
https://github.com/ansible/ansible/pull/81358
|
bd3ffbe10903125993d7d68fa8cfd687124a241f
|
0cba3b7504c1aabe0fc3773e3ff3ac024edeb308
| 2023-06-09T11:47:43Z |
python
| 2023-08-15T13:03:56Z |
test/integration/targets/handlers/roles/test_listen_role_dedup_role2/meta/main.yml
| |
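Presumably the role2 counterpart of the meta file sketched above:
```yaml
# inferred fixture for roles/test_listen_role_dedup_role2/meta/main.yml
dependencies:
  - test_listen_role_dedup_global
```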
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,013 |
Handlers from dependencies are not deduplicated in ansible-core 2.15.0
|
|
https://github.com/ansible/ansible/issues/81013
|
https://github.com/ansible/ansible/pull/81358
|
bd3ffbe10903125993d7d68fa8cfd687124a241f
|
0cba3b7504c1aabe0fc3773e3ff3ac024edeb308
| 2023-06-09T11:47:43Z |
python
| 2023-08-15T13:03:56Z |
test/integration/targets/handlers/roles/test_listen_role_dedup_role2/tasks/main.yml
| |
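And presumably the role2 counterpart of the tasks file:
```yaml
# inferred fixture for roles/test_listen_role_dedup_role2/tasks/main.yml
- name: role2/task1
  debug:
  changed_when: true
  notify: "global handler"
```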
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,013 |
Handlers from dependencies are not deduplicated in ansible-core 2.15.0
|
|
https://github.com/ansible/ansible/issues/81013
|
https://github.com/ansible/ansible/pull/81358
|
bd3ffbe10903125993d7d68fa8cfd687124a241f
|
0cba3b7504c1aabe0fc3773e3ff3ac024edeb308
| 2023-06-09T11:47:43Z |
python
| 2023-08-15T13:03:56Z |
test/integration/targets/handlers/runme.sh
|
#!/usr/bin/env bash
set -eux
export ANSIBLE_FORCE_HANDLERS
ANSIBLE_FORCE_HANDLERS=false
# simple handler test
ansible-playbook test_handlers.yml -i inventory.handlers -v "$@" --tags scenario1
# simple from_handlers test
ansible-playbook from_handlers.yml -i inventory.handlers -v "$@" --tags scenario1
ansible-playbook test_listening_handlers.yml -i inventory.handlers -v "$@"
[ "$(ansible-playbook test_handlers.yml -i inventory.handlers -v "$@" --tags scenario2 -l A \
| grep -E -o 'RUNNING HANDLER \[test_handlers : .*]')" = "RUNNING HANDLER [test_handlers : test handler]" ]
# Test forcing handlers using the linear and free strategy
for strategy in linear free; do
export ANSIBLE_STRATEGY=$strategy
# Not forcing, should only run on successful host
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_B" ]
# Forcing from command line
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal --force-handlers \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing from command line, should only run later tasks on unfailed hosts
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal --force-handlers \
| grep -E -o CALLED_TASK_. | sort | uniq | xargs)" = "CALLED_TASK_B CALLED_TASK_D CALLED_TASK_E" ]
# Forcing from command line, should call handlers even if all hosts fail
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal --force-handlers -e fail_all=yes \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing from ansible.cfg
[ "$(ANSIBLE_FORCE_HANDLERS=true ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing true in play
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags force_true_in_play \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing false in play, which overrides command line
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags force_false_in_play --force-handlers \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_B" ]
# https://github.com/ansible/ansible/pull/80898
[ "$(ansible-playbook 80880.yml -i inventory.handlers -vv "$@" 2>&1)" ]
unset ANSIBLE_STRATEGY
done
[ "$(ansible-playbook test_handlers_include.yml -i ../../inventory -v "$@" --tags playbook_include_handlers \
| grep -E -o 'RUNNING HANDLER \[.*]')" = "RUNNING HANDLER [test handler]" ]
[ "$(ansible-playbook test_handlers_include.yml -i ../../inventory -v "$@" --tags role_include_handlers \
| grep -E -o 'RUNNING HANDLER \[test_handlers_include : .*]')" = "RUNNING HANDLER [test_handlers_include : test handler]" ]
[ "$(ansible-playbook test_handlers_include_role.yml -i ../../inventory -v "$@" \
| grep -E -o 'RUNNING HANDLER \[test_handlers_include_role : .*]')" = "RUNNING HANDLER [test_handlers_include_role : test handler]" ]
# Notify handler listen
ansible-playbook test_handlers_listen.yml -i inventory.handlers -v "$@"
# Notifying nonexistent handlers results in an error
set +e
result="$(ansible-playbook test_handlers_inexistent_notify.yml -i inventory.handlers "$@" 2>&1)"
set -e
grep -q "ERROR! The requested handler 'notify_inexistent_handler' was not found in either the main handlers list nor in the listening handlers list" <<< "$result"
# Notifying nonexistent handlers produces no error when ANSIBLE_ERROR_ON_MISSING_HANDLER=false
ANSIBLE_ERROR_ON_MISSING_HANDLER=false ansible-playbook test_handlers_inexistent_notify.yml -i inventory.handlers -v "$@"
ANSIBLE_ERROR_ON_MISSING_HANDLER=false ansible-playbook test_templating_in_handlers.yml -v "$@"
# https://github.com/ansible/ansible/issues/36649
output_dir=/tmp
set +e
result="$(ansible-playbook test_handlers_any_errors_fatal.yml -e output_dir=$output_dir -i inventory.handlers -v "$@" 2>&1)"
set -e
[ ! -f $output_dir/should_not_exist_B ] || (rm -f $output_dir/should_not_exist_B && exit 1)
# https://github.com/ansible/ansible/issues/47287
[ "$(ansible-playbook test_handlers_including_task.yml -i ../../inventory -v "$@" | grep -E -o 'failed=[0-9]+')" = "failed=0" ]
# https://github.com/ansible/ansible/issues/71222
ansible-playbook test_role_handlers_including_tasks.yml -i ../../inventory -v "$@"
# https://github.com/ansible/ansible/issues/27237
set +e
result="$(ansible-playbook test_handlers_template_run_once.yml -i inventory.handlers "$@" 2>&1)"
set -e
grep -q "handler A" <<< "$result"
grep -q "handler B" <<< "$result"
# Test that an undefined variable in another handler's name isn't a failure
ansible-playbook 58841.yml "$@" --tags lazy_evaluation 2>&1 | tee out.txt ; cat out.txt
grep out.txt -e "\[WARNING\]: Handler 'handler name with {{ test_var }}' is unusable"
[ "$(grep out.txt -ce 'handler ran')" = "1" ]
[ "$(grep out.txt -ce 'handler with var ran')" = "0" ]
# Test templating a handler name with a defined variable
ansible-playbook 58841.yml "$@" --tags evaluation_time -e test_var=myvar | tee out.txt ; cat out.txt
[ "$(grep out.txt -ce 'handler ran')" = "0" ]
[ "$(grep out.txt -ce 'handler with var ran')" = "1" ]
# Test the handler is not found when the variable is undefined
ansible-playbook 58841.yml "$@" --tags evaluation_time 2>&1 | tee out.txt ; cat out.txt
grep out.txt -e "ERROR! The requested handler 'handler name with myvar' was not found"
grep out.txt -e "\[WARNING\]: Handler 'handler name with {{ test_var }}' is unusable"
[ "$(grep out.txt -ce 'handler ran')" = "0" ]
[ "$(grep out.txt -ce 'handler with var ran')" = "0" ]
# Test include_role and import_role cannot be used as handlers
ansible-playbook test_role_as_handler.yml "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! Using 'include_role' as a handler is not supported."
# Test that notifying a handler from within include_tasks no longer works
ansible-playbook test_notify_included.yml "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'I was included')" = "1" ]
grep out.txt -e "ERROR! The requested handler 'handler_from_include' was not found in either the main handlers list nor in the listening handlers list"
ansible-playbook test_handlers_meta.yml -i inventory.handlers -vv "$@" | tee out.txt
[ "$(grep out.txt -ce 'RUNNING HANDLER \[noop_handler\]')" = "1" ]
[ "$(grep out.txt -ce 'META: noop')" = "1" ]
# https://github.com/ansible/ansible/issues/46447
set +e
test "$(ansible-playbook 46447.yml -i inventory.handlers -vv "$@" 2>&1 | grep -c 'SHOULD NOT GET HERE')"
set -e
# https://github.com/ansible/ansible/issues/52561
ansible-playbook 52561.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler1 ran')" = "1" ]
# Test flush_handlers meta task does not imply any_errors_fatal
ansible-playbook 54991.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran')" = "4" ]
ansible-playbook order.yml -i inventory.handlers "$@" 2>&1
set +e
ansible-playbook order.yml --force-handlers -e test_force_handlers=true -i inventory.handlers "$@" 2>&1
set -e
ansible-playbook include_handlers_fail_force.yml --force-handlers -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'included handler ran')" = "1" ]
ansible-playbook test_flush_handlers_as_handler.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! flush_handlers cannot be used as a handler"
ansible-playbook test_skip_flush.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran')" = "0" ]
ansible-playbook test_flush_in_rescue_always.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran in rescue')" = "1" ]
[ "$(grep out.txt -ce 'handler ran in always')" = "2" ]
[ "$(grep out.txt -ce 'lockstep works')" = "2" ]
ansible-playbook test_handlers_infinite_loop.yml -i inventory.handlers "$@" 2>&1
ansible-playbook test_flush_handlers_rescue_always.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'rescue ran')" = "1" ]
[ "$(grep out.txt -ce 'always ran')" = "2" ]
[ "$(grep out.txt -ce 'should run for both hosts')" = "2" ]
ansible-playbook test_fqcn_meta_flush_handlers.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
grep out.txt -e "handler ran"
grep out.txt -e "after flush"
ansible-playbook 79776.yml -i inventory.handlers "$@"
ansible-playbook test_block_as_handler.yml "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! Using a block as a handler is not supported."
ansible-playbook test_block_as_handler-include.yml "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! Using a block as a handler is not supported."
ansible-playbook test_block_as_handler-import.yml "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! Using a block as a handler is not supported."
ansible-playbook test_include_role_handler_once.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran')" = "1" ]
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,013 |
Handlers from dependencies are not deduplicated in ansible-core 2.15.0
|
|
https://github.com/ansible/ansible/issues/81013
|
https://github.com/ansible/ansible/pull/81358
|
bd3ffbe10903125993d7d68fa8cfd687124a241f
|
0cba3b7504c1aabe0fc3773e3ff3ac024edeb308
| 2023-06-09T11:47:43Z |
python
| 2023-08-15T13:03:56Z |
test/integration/targets/handlers/test_listen_role_dedup.yml
| |
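The test playbook content is also empty in this row. Given the role names in the updated_file list, a plausible sketch (inferred, not the verbatim repo file):
```yaml
# inferred fixture for test_listen_role_dedup.yml
- hosts: localhost
  gather_facts: false  # assumption; integration fixtures commonly skip facts
  roles:
    - test_listen_role_dedup_role1
    - test_listen_role_dedup_role2
```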
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,089 |
file module produces: struct.error: ushort format requires 0 <= number <= (0x7fff * 2 + 1)
|
### Summary
This works on Ansible 2.9.6 but fails on ansible-core 2.12.10 with Python 3.8.10.
Running on Ubuntu 20.04, Python version 3.8.10
**This task:**
```
- name: Create generic link activemq to installation of choice
file:
src: /opt/apache-activemq-5.15.10
dest: /opt/activemq
state: link
```
**Produces this error:**
```
TASK [Create generic link activemq to installation of choice] ******************
fatal: [localhost]: FAILED! => {
"changed": false,
"rc": 1
}
MSG:
MODULE FAILURE
See stdout/stderr for the exact error
MODULE_STDOUT:
Traceback (most recent call last):
File "/home/dfr/.ansible/tmp/ansible-tmp-24078.281748716-105360636672052/AnsiballZ_file.py", line 102, in <module>
_ansiballz_main()
File "/home/dfr/.ansible/tmp/ansible-tmp-24078.281748716-105360636672052/AnsiballZ_file.py", line 94, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/dfr/.ansible/tmp/ansible-tmp-24078.281748716-105360636672052/AnsiballZ_file.py", line 34, in invoke_module
z.writestr(zinfo, sitecustomize)
File "/usr/lib/python3.8/zipfile.py", line 1816, in writestr
with self.open(zinfo, mode='w') as dest:
File "/usr/lib/python3.8/zipfile.py", line 1517, in open
return self._open_to_write(zinfo, force_zip64=force_zip64)
File "/usr/lib/python3.8/zipfile.py", line 1614, in _open_to_write
self.fp.write(zinfo.FileHeader(zip64))
File "/usr/lib/python3.8/zipfile.py", line 448, in FileHeader
header = struct.pack(structFileHeader, stringFileHeader,
struct.error: ushort format requires 0 <= number <= (0x7fff * 2 + 1)
```
This works on Ansible 2.9.6 but fails on ansible-core 2.12.10 with Python 3.8.10.
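The traceback points at CPython's zipfile module rather than the file module itself: ZIP headers store timestamps in MS-DOS format, which only covers the years 1980-2107, and the `ansible-tmp-24078...` path above suggests the host clock was near the 1970 epoch when the module wrapper stamped its build date. A minimal standalone sketch of that failure mode (illustrative Python, not Ansible's code):
```python
import io
import zipfile

# The module wrapper writes sitecustomize.py into a zip with a build date
# apparently derived from the controller clock; forcing an out-of-range
# year reproduces the ushort overflow from the traceback deterministically.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    info = zipfile.ZipInfo("sitecustomize.py", date_time=(1970, 1, 1, 0, 0, 0))
    zf.writestr(info, "# payload")
    # -> struct.error: ushort format requires 0 <= number <= (0x7fff * 2 + 1)
```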
### Issue Type
Bug Report
### Component Name
file
### Ansible Version
```console
dfr@ansible-20:~/projects/m9kdeploy$ ansible --version
ansible [core 2.12.10]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/dfr/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
ansible collection location = /home/dfr/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.8.10 (default, Nov 14 2022, 12:59:47) [GCC 9.4.0]
jinja version = 2.10.1
libyaml = True
```
### Configuration
```console
dfr@ansible-20:~/projects/m9kdeploy/playbooks$ ansible-config dump --only-changed -t all
DEFAULT_HOST_LIST(/home/dfr/projects/m9kdeploy/playbooks/ansible.cfg) = ['/home/dfr/projects/m9kdeploy/playbooks/inventory-home']
DEFAULT_STDOUT_CALLBACK(/home/dfr/projects/m9kdeploy/playbooks/ansible.cfg) = debug
DEFAULT_VAULT_PASSWORD_FILE(/home/dfr/projects/m9kdeploy/playbooks/ansible.cfg) = /home/dfr/.ansible/vault-password
DISPLAY_SKIPPED_HOSTS(/home/dfr/projects/m9kdeploy/playbooks/ansible.cfg) = False
HOST_KEY_CHECKING(/home/dfr/projects/m9kdeploy/playbooks/ansible.cfg) = False
INTERPRETER_PYTHON(/home/dfr/projects/m9kdeploy/playbooks/ansible.cfg) = /usr/bin/python3
BECOME:
======
CACHE:
=====
CALLBACK:
========
default:
_______
display_skipped_hosts(/home/dfr/projects/m9kdeploy/playbooks/ansible.cfg) = False
CLICONF:
=======
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/home/dfr/projects/m9kdeploy/playbooks/ansible.cfg) = False
ssh:
___
host_key_checking(/home/dfr/projects/m9kdeploy/playbooks/ansible.cfg) = False
HTTPAPI:
=======
INVENTORY:
=========
LOOKUP:
======
NETCONF:
=======
SHELL:
=====
VARS:
====
```
### OS / Environment
Ubuntu 20.04
### Steps to Reproduce
```yaml
- name: Create generic link activemq to installation of choice
file:
src: /opt/apache-activemq-5.15.10
dest: /opt/activemq
state: link
```
### Expected Results
I expect the link to be created with no error
### Actual Results
```console
TASK [Create generic link activemq to installation of choice] ******************
fatal: [localhost]: FAILED! => {
"changed": false,
"rc": 1
}
MSG:
MODULE FAILURE
See stdout/stderr for the exact error
MODULE_STDOUT:
Traceback (most recent call last):
File "/home/dfr/.ansible/tmp/ansible-tmp-24078.281748716-105360636672052/AnsiballZ_file.py", line 102, in <module>
_ansiballz_main()
File "/home/dfr/.ansible/tmp/ansible-tmp-24078.281748716-105360636672052/AnsiballZ_file.py", line 94, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/dfr/.ansible/tmp/ansible-tmp-24078.281748716-105360636672052/AnsiballZ_file.py", line 34, in invoke_module
z.writestr(zinfo, sitecustomize)
File "/usr/lib/python3.8/zipfile.py", line 1816, in writestr
with self.open(zinfo, mode='w') as dest:
File "/usr/lib/python3.8/zipfile.py", line 1517, in open
return self._open_to_write(zinfo, force_zip64=force_zip64)
File "/usr/lib/python3.8/zipfile.py", line 1614, in _open_to_write
self.fp.write(zinfo.FileHeader(zip64))
File "/usr/lib/python3.8/zipfile.py", line 448, in FileHeader
header = struct.pack(structFileHeader, stringFileHeader,
struct.error: ushort format requires 0 <= number <= (0x7fff * 2 + 1)
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80089
|
https://github.com/ansible/ansible/pull/81521
|
9afaf2216b0bf38cfbd9a0c93409959f78d01820
|
a2673cb56438400bc02f89b58316597e517afb52
| 2023-02-24T02:26:33Z |
python
| 2023-08-17T19:08:53Z |
changelogs/fragments/80089-prevent-module-build-date-issue.yml
| |
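The changelog fragment content is empty in this row; a plausible sketch of what such a fragment might say (hypothetical wording, the actual text ships with PR #81521):
```yaml
# hypothetical wording; the real fragment is in PR #81521
bugfixes:
  - AnsiballZ module payloads now keep the embedded zip build date within
    the 1980-2107 range supported by the ZIP format, preventing
    ``struct.error`` when the controller clock falls outside that window.
```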
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,089 |
file module produces: struct.error: ushort format requires 0 <= number <= (0x7fff * 2 + 1)
|
### Summary
This works on ansible 2.9.6 but fails on 3.8.10
Running on Ubuntu 20.04, Python version 3.8.10
**This task:**
```
- name: Create generic link activemq to installation of choice
file:
src: /opt/apache-activemq-5.15.10
dest: /opt/activemq
state: link
```
**Produces this error:**
```
TASK [Create generic link activemq to installation of choice] ******************
fatal: [localhost]: FAILED! => {
"changed": false,
"rc": 1
}
MSG:
MODULE FAILURE
See stdout/stderr for the exact error
MODULE_STDOUT:
Traceback (most recent call last):
File "/home/dfr/.ansible/tmp/ansible-tmp-24078.281748716-105360636672052/AnsiballZ_file.py", line 102, in <module>
_ansiballz_main()
File "/home/dfr/.ansible/tmp/ansible-tmp-24078.281748716-105360636672052/AnsiballZ_file.py", line 94, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/dfr/.ansible/tmp/ansible-tmp-24078.281748716-105360636672052/AnsiballZ_file.py", line 34, in invoke_module
z.writestr(zinfo, sitecustomize)
File "/usr/lib/python3.8/zipfile.py", line 1816, in writestr
with self.open(zinfo, mode='w') as dest:
File "/usr/lib/python3.8/zipfile.py", line 1517, in open
return self._open_to_write(zinfo, force_zip64=force_zip64)
File "/usr/lib/python3.8/zipfile.py", line 1614, in _open_to_write
self.fp.write(zinfo.FileHeader(zip64))
File "/usr/lib/python3.8/zipfile.py", line 448, in FileHeader
header = struct.pack(structFileHeader, stringFileHeader,
struct.error: ushort format requires 0 <= number <= (0x7fff * 2 + 1)
```
This works on Ansible 2.9.6 but fails on Python 3.8.10.
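The traceback bottoms out in `zipfile`'s header packing: ZIP local file headers store timestamps in MS-DOS format, where the year is encoded as an offset from 1980 inside an unsigned short, so a system clock outside 1980-2107 overflows `struct.pack`. A minimal sketch of the same failure outside Ansible (the 1970 date is an assumed stand-in for a misconfigured clock, not a value taken from this report):
```python
import io
import zipfile

zinfo = zipfile.ZipInfo()                  # default date_time is a valid 1980 date
zinfo.filename = 'sitecustomize.py'
zinfo.date_time = (1970, 1, 1, 0, 0, 0)    # assigning directly skips __init__'s range check

buf = io.BytesIO()
with zipfile.ZipFile(buf, mode='w') as z:
    try:
        z.writestr(zinfo, b'')
    except Exception as exc:
        # struct.error: ushort format requires 0 <= number <= (0x7fff * 2 + 1)
        print(type(exc).__name__, exc)
```
The AnsiballZ wrapper builds its `ZipInfo` the same way (construct first, assign `date_time` afterwards), so a bad date is only caught when `FileHeader` is packed. The temp-path timestamp in the traceback (`ansible-tmp-24078.28...`, i.e. seconds since the epoch) suggests a clock set near 1970.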
### Issue Type
Bug Report
### Component Name
file
### Ansible Version
```console
dfr@ansible-20:~/projects/m9kdeploy$ ansible --version
ansible [core 2.12.10]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/dfr/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
ansible collection location = /home/dfr/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.8.10 (default, Nov 14 2022, 12:59:47) [GCC 9.4.0]
jinja version = 2.10.1
libyaml = True
```
### Configuration
```console
dfr@ansible-20:~/projects/m9kdeploy/playbooks$ ansible-config dump --only-changed -t all
DEFAULT_HOST_LIST(/home/dfr/projects/m9kdeploy/playbooks/ansible.cfg) = ['/home/dfr/projects/m9kdeploy/playbooks/inventory-home']
DEFAULT_STDOUT_CALLBACK(/home/dfr/projects/m9kdeploy/playbooks/ansible.cfg) = debug
DEFAULT_VAULT_PASSWORD_FILE(/home/dfr/projects/m9kdeploy/playbooks/ansible.cfg) = /home/dfr/.ansible/vault-password
DISPLAY_SKIPPED_HOSTS(/home/dfr/projects/m9kdeploy/playbooks/ansible.cfg) = False
HOST_KEY_CHECKING(/home/dfr/projects/m9kdeploy/playbooks/ansible.cfg) = False
INTERPRETER_PYTHON(/home/dfr/projects/m9kdeploy/playbooks/ansible.cfg) = /usr/bin/python3
BECOME:
======
CACHE:
=====
CALLBACK:
========
default:
_______
display_skipped_hosts(/home/dfr/projects/m9kdeploy/playbooks/ansible.cfg) = False
CLICONF:
=======
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/home/dfr/projects/m9kdeploy/playbooks/ansible.cfg) = False
ssh:
___
host_key_checking(/home/dfr/projects/m9kdeploy/playbooks/ansible.cfg) = False
HTTPAPI:
=======
INVENTORY:
=========
LOOKUP:
======
NETCONF:
=======
SHELL:
=====
VARS:
====
```
### OS / Environment
Ubuntu 20.04
### Steps to Reproduce
```yaml
- name: Create generic link activemq to installation of choice
file:
src: /opt/apache-activemq-5.15.10
dest: /opt/activemq
state: link
```
### Expected Results
I expect the link to be created with no error
### Actual Results
```console
TASK [Create generic link activemq to installation of choice] ******************
fatal: [localhost]: FAILED! => {
"changed": false,
"rc": 1
}
MSG:
MODULE FAILURE
See stdout/stderr for the exact error
MODULE_STDOUT:
Traceback (most recent call last):
File "/home/dfr/.ansible/tmp/ansible-tmp-24078.281748716-105360636672052/AnsiballZ_file.py", line 102, in <module>
_ansiballz_main()
File "/home/dfr/.ansible/tmp/ansible-tmp-24078.281748716-105360636672052/AnsiballZ_file.py", line 94, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/dfr/.ansible/tmp/ansible-tmp-24078.281748716-105360636672052/AnsiballZ_file.py", line 34, in invoke_module
z.writestr(zinfo, sitecustomize)
File "/usr/lib/python3.8/zipfile.py", line 1816, in writestr
with self.open(zinfo, mode='w') as dest:
File "/usr/lib/python3.8/zipfile.py", line 1517, in open
return self._open_to_write(zinfo, force_zip64=force_zip64)
File "/usr/lib/python3.8/zipfile.py", line 1614, in _open_to_write
self.fp.write(zinfo.FileHeader(zip64))
File "/usr/lib/python3.8/zipfile.py", line 448, in FileHeader
header = struct.pack(structFileHeader, stringFileHeader,
struct.error: ushort format requires 0 <= number <= (0x7fff * 2 + 1)
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80089
|
https://github.com/ansible/ansible/pull/81521
|
9afaf2216b0bf38cfbd9a0c93409959f78d01820
|
a2673cb56438400bc02f89b58316597e517afb52
| 2023-02-24T02:26:33Z |
python
| 2023-08-17T19:08:53Z |
lib/ansible/executor/module_common.py
|
# (c) 2013-2014, Michael DeHaan <[email protected]>
# (c) 2015 Toshio Kuratomi <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import ast
import base64
import datetime
import json
import os
import shlex
import zipfile
import re
import pkgutil
from ast import AST, Import, ImportFrom
from io import BytesIO
from ansible.release import __version__, __author__
from ansible import constants as C
from ansible.errors import AnsibleError
from ansible.executor.interpreter_discovery import InterpreterDiscoveryRequiredError
from ansible.executor.powershell import module_manifest as ps_manifest
from ansible.module_utils.common.json import AnsibleJSONEncoder
from ansible.module_utils.common.text.converters import to_bytes, to_text, to_native
from ansible.plugins.loader import module_utils_loader
from ansible.utils.collection_loader._collection_finder import _get_collection_metadata, _nested_dict_get
# Must import strategy and use write_locks from there
# If we import write_locks directly then we end up binding a
# variable to the object and then it never gets updated.
from ansible.executor import action_write_locks
from ansible.utils.display import Display
from collections import namedtuple
import importlib.util
import importlib.machinery
display = Display()
ModuleUtilsProcessEntry = namedtuple('ModuleUtilsProcessEntry', ['name_parts', 'is_ambiguous', 'has_redirected_child', 'is_optional'])
REPLACER = b"#<<INCLUDE_ANSIBLE_MODULE_COMMON>>"
REPLACER_VERSION = b"\"<<ANSIBLE_VERSION>>\""
REPLACER_COMPLEX = b"\"<<INCLUDE_ANSIBLE_MODULE_COMPLEX_ARGS>>\""
REPLACER_WINDOWS = b"# POWERSHELL_COMMON"
REPLACER_JSONARGS = b"<<INCLUDE_ANSIBLE_MODULE_JSON_ARGS>>"
REPLACER_SELINUX = b"<<SELINUX_SPECIAL_FILESYSTEMS>>"
# We could end up writing out parameters with unicode characters so we need to
# specify an encoding for the python source file
ENCODING_STRING = u'# -*- coding: utf-8 -*-'
b_ENCODING_STRING = b'# -*- coding: utf-8 -*-'
# module_common is relative to module_utils, so fix the path
_MODULE_UTILS_PATH = os.path.join(os.path.dirname(__file__), '..', 'module_utils')
# ******************************************************************************
ANSIBALLZ_TEMPLATE = u'''%(shebang)s
%(coding)s
_ANSIBALLZ_WRAPPER = True # For test-module.py script to tell this is a ANSIBALLZ_WRAPPER
# This code is part of Ansible, but is an independent component.
# The code in this particular templatable string, and this templatable string
# only, is BSD licensed. Modules which end up using this snippet, which is
# dynamically combined together by Ansible still belong to the author of the
# module, and they may assign their own license to the complete work.
#
# Copyright (c), James Cammarata, 2016
# Copyright (c), Toshio Kuratomi, 2016
#
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
def _ansiballz_main():
import os
import os.path
# Access to the working directory is required by Python when using pipelining, as well as for the coverage module.
# Some platforms, such as macOS, may not allow querying the working directory when using become to drop privileges.
try:
os.getcwd()
except OSError:
try:
os.chdir(os.path.expanduser('~'))
except OSError:
os.chdir('/')
%(rlimit)s
import sys
import __main__
# For some distros and python versions we pick up this script in the temporary
# directory. This leads to problems when the ansible module masks a python
# library that another import needs. We have not figured out what about the
# specific distros and python versions causes this to behave differently.
#
# Tested distros:
# Fedora23 with python3.4 Works
# Ubuntu15.10 with python2.7 Works
# Ubuntu15.10 with python3.4 Fails without this
# Ubuntu16.04.1 with python3.5 Fails without this
# To test on another platform:
# * use the copy module (since this shadows the stdlib copy module)
# * Turn off pipelining
# * Make sure that the destination file does not exist
# * ansible ubuntu16-test -m copy -a 'src=/etc/motd dest=/var/tmp/m'
# This will traceback in shutil. Looking at the complete traceback will show
# that shutil is importing copy which finds the ansible module instead of the
# stdlib module
scriptdir = None
try:
scriptdir = os.path.dirname(os.path.realpath(__main__.__file__))
except (AttributeError, OSError):
# Some platforms don't set __file__ when reading from stdin
# OSX raises OSError if using abspath() in a directory we don't have
# permission to read (realpath calls abspath)
pass
# Strip cwd from sys.path to avoid potential permissions issues
excludes = set(('', '.', scriptdir))
sys.path = [p for p in sys.path if p not in excludes]
import base64
import runpy
import shutil
import tempfile
import zipfile
if sys.version_info < (3,):
PY3 = False
else:
PY3 = True
ZIPDATA = """%(zipdata)s"""
# Note: temp_path isn't needed once we switch to zipimport
def invoke_module(modlib_path, temp_path, json_params):
# When installed via setuptools (including python setup.py install),
# ansible may be installed with an easy-install.pth file. That file
# may load the system-wide install of ansible rather than the one in
# the module. sitecustomize is the only way to override that setting.
z = zipfile.ZipFile(modlib_path, mode='a')
# py3: modlib_path will be text, py2: it's bytes. Need bytes at the end
sitecustomize = u'import sys\\nsys.path.insert(0,"%%s")\\n' %% modlib_path
sitecustomize = sitecustomize.encode('utf-8')
# Use a ZipInfo to work around zipfile limitation on hosts with
# clocks set to a pre-1980 year (for instance, Raspberry Pi)
zinfo = zipfile.ZipInfo()
zinfo.filename = 'sitecustomize.py'
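        # NB: ZIP stores MS-DOS dates, which can only represent years 1980-2107.
        # A controller clock outside that window makes zipfile's FileHeader fail
        # in struct.pack with the ushort range error reported in this issue.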
zinfo.date_time = ( %(year)i, %(month)i, %(day)i, %(hour)i, %(minute)i, %(second)i)
z.writestr(zinfo, sitecustomize)
z.close()
# Put the zipped up module_utils we got from the controller first in the python path so that we
# can monkeypatch the right basic
sys.path.insert(0, modlib_path)
# Monkeypatch the parameters into basic
from ansible.module_utils import basic
basic._ANSIBLE_ARGS = json_params
%(coverage)s
# Run the module! By importing it as '__main__', it thinks it is executing as a script
runpy.run_module(mod_name='%(module_fqn)s', init_globals=dict(_module_fqn='%(module_fqn)s', _modlib_path=modlib_path),
run_name='__main__', alter_sys=True)
# Ansible modules must exit themselves
print('{"msg": "New-style module did not handle its own exit", "failed": true}')
sys.exit(1)
def debug(command, zipped_mod, json_params):
# The code here normally doesn't run. It's only used for debugging on the
# remote machine.
#
# The subcommands in this function make it easier to debug ansiballz
# modules. Here's the basic steps:
#
# Run ansible with the environment variable: ANSIBLE_KEEP_REMOTE_FILES=1 and -vvv
# to save the module file remotely::
# $ ANSIBLE_KEEP_REMOTE_FILES=1 ansible host1 -m ping -a 'data=october' -vvv
#
# Part of the verbose output will tell you where on the remote machine the
# module was written to::
# [...]
# <host1> SSH: EXEC ssh -C -q -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o
# PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o
# ControlPath=/home/badger/.ansible/cp/ansible-ssh-%%h-%%p-%%r -tt rhel7 '/bin/sh -c '"'"'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8
# LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/badger/.ansible/tmp/ansible-tmp-1461173013.93-9076457629738/ping'"'"''
# [...]
#
# Login to the remote machine and run the module file via from the previous
# step with the explode subcommand to extract the module payload into
# source files::
# $ ssh host1
# $ /usr/bin/python /home/badger/.ansible/tmp/ansible-tmp-1461173013.93-9076457629738/ping explode
# Module expanded into:
# /home/badger/.ansible/tmp/ansible-tmp-1461173408.08-279692652635227/ansible
#
# You can now edit the source files to instrument the code or experiment with
# different parameter values. When you're ready to run the code you've modified
# (instead of the code from the actual zipped module), use the execute subcommand like this::
# $ /usr/bin/python /home/badger/.ansible/tmp/ansible-tmp-1461173013.93-9076457629738/ping execute
# Okay to use __file__ here because we're running from a kept file
basedir = os.path.join(os.path.abspath(os.path.dirname(__file__)), 'debug_dir')
args_path = os.path.join(basedir, 'args')
if command == 'explode':
# transform the ZIPDATA into an exploded directory of code and then
# print the path to the code. This is an easy way for people to look
# at the code on the remote machine for debugging it in that
# environment
z = zipfile.ZipFile(zipped_mod)
for filename in z.namelist():
if filename.startswith('/'):
raise Exception('Something wrong with this module zip file: should not contain absolute paths')
dest_filename = os.path.join(basedir, filename)
if dest_filename.endswith(os.path.sep) and not os.path.exists(dest_filename):
os.makedirs(dest_filename)
else:
directory = os.path.dirname(dest_filename)
if not os.path.exists(directory):
os.makedirs(directory)
f = open(dest_filename, 'wb')
f.write(z.read(filename))
f.close()
# write the args file
f = open(args_path, 'wb')
f.write(json_params)
f.close()
print('Module expanded into:')
print('%%s' %% basedir)
exitcode = 0
elif command == 'execute':
# Execute the exploded code instead of executing the module from the
# embedded ZIPDATA. This allows people to easily run their modified
# code on the remote machine to see how changes will affect it.
# Set pythonpath to the debug dir
sys.path.insert(0, basedir)
# read in the args file which the user may have modified
with open(args_path, 'rb') as f:
json_params = f.read()
# Monkeypatch the parameters into basic
from ansible.module_utils import basic
basic._ANSIBLE_ARGS = json_params
# Run the module! By importing it as '__main__', it thinks it is executing as a script
runpy.run_module(mod_name='%(module_fqn)s', init_globals=None, run_name='__main__', alter_sys=True)
# Ansible modules must exit themselves
print('{"msg": "New-style module did not handle its own exit", "failed": true}')
sys.exit(1)
else:
print('WARNING: Unknown debug command. Doing nothing.')
exitcode = 0
return exitcode
#
# See comments in the debug() method for information on debugging
#
ANSIBALLZ_PARAMS = %(params)s
if PY3:
ANSIBALLZ_PARAMS = ANSIBALLZ_PARAMS.encode('utf-8')
try:
# There's a race condition with the controller removing the
# remote_tmpdir and this module executing under async. So we cannot
# store this in remote_tmpdir (use system tempdir instead)
# Only need to use [ansible_module]_payload_ in the temp_path until we move to zipimport
# (this helps ansible-test produce coverage stats)
temp_path = tempfile.mkdtemp(prefix='ansible_%(ansible_module)s_payload_')
zipped_mod = os.path.join(temp_path, 'ansible_%(ansible_module)s_payload.zip')
with open(zipped_mod, 'wb') as modlib:
modlib.write(base64.b64decode(ZIPDATA))
if len(sys.argv) == 2:
exitcode = debug(sys.argv[1], zipped_mod, ANSIBALLZ_PARAMS)
else:
# Note: temp_path isn't needed once we switch to zipimport
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
finally:
try:
shutil.rmtree(temp_path)
except (NameError, OSError):
# tempdir creation probably failed
pass
sys.exit(exitcode)
if __name__ == '__main__':
_ansiballz_main()
'''
ANSIBALLZ_COVERAGE_TEMPLATE = '''
os.environ['COVERAGE_FILE'] = '%(coverage_output)s=python-%%s=coverage' %% '.'.join(str(v) for v in sys.version_info[:2])
import atexit
try:
import coverage
except ImportError:
print('{"msg": "Could not import `coverage` module.", "failed": true}')
sys.exit(1)
cov = coverage.Coverage(config_file='%(coverage_config)s')
def atexit_coverage():
cov.stop()
cov.save()
atexit.register(atexit_coverage)
cov.start()
'''
ANSIBALLZ_COVERAGE_CHECK_TEMPLATE = '''
try:
if PY3:
import importlib.util
if importlib.util.find_spec('coverage') is None:
raise ImportError
else:
import imp
imp.find_module('coverage')
except ImportError:
print('{"msg": "Could not find `coverage` module.", "failed": true}')
sys.exit(1)
'''
ANSIBALLZ_RLIMIT_TEMPLATE = '''
import resource
existing_soft, existing_hard = resource.getrlimit(resource.RLIMIT_NOFILE)
# adjust soft limit subject to existing hard limit
requested_soft = min(existing_hard, %(rlimit_nofile)d)
if requested_soft != existing_soft:
try:
resource.setrlimit(resource.RLIMIT_NOFILE, (requested_soft, existing_hard))
except ValueError:
# some platforms (eg macOS) lie about their hard limit
pass
'''
def _strip_comments(source):
# Strip comments and blank lines from the wrapper
buf = []
for line in source.splitlines():
l = line.strip()
if not l or l.startswith(u'#'):
continue
buf.append(line)
return u'\n'.join(buf)
if C.DEFAULT_KEEP_REMOTE_FILES:
# Keep comments when KEEP_REMOTE_FILES is set. That way users will see
# the comments with some nice usage instructions
ACTIVE_ANSIBALLZ_TEMPLATE = ANSIBALLZ_TEMPLATE
else:
# ANSIBALLZ_TEMPLATE stripped of comments for smaller over the wire size
ACTIVE_ANSIBALLZ_TEMPLATE = _strip_comments(ANSIBALLZ_TEMPLATE)
# dirname(dirname(dirname(site-packages/ansible/executor/module_common.py))) == site-packages
# Do this instead of getting site-packages from distutils.sysconfig so we work when we
# haven't been installed
site_packages = os.path.dirname(os.path.dirname(os.path.dirname(__file__)))
CORE_LIBRARY_PATH_RE = re.compile(r'%s/(?P<path>ansible/modules/.*)\.(py|ps1)$' % re.escape(site_packages))
COLLECTION_PATH_RE = re.compile(r'/(?P<path>ansible_collections/[^/]+/[^/]+/plugins/modules/.*)\.(py|ps1)$')
# Detect new-style Python modules by looking for required imports:
# import ansible_collections.[my_ns.my_col.plugins.module_utils.my_module_util]
# from ansible_collections.[my_ns.my_col.plugins.module_utils import my_module_util]
# import ansible.module_utils[.basic]
# from ansible.module_utils[ import basic]
# from ansible.module_utils[.basic import AnsibleModule]
# from ..module_utils[ import basic]
# from ..module_utils[.basic import AnsibleModule]
NEW_STYLE_PYTHON_MODULE_RE = re.compile(
# Relative imports
br'(?:from +\.{2,} *module_utils.* +import |'
# Collection absolute imports:
br'from +ansible_collections\.[^.]+\.[^.]+\.plugins\.module_utils.* +import |'
br'import +ansible_collections\.[^.]+\.[^.]+\.plugins\.module_utils.*|'
# Core absolute imports
br'from +ansible\.module_utils.* +import |'
br'import +ansible\.module_utils\.)'
)
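# For illustration: a module containing the line
#   from ansible.module_utils.basic import AnsibleModule
# matches this pattern and is treated as a new-style (AnsiballZ) module, while
# plain stdlib imports do not match.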
class ModuleDepFinder(ast.NodeVisitor):
def __init__(self, module_fqn, tree, is_pkg_init=False, *args, **kwargs):
"""
Walk the ast tree for the python module.
:arg module_fqn: The fully qualified name to reach this module in dotted notation.
example: ansible.module_utils.basic
:arg is_pkg_init: Inform the finder it's looking at a package init (eg __init__.py) to allow
relative import expansion to use the proper package level without having imported it locally first.
Save submodule[.submoduleN][.identifier] into self.submodules
when they are from ansible.module_utils or ansible_collections packages
self.submodules will end up with tuples like:
- ('ansible', 'module_utils', 'basic',)
- ('ansible', 'module_utils', 'urls', 'fetch_url')
- ('ansible', 'module_utils', 'database', 'postgres')
- ('ansible', 'module_utils', 'database', 'postgres', 'quote')
- ('ansible_collections', 'my_ns', 'my_col', 'plugins', 'module_utils', 'foo')
It's up to calling code to determine whether the final element of the
tuple are module names or something else (function, class, or variable names)
.. seealso:: :python3:class:`ast.NodeVisitor`
"""
super(ModuleDepFinder, self).__init__(*args, **kwargs)
self._tree = tree # squirrel this away so we can compare node parents to it
self.submodules = set()
self.optional_imports = set()
self.module_fqn = module_fqn
self.is_pkg_init = is_pkg_init
self._visit_map = {
Import: self.visit_Import,
ImportFrom: self.visit_ImportFrom,
}
self.visit(tree)
def generic_visit(self, node):
"""Overridden ``generic_visit`` that makes some assumptions about our
use case, and improves performance by calling visitors directly instead
of calling ``visit`` to offload calling visitors.
"""
generic_visit = self.generic_visit
visit_map = self._visit_map
for field, value in ast.iter_fields(node):
if isinstance(value, list):
for item in value:
if isinstance(item, (Import, ImportFrom)):
item.parent = node
visit_map[item.__class__](item)
elif isinstance(item, AST):
generic_visit(item)
visit = generic_visit
def visit_Import(self, node):
"""
Handle import ansible.module_utils.MODLIB[.MODLIBn] [as asname]
We save these as interesting submodules when the imported library is in ansible.module_utils
or ansible.collections
"""
for alias in node.names:
if (alias.name.startswith('ansible.module_utils.') or
alias.name.startswith('ansible_collections.')):
py_mod = tuple(alias.name.split('.'))
self.submodules.add(py_mod)
# if the import's parent is the root document, it's a required import, otherwise it's optional
if node.parent != self._tree:
self.optional_imports.add(py_mod)
self.generic_visit(node)
def visit_ImportFrom(self, node):
"""
Handle from ansible.module_utils.MODLIB import [.MODLIBn] [as asname]
Also has to handle relative imports
We save these as interesting submodules when the imported library is in ansible.module_utils
or ansible.collections
"""
# FIXME: These should all get skipped:
# from ansible.executor import module_common
# from ...executor import module_common
# from ... import executor (Currently it gives a non-helpful error)
if node.level > 0:
# if we're in a package init, we have to add one to the node level (and make it none if 0 to preserve the right slicing behavior)
level_slice_offset = -node.level + 1 or None if self.is_pkg_init else -node.level
if self.module_fqn:
parts = tuple(self.module_fqn.split('.'))
if node.module:
# relative import: from .module import x
node_module = '.'.join(parts[:level_slice_offset] + (node.module,))
else:
# relative import: from . import x
node_module = '.'.join(parts[:level_slice_offset])
else:
# fall back to an absolute import
node_module = node.module
else:
# absolute import: from module import x
node_module = node.module
# Special case: six requires special handling because of its
# import logic
py_mod = None
if node.names[0].name == '_six':
self.submodules.add(('_six',))
elif node_module.startswith('ansible.module_utils'):
# from ansible.module_utils.MODULE1[.MODULEn] import IDENTIFIER [as asname]
# from ansible.module_utils.MODULE1[.MODULEn] import MODULEn+1 [as asname]
# from ansible.module_utils.MODULE1[.MODULEn] import MODULEn+1 [,IDENTIFIER] [as asname]
# from ansible.module_utils import MODULE1 [,MODULEn] [as asname]
py_mod = tuple(node_module.split('.'))
elif node_module.startswith('ansible_collections.'):
if node_module.endswith('plugins.module_utils') or '.plugins.module_utils.' in node_module:
# from ansible_collections.ns.coll.plugins.module_utils import MODULE [as aname] [,MODULE2] [as aname]
# from ansible_collections.ns.coll.plugins.module_utils.MODULE import IDENTIFIER [as aname]
# FIXME: Unhandled cornercase (needs to be ignored):
# from ansible_collections.ns.coll.plugins.[!module_utils].[FOO].plugins.module_utils import IDENTIFIER
py_mod = tuple(node_module.split('.'))
else:
# Not from module_utils so ignore. for instance:
# from ansible_collections.ns.coll.plugins.lookup import IDENTIFIER
pass
if py_mod:
for alias in node.names:
self.submodules.add(py_mod + (alias.name,))
# if the import's parent is the root document, it's a required import, otherwise it's optional
if node.parent != self._tree:
self.optional_imports.add(py_mod + (alias.name,))
self.generic_visit(node)
def _slurp(path):
if not os.path.exists(path):
raise AnsibleError("imported module support code does not exist at %s" % os.path.abspath(path))
with open(path, 'rb') as fd:
data = fd.read()
return data
def _get_shebang(interpreter, task_vars, templar, args=tuple(), remote_is_local=False):
"""
Handles the different ways ansible allows overriding the shebang target for a module.
"""
# FUTURE: add logical equivalence for python3 in the case of py3-only modules
interpreter_name = os.path.basename(interpreter).strip()
# name for interpreter var
interpreter_config = u'ansible_%s_interpreter' % interpreter_name
# key for config
interpreter_config_key = "INTERPRETER_%s" % interpreter_name.upper()
interpreter_out = None
# looking for python, rest rely on matching vars
if interpreter_name == 'python':
# skip detection for network os execution, use playbook supplied one if possible
if remote_is_local:
interpreter_out = task_vars['ansible_playbook_python']
# a config def exists for this interpreter type; consult config for the value
elif C.config.get_configuration_definition(interpreter_config_key):
interpreter_from_config = C.config.get_config_value(interpreter_config_key, variables=task_vars)
interpreter_out = templar.template(interpreter_from_config.strip())
# handle interpreter discovery if requested or empty interpreter was provided
if not interpreter_out or interpreter_out in ['auto', 'auto_legacy', 'auto_silent', 'auto_legacy_silent']:
discovered_interpreter_config = u'discovered_interpreter_%s' % interpreter_name
facts_from_task_vars = task_vars.get('ansible_facts', {})
if discovered_interpreter_config not in facts_from_task_vars:
# interpreter discovery is desired, but has not been run for this host
raise InterpreterDiscoveryRequiredError("interpreter discovery needed", interpreter_name=interpreter_name, discovery_mode=interpreter_out)
else:
interpreter_out = facts_from_task_vars[discovered_interpreter_config]
else:
raise InterpreterDiscoveryRequiredError("interpreter discovery required", interpreter_name=interpreter_name, discovery_mode='auto_legacy')
elif interpreter_config in task_vars:
# for non python we consult vars for a possible direct override
interpreter_out = templar.template(task_vars.get(interpreter_config).strip())
if not interpreter_out:
# nothing matched (None) or in case someone configures an empty string or empty interpreter
interpreter_out = interpreter
# set shebang
shebang = u'#!{0}'.format(interpreter_out)
if args:
shebang = shebang + u' ' + u' '.join(args)
return shebang, interpreter_out
class ModuleUtilLocatorBase:
def __init__(self, fq_name_parts, is_ambiguous=False, child_is_redirected=False, is_optional=False):
self._is_ambiguous = is_ambiguous
# a child package redirection could cause intermediate package levels to be missing, eg
# from ansible.module_utils.x.y.z import foo; if x.y.z.foo is redirected, we may not have packages on disk for
# the intermediate packages x.y.z, so we'll need to supply empty packages for those
self._child_is_redirected = child_is_redirected
self._is_optional = is_optional
self.found = False
self.redirected = False
self.fq_name_parts = fq_name_parts
self.source_code = ''
self.output_path = ''
self.is_package = False
self._collection_name = None
# for ambiguous imports, we should only test for things more than one level below module_utils
# this lets us detect erroneous imports and redirections earlier
if is_ambiguous and len(self._get_module_utils_remainder_parts(fq_name_parts)) > 1:
self.candidate_names = [fq_name_parts, fq_name_parts[:-1]]
else:
self.candidate_names = [fq_name_parts]
@property
def candidate_names_joined(self):
return ['.'.join(n) for n in self.candidate_names]
def _handle_redirect(self, name_parts):
module_utils_relative_parts = self._get_module_utils_remainder_parts(name_parts)
# only allow redirects from below module_utils- if above that, bail out (eg, parent package names)
if not module_utils_relative_parts:
return False
try:
collection_metadata = _get_collection_metadata(self._collection_name)
except ValueError as ve: # collection not found or some other error related to collection load
if self._is_optional:
return False
raise AnsibleError('error processing module_util {0} loading redirected collection {1}: {2}'
.format('.'.join(name_parts), self._collection_name, to_native(ve)))
routing_entry = _nested_dict_get(collection_metadata, ['plugin_routing', 'module_utils', '.'.join(module_utils_relative_parts)])
if not routing_entry:
return False
# FIXME: add deprecation warning support
dep_or_ts = routing_entry.get('tombstone')
removed = dep_or_ts is not None
if not removed:
dep_or_ts = routing_entry.get('deprecation')
if dep_or_ts:
removal_date = dep_or_ts.get('removal_date')
removal_version = dep_or_ts.get('removal_version')
warning_text = dep_or_ts.get('warning_text')
msg = 'module_util {0} has been removed'.format('.'.join(name_parts))
if warning_text:
msg += ' ({0})'.format(warning_text)
else:
msg += '.'
display.deprecated(msg, removal_version, removed, removal_date, self._collection_name)
if 'redirect' in routing_entry:
self.redirected = True
source_pkg = '.'.join(name_parts)
self.is_package = True # treat all redirects as packages
redirect_target_pkg = routing_entry['redirect']
# expand FQCN redirects
if not redirect_target_pkg.startswith('ansible_collections'):
split_fqcn = redirect_target_pkg.split('.')
if len(split_fqcn) < 3:
raise Exception('invalid redirect for {0}: {1}'.format(source_pkg, redirect_target_pkg))
# assume it's an FQCN, expand it
redirect_target_pkg = 'ansible_collections.{0}.{1}.plugins.module_utils.{2}'.format(
split_fqcn[0], # ns
split_fqcn[1], # coll
'.'.join(split_fqcn[2:]) # sub-module_utils remainder
)
display.vvv('redirecting module_util {0} to {1}'.format(source_pkg, redirect_target_pkg))
self.source_code = self._generate_redirect_shim_source(source_pkg, redirect_target_pkg)
return True
return False
def _get_module_utils_remainder_parts(self, name_parts):
# subclasses should override to return the name parts after module_utils
return []
def _get_module_utils_remainder(self, name_parts):
# return the remainder parts as a package string
return '.'.join(self._get_module_utils_remainder_parts(name_parts))
def _find_module(self, name_parts):
return False
def _locate(self, redirect_first=True):
for candidate_name_parts in self.candidate_names:
if redirect_first and self._handle_redirect(candidate_name_parts):
break
if self._find_module(candidate_name_parts):
break
if not redirect_first and self._handle_redirect(candidate_name_parts):
break
else: # didn't find what we were looking for- last chance for packages whose parents were redirected
if self._child_is_redirected: # make fake packages
self.is_package = True
self.source_code = ''
else: # nope, just bail
return
if self.is_package:
path_parts = candidate_name_parts + ('__init__',)
else:
path_parts = candidate_name_parts
self.found = True
self.output_path = os.path.join(*path_parts) + '.py'
self.fq_name_parts = candidate_name_parts
def _generate_redirect_shim_source(self, fq_source_module, fq_target_module):
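        # For illustration: redirecting ansible.module_utils.old_util (assumed
        # name) to ansible_collections.ns.coll.plugins.module_utils.new_util
        # produces a shim that imports the target and aliases the source's
        # dotted name to it in sys.modules, so existing imports keep resolving.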
return """
import sys
import {1} as mod
sys.modules['{0}'] = mod
""".format(fq_source_module, fq_target_module)
# FIXME: add __repr__ impl
class LegacyModuleUtilLocator(ModuleUtilLocatorBase):
def __init__(self, fq_name_parts, is_ambiguous=False, mu_paths=None, child_is_redirected=False):
super(LegacyModuleUtilLocator, self).__init__(fq_name_parts, is_ambiguous, child_is_redirected)
if fq_name_parts[0:2] != ('ansible', 'module_utils'):
raise Exception('this class can only locate from ansible.module_utils, got {0}'.format(fq_name_parts))
if fq_name_parts[2] == 'six':
# FIXME: handle the ansible.module_utils.six._six case with a redirect or an internal _six attr on six itself?
# six creates its submodules at runtime; convert all these to just 'ansible.module_utils.six'
fq_name_parts = ('ansible', 'module_utils', 'six')
self.candidate_names = [fq_name_parts]
self._mu_paths = mu_paths
self._collection_name = 'ansible.builtin' # legacy module utils always look in ansible.builtin for redirects
self._locate(redirect_first=False) # let local stuff override redirects for legacy
def _get_module_utils_remainder_parts(self, name_parts):
return name_parts[2:] # eg, foo.bar for ansible.module_utils.foo.bar
def _find_module(self, name_parts):
rel_name_parts = self._get_module_utils_remainder_parts(name_parts)
# no redirection; try to find the module
if len(rel_name_parts) == 1: # direct child of module_utils, just search the top-level dirs we were given
paths = self._mu_paths
else: # a nested submodule of module_utils, extend the paths given with the intermediate package names
paths = [os.path.join(p, *rel_name_parts[:-1]) for p in
self._mu_paths] # extend the MU paths with the relative bit
# find_spec needs the full module name
self._info = info = importlib.machinery.PathFinder.find_spec('.'.join(name_parts), paths)
if info is not None and os.path.splitext(info.origin)[1] in importlib.machinery.SOURCE_SUFFIXES:
self.is_package = info.origin.endswith('/__init__.py')
path = info.origin
else:
return False
self.source_code = _slurp(path)
return True
class CollectionModuleUtilLocator(ModuleUtilLocatorBase):
def __init__(self, fq_name_parts, is_ambiguous=False, child_is_redirected=False, is_optional=False):
super(CollectionModuleUtilLocator, self).__init__(fq_name_parts, is_ambiguous, child_is_redirected, is_optional)
if fq_name_parts[0] != 'ansible_collections':
raise Exception('CollectionModuleUtilLocator can only locate from ansible_collections, got {0}'.format(fq_name_parts))
elif len(fq_name_parts) >= 6 and fq_name_parts[3:5] != ('plugins', 'module_utils'):
raise Exception('CollectionModuleUtilLocator can only locate below ansible_collections.(ns).(coll).plugins.module_utils, got {0}'
.format(fq_name_parts))
self._collection_name = '.'.join(fq_name_parts[1:3])
self._locate()
def _find_module(self, name_parts):
# synthesize empty inits for packages down through module_utils- we don't want to allow those to be shipped over, but the
# package hierarchy needs to exist
if len(name_parts) < 6:
self.source_code = ''
self.is_package = True
return True
# NB: we can't use pkgutil.get_data safely here, since we don't want to import/execute package/module code on
# the controller while analyzing/assembling the module, so we'll have to manually import the collection's
# Python package to locate it (import root collection, reassemble resource path beneath, fetch source)
collection_pkg_name = '.'.join(name_parts[0:3])
resource_base_path = os.path.join(*name_parts[3:])
src = None
# look for package_dir first, then module
try:
src = pkgutil.get_data(collection_pkg_name, to_native(os.path.join(resource_base_path, '__init__.py')))
except ImportError:
pass
# TODO: we might want to synthesize fake inits for py3-style packages, for now they're required beneath module_utils
if src is not None: # empty string is OK
self.is_package = True
else:
try:
src = pkgutil.get_data(collection_pkg_name, to_native(resource_base_path + '.py'))
except ImportError:
pass
if src is None: # empty string is OK
return False
self.source_code = src
return True
def _get_module_utils_remainder_parts(self, name_parts):
return name_parts[5:] # eg, foo.bar for ansible_collections.ns.coll.plugins.module_utils.foo.bar
def recursive_finder(name, module_fqn, module_data, zf):
"""
Using ModuleDepFinder, make sure we have all of the module_utils files that
the module and its module_utils files needs. (no longer actually recursive)
:arg name: Name of the python module we're examining
:arg module_fqn: Fully qualified name of the python module we're scanning
:arg module_data: string Python code of the module we're scanning
:arg zf: An open :python:class:`zipfile.ZipFile` object that holds the Ansible module payload
which we're assembling
"""
# py_module_cache maps python module names to a tuple of the code in the module
# and the pathname to the module.
# Here we pre-load it with modules which we create without bothering to
# read from actual files (In some cases, these need to differ from what ansible
# ships because they're namespace packages in the module)
# FIXME: do we actually want ns pkg behavior for these? Seems like they should just be forced to emptyish pkg stubs
py_module_cache = {
('ansible',): (
b'from pkgutil import extend_path\n'
b'__path__=extend_path(__path__,__name__)\n'
b'__version__="' + to_bytes(__version__) +
b'"\n__author__="' + to_bytes(__author__) + b'"\n',
'ansible/__init__.py'),
('ansible', 'module_utils'): (
b'from pkgutil import extend_path\n'
b'__path__=extend_path(__path__,__name__)\n',
'ansible/module_utils/__init__.py')}
module_utils_paths = [p for p in module_utils_loader._get_paths(subdirs=False) if os.path.isdir(p)]
module_utils_paths.append(_MODULE_UTILS_PATH)
# Parse the module code and find the imports of ansible.module_utils
try:
tree = compile(module_data, '<unknown>', 'exec', ast.PyCF_ONLY_AST)
except (SyntaxError, IndentationError) as e:
raise AnsibleError("Unable to import %s due to %s" % (name, e.msg))
finder = ModuleDepFinder(module_fqn, tree)
# the format of this set is a tuple of the module name and whether or not the import is ambiguous as a module name
# or an attribute of a module (eg from x.y import z <-- is z a module or an attribute of x.y?)
modules_to_process = [ModuleUtilsProcessEntry(m, True, False, is_optional=m in finder.optional_imports) for m in finder.submodules]
# HACK: basic is currently always required since module global init is currently tied up with AnsiballZ arg input
modules_to_process.append(ModuleUtilsProcessEntry(('ansible', 'module_utils', 'basic'), False, False, is_optional=False))
# we'll be adding new modules inline as we discover them, so just keep going til we've processed them all
while modules_to_process:
modules_to_process.sort() # not strictly necessary, but nice to process things in predictable and repeatable order
py_module_name, is_ambiguous, child_is_redirected, is_optional = modules_to_process.pop(0)
if py_module_name in py_module_cache:
# this is normal; we'll often see the same module imported many times, but we only need to process it once
continue
if py_module_name[0:2] == ('ansible', 'module_utils'):
module_info = LegacyModuleUtilLocator(py_module_name, is_ambiguous=is_ambiguous,
mu_paths=module_utils_paths, child_is_redirected=child_is_redirected)
elif py_module_name[0] == 'ansible_collections':
module_info = CollectionModuleUtilLocator(py_module_name, is_ambiguous=is_ambiguous,
child_is_redirected=child_is_redirected, is_optional=is_optional)
else:
# FIXME: dot-joined result
display.warning('ModuleDepFinder improperly found a non-module_utils import %s'
% [py_module_name])
continue
# Could not find the module. Construct a helpful error message.
if not module_info.found:
if is_optional:
# this was a best-effort optional import that we couldn't find, oh well, move along...
continue
# FIXME: use dot-joined candidate names
msg = 'Could not find imported module support code for {0}. Looked for ({1})'.format(module_fqn, module_info.candidate_names_joined)
raise AnsibleError(msg)
# check the cache one more time with the module we actually found, since the name could be different than the input
# eg, imported name vs module
if module_info.fq_name_parts in py_module_cache:
continue
# compile the source, process all relevant imported modules
try:
tree = compile(module_info.source_code, '<unknown>', 'exec', ast.PyCF_ONLY_AST)
except (SyntaxError, IndentationError) as e:
raise AnsibleError("Unable to import %s due to %s" % (module_info.fq_name_parts, e.msg))
finder = ModuleDepFinder('.'.join(module_info.fq_name_parts), tree, module_info.is_package)
modules_to_process.extend(ModuleUtilsProcessEntry(m, True, False, is_optional=m in finder.optional_imports)
for m in finder.submodules if m not in py_module_cache)
# we've processed this item, add it to the output list
py_module_cache[module_info.fq_name_parts] = (module_info.source_code, module_info.output_path)
# ensure we process all ancestor package inits
accumulated_pkg_name = []
for pkg in module_info.fq_name_parts[:-1]:
accumulated_pkg_name.append(pkg) # we're accumulating this across iterations
normalized_name = tuple(accumulated_pkg_name) # extra machinations to get a hashable type (list is not)
if normalized_name not in py_module_cache:
modules_to_process.append(ModuleUtilsProcessEntry(normalized_name, False, module_info.redirected, is_optional=is_optional))
for py_module_name in py_module_cache:
py_module_file_name = py_module_cache[py_module_name][1]
zf.writestr(py_module_file_name, py_module_cache[py_module_name][0])
mu_file = to_text(py_module_file_name, errors='surrogate_or_strict')
display.vvvvv("Including module_utils file %s" % mu_file)
def _is_binary(b_module_data):
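    # Heuristic: delete every byte that commonly occurs in text from the first
    # 1 KiB of the module; any bytes left over are taken as evidence of a
    # binary module.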
textchars = bytearray(set([7, 8, 9, 10, 12, 13, 27]) | set(range(0x20, 0x100)) - set([0x7f]))
start = b_module_data[:1024]
return bool(start.translate(None, textchars))
def _get_ansible_module_fqn(module_path):
"""
Get the fully qualified name for an ansible module based on its pathname
remote_module_fqn is the fully qualified name. Like ansible.modules.system.ping
Or ansible_collections.Namespace.Collection_name.plugins.modules.ping
.. warning:: This function is for ansible modules only. It won't work for other things
(non-module plugins, etc)
"""
remote_module_fqn = None
# Is this a core module?
match = CORE_LIBRARY_PATH_RE.search(module_path)
if not match:
# Is this a module in a collection?
match = COLLECTION_PATH_RE.search(module_path)
# We can tell the FQN for core modules and collection modules
if match:
path = match.group('path')
if '.' in path:
# FQNs must be valid as python identifiers. This sanity check has failed.
# we could check other things as well
raise ValueError('Module name (or path) was not a valid python identifier')
remote_module_fqn = '.'.join(path.split('/'))
else:
# Currently we do not handle modules in roles so we can end up here for that reason
raise ValueError("Unable to determine module's fully qualified name")
return remote_module_fqn
def _add_module_to_zip(zf, remote_module_fqn, b_module_data):
"""Add a module from ansible or from an ansible collection into the module zip"""
module_path_parts = remote_module_fqn.split('.')
# Write the module
module_path = '/'.join(module_path_parts) + '.py'
zf.writestr(module_path, b_module_data)
# Write the __init__.py's necessary to get there
if module_path_parts[0] == 'ansible':
# The ansible namespace is setup as part of the module_utils setup...
start = 2
existing_paths = frozenset()
else:
# ... but ansible_collections and other toplevels are not
start = 1
existing_paths = frozenset(zf.namelist())
for idx in range(start, len(module_path_parts)):
package_path = '/'.join(module_path_parts[:idx]) + '/__init__.py'
# If a collections module uses module_utils from a collection then most packages will have already been added by recursive_finder.
if package_path in existing_paths:
continue
# Note: We don't want to include more than one ansible module in a payload at this time
# so no need to fill the __init__.py with namespace code
zf.writestr(package_path, b'')
def _find_module_utils(module_name, b_module_data, module_path, module_args, task_vars, templar, module_compression, async_timeout, become,
become_method, become_user, become_password, become_flags, environment, remote_is_local=False):
"""
Given the source of the module, convert it to a Jinja2 template to insert
module code and return whether it's a new or old style module.
"""
module_substyle = module_style = 'old'
# module_style is something important to calling code (ActionBase). It
# determines how arguments are formatted (json vs k=v) and whether
# a separate arguments file needs to be sent over the wire.
# module_substyle is extra information that's useful internally. It tells
# us what we have to look to substitute in the module files and whether
# we're using module replacer or ansiballz to format the module itself.
if _is_binary(b_module_data):
module_substyle = module_style = 'binary'
elif REPLACER in b_module_data:
# Do REPLACER before from ansible.module_utils because we need to make sure
# we substitute "from ansible.module_utils.basic import *" for REPLACER
module_style = 'new'
module_substyle = 'python'
b_module_data = b_module_data.replace(REPLACER, b'from ansible.module_utils.basic import *')
elif NEW_STYLE_PYTHON_MODULE_RE.search(b_module_data):
module_style = 'new'
module_substyle = 'python'
elif REPLACER_WINDOWS in b_module_data:
module_style = 'new'
module_substyle = 'powershell'
b_module_data = b_module_data.replace(REPLACER_WINDOWS, b'#Requires -Module Ansible.ModuleUtils.Legacy')
elif re.search(b'#Requires -Module', b_module_data, re.IGNORECASE) \
or re.search(b'#Requires -Version', b_module_data, re.IGNORECASE)\
or re.search(b'#AnsibleRequires -OSVersion', b_module_data, re.IGNORECASE) \
or re.search(b'#AnsibleRequires -Powershell', b_module_data, re.IGNORECASE) \
or re.search(b'#AnsibleRequires -CSharpUtil', b_module_data, re.IGNORECASE):
module_style = 'new'
module_substyle = 'powershell'
elif REPLACER_JSONARGS in b_module_data:
module_style = 'new'
module_substyle = 'jsonargs'
elif b'WANT_JSON' in b_module_data:
module_substyle = module_style = 'non_native_want_json'
shebang = None
# Neither old-style, non_native_want_json nor binary modules should be modified
# except for the shebang line (Done by modify_module)
if module_style in ('old', 'non_native_want_json', 'binary'):
return b_module_data, module_style, shebang
output = BytesIO()
try:
remote_module_fqn = _get_ansible_module_fqn(module_path)
except ValueError:
# Modules in roles currently are not found by the fqn heuristic so we
# fall back to this. This means that relative imports inside a module from
# a role may fail. Absolute imports should be used for future-proofing.
# People should start writing collections instead of modules in roles so we
# may never fix this
display.debug('ANSIBALLZ: Could not determine module FQN')
remote_module_fqn = 'ansible.modules.%s' % module_name
if module_substyle == 'python':
params = dict(ANSIBLE_MODULE_ARGS=module_args,)
try:
python_repred_params = repr(json.dumps(params, cls=AnsibleJSONEncoder, vault_to_text=True))
except TypeError as e:
raise AnsibleError("Unable to pass options to module, they must be JSON serializable: %s" % to_native(e))
try:
compression_method = getattr(zipfile, module_compression)
except AttributeError:
display.warning(u'Bad module compression string specified: %s. Using ZIP_STORED (no compression)' % module_compression)
compression_method = zipfile.ZIP_STORED
lookup_path = os.path.join(C.DEFAULT_LOCAL_TMP, 'ansiballz_cache')
cached_module_filename = os.path.join(lookup_path, "%s-%s" % (remote_module_fqn, module_compression))
zipdata = None
# Optimization -- don't lock if the module has already been cached
if os.path.exists(cached_module_filename):
display.debug('ANSIBALLZ: using cached module: %s' % cached_module_filename)
with open(cached_module_filename, 'rb') as module_data:
zipdata = module_data.read()
else:
if module_name in action_write_locks.action_write_locks:
display.debug('ANSIBALLZ: Using lock for %s' % module_name)
lock = action_write_locks.action_write_locks[module_name]
else:
# If the action plugin directly invokes the module (instead of
# going through a strategy) then we don't have a cross-process
# Lock specifically for this module. Use the "unexpected
# module" lock instead
display.debug('ANSIBALLZ: Using generic lock for %s' % module_name)
lock = action_write_locks.action_write_locks[None]
display.debug('ANSIBALLZ: Acquiring lock')
with lock:
display.debug('ANSIBALLZ: Lock acquired: %s' % id(lock))
# Check that no other process has created this while we were
# waiting for the lock
if not os.path.exists(cached_module_filename):
display.debug('ANSIBALLZ: Creating module')
# Create the module zip data
zipoutput = BytesIO()
zf = zipfile.ZipFile(zipoutput, mode='w', compression=compression_method)
# walk the module imports, looking for module_utils to send- they'll be added to the zipfile
recursive_finder(module_name, remote_module_fqn, b_module_data, zf)
display.debug('ANSIBALLZ: Writing module into payload')
_add_module_to_zip(zf, remote_module_fqn, b_module_data)
zf.close()
zipdata = base64.b64encode(zipoutput.getvalue())
# Write the assembled module to a temp file (write to temp
# so that no one looking for the file reads a partially
# written file)
#
# FIXME: Once split controller/remote is merged, this can be simplified to
# os.makedirs(lookup_path, exist_ok=True)
if not os.path.exists(lookup_path):
try:
# Note -- if we have a global function to setup, that would
# be a better place to run this
os.makedirs(lookup_path)
except OSError:
# Multiple processes tried to create the directory. If it still does not
# exist, raise the original exception.
if not os.path.exists(lookup_path):
raise
display.debug('ANSIBALLZ: Writing module')
with open(cached_module_filename + '-part', 'wb') as f:
f.write(zipdata)
# Rename the file into its final position in the cache so
# future users of this module can read it off the
# filesystem instead of constructing from scratch.
display.debug('ANSIBALLZ: Renaming module')
os.rename(cached_module_filename + '-part', cached_module_filename)
display.debug('ANSIBALLZ: Done creating module')
if zipdata is None:
display.debug('ANSIBALLZ: Reading module after lock')
# Another process wrote the file while we were waiting for
# the write lock. Go ahead and read the data from disk
# instead of re-creating it.
try:
with open(cached_module_filename, 'rb') as f:
zipdata = f.read()
except IOError:
raise AnsibleError('A different worker process failed to create module file. '
'Look at traceback for that process for debugging information.')
zipdata = to_text(zipdata, errors='surrogate_or_strict')
o_interpreter, o_args = _extract_interpreter(b_module_data)
if o_interpreter is None:
o_interpreter = u'/usr/bin/python'
shebang, interpreter = _get_shebang(o_interpreter, task_vars, templar, o_args, remote_is_local=remote_is_local)
# FUTURE: the module cache entry should be invalidated if we got this value from a host-dependent source
rlimit_nofile = C.config.get_config_value('PYTHON_MODULE_RLIMIT_NOFILE', variables=task_vars)
if not isinstance(rlimit_nofile, int):
rlimit_nofile = int(templar.template(rlimit_nofile))
if rlimit_nofile:
rlimit = ANSIBALLZ_RLIMIT_TEMPLATE % dict(
rlimit_nofile=rlimit_nofile,
)
else:
rlimit = ''
coverage_config = os.environ.get('_ANSIBLE_COVERAGE_CONFIG')
if coverage_config:
coverage_output = os.environ['_ANSIBLE_COVERAGE_OUTPUT']
if coverage_output:
# Enable code coverage analysis of the module.
# This feature is for internal testing and may change without notice.
coverage = ANSIBALLZ_COVERAGE_TEMPLATE % dict(
coverage_config=coverage_config,
coverage_output=coverage_output,
)
else:
# Verify coverage is available without importing it.
# This will detect when a module would fail with coverage enabled with minimal overhead.
coverage = ANSIBALLZ_COVERAGE_CHECK_TEMPLATE
else:
coverage = ''
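        # These wall-clock fields are substituted into the wrapper's ZipInfo
        # above; see the note there about the ZIP format's 1980-2107 limit.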
now = datetime.datetime.now(datetime.timezone.utc)
output.write(to_bytes(ACTIVE_ANSIBALLZ_TEMPLATE % dict(
zipdata=zipdata,
ansible_module=module_name,
module_fqn=remote_module_fqn,
params=python_repred_params,
shebang=shebang,
coding=ENCODING_STRING,
year=now.year,
month=now.month,
day=now.day,
hour=now.hour,
minute=now.minute,
second=now.second,
coverage=coverage,
rlimit=rlimit,
)))
b_module_data = output.getvalue()
elif module_substyle == 'powershell':
# Powershell/winrm don't actually make use of shebang so we can
# safely set this here. If we let the fallback code handle this
# it can fail in the presence of the UTF8 BOM commonly added by
# Windows text editors
shebang = u'#!powershell'
# create the common exec wrapper payload and set that as the module_data
# bytes
b_module_data = ps_manifest._create_powershell_wrapper(
b_module_data, module_path, module_args, environment,
async_timeout, become, become_method, become_user, become_password,
become_flags, module_substyle, task_vars, remote_module_fqn
)
elif module_substyle == 'jsonargs':
module_args_json = to_bytes(json.dumps(module_args, cls=AnsibleJSONEncoder, vault_to_text=True))
# these strings could be included in a third-party module but
# officially they were included in the 'basic' snippet for new-style
# python modules (which has been replaced with something else in
# ansiballz) If we remove them from jsonargs-style module replacer
# then we can remove them everywhere.
python_repred_args = to_bytes(repr(module_args_json))
b_module_data = b_module_data.replace(REPLACER_VERSION, to_bytes(repr(__version__)))
b_module_data = b_module_data.replace(REPLACER_COMPLEX, python_repred_args)
b_module_data = b_module_data.replace(REPLACER_SELINUX, to_bytes(','.join(C.DEFAULT_SELINUX_SPECIAL_FS)))
# The main event -- substitute the JSON args string into the module
b_module_data = b_module_data.replace(REPLACER_JSONARGS, module_args_json)
facility = b'syslog.' + to_bytes(task_vars.get('ansible_syslog_facility', C.DEFAULT_SYSLOG_FACILITY), errors='surrogate_or_strict')
b_module_data = b_module_data.replace(b'syslog.LOG_USER', facility)
return (b_module_data, module_style, shebang)
def _extract_interpreter(b_module_data):
"""
Used to extract shebang expression from binary module data and return a text
string with the shebang, or None if no shebang is detected.
"""
interpreter = None
args = []
b_lines = b_module_data.split(b"\n", 1)
if b_lines[0].startswith(b"#!"):
b_shebang = b_lines[0].strip()
# shlex.split needs text on Python 3
cli_split = shlex.split(to_text(b_shebang[2:], errors='surrogate_or_strict'))
# convert args to text
cli_split = [to_text(a, errors='surrogate_or_strict') for a in cli_split]
interpreter = cli_split[0]
args = cli_split[1:]
return interpreter, args
def modify_module(module_name, module_path, module_args, templar, task_vars=None, module_compression='ZIP_STORED', async_timeout=0, become=False,
become_method=None, become_user=None, become_password=None, become_flags=None, environment=None, remote_is_local=False):
"""
Used to insert chunks of code into modules before transfer rather than
doing regular python imports. This allows for more efficient transfer in
a non-bootstrapping scenario by not moving extra files over the wire and
also takes care of embedding arguments in the transferred modules.
This version is done in such a way that local imports can still be
used in the module code, so IDEs don't have to be aware of what is going on.
Example:
from ansible.module_utils.basic import *
... will result in the insertion of basic.py into the module
from the module_utils/ directory in the source tree.
For powershell, this code effectively no-ops, as the exec wrapper requires access to a number of
properties not available here.
"""
task_vars = {} if task_vars is None else task_vars
environment = {} if environment is None else environment
with open(module_path, 'rb') as f:
# read in the module source
b_module_data = f.read()
(b_module_data, module_style, shebang) = _find_module_utils(module_name, b_module_data, module_path, module_args, task_vars, templar, module_compression,
async_timeout=async_timeout, become=become, become_method=become_method,
become_user=become_user, become_password=become_password, become_flags=become_flags,
environment=environment, remote_is_local=remote_is_local)
if module_style == 'binary':
return (b_module_data, module_style, to_text(shebang, nonstring='passthru'))
elif shebang is None:
interpreter, args = _extract_interpreter(b_module_data)
# No interpreter/shebang, assume a binary module?
if interpreter is not None:
shebang, new_interpreter = _get_shebang(interpreter, task_vars, templar, args, remote_is_local=remote_is_local)
# update shebang
b_lines = b_module_data.split(b"\n", 1)
if interpreter != new_interpreter:
b_lines[0] = to_bytes(shebang, errors='surrogate_or_strict', nonstring='passthru')
if os.path.basename(interpreter).startswith(u'python'):
b_lines.insert(1, b_ENCODING_STRING)
b_module_data = b"\n".join(b_lines)
return (b_module_data, module_style, shebang)
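# Illustrative sketch (not part of the original module): a minimal call to
# modify_module(), assuming a configured Templar and a new-style python
# module on disk. All names and paths below are hypothetical.
#
#   b_data, style, shebang = modify_module(
#       'ping', '/path/to/ping.py', {'data': 'pong'}, templar,
#       task_vars={'ansible_python_interpreter': '/usr/bin/python3'},
#   )
#   # style is one of 'new', 'old', 'non_native_want_json', 'binary';
#   # b_data is the wrapped payload ready for transfer to the target.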
def get_action_args_with_defaults(action, args, defaults, templar, action_groups=None):
# Get the list of groups that contain this action
if action_groups is None:
        msg = (
            "Finding module_defaults for action %s. "
            "The caller has not passed the action_groups, so any "
            "group defaults that may apply to this action will be ignored."
        )
        display.warning(msg=msg % (action,))
group_names = []
else:
group_names = action_groups.get(action, [])
tmp_args = {}
module_defaults = {}
# Merge latest defaults into dict, since they are a list of dicts
if isinstance(defaults, list):
for default in defaults:
module_defaults.update(default)
# module_defaults keys are static, but the values may be templated
module_defaults = templar.template(module_defaults)
for default in module_defaults:
if default.startswith('group/'):
group_name = default.split('group/')[-1]
if group_name in group_names:
tmp_args.update((module_defaults.get('group/%s' % group_name) or {}).copy())
# handle specific action defaults
tmp_args.update(module_defaults.get(action, {}).copy())
# direct args override all
tmp_args.update(args)
return tmp_args
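# Illustrative sketch (not part of the original module): the precedence that
# get_action_args_with_defaults() implements -- group defaults are overridden
# by action-specific defaults, which are overridden by direct task args.
# All values below are hypothetical.
#
#   action_groups = {'my.col.mod': ['acme']}
#   defaults = [
#       {'group/acme': {'url': 'https://example.com', 'timeout': 10}},
#       {'my.col.mod': {'timeout': 30}},
#   ]
#   args = {'timeout': 60}
#   # merged result: {'url': 'https://example.com', 'timeout': 60}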
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,459 |
Variable in handlers of nested included role is undefined if left set to default
|
### Summary
When I reference a variable in a handler of an included role B, which is itself included within another role A, and role A is called from my playbook without specifying a value for the variable (hence leaving it set to its default), Ansible gives me an error saying that the variable is undefined, while it should fall back to the default value from role A. This is difficult to explain in words, but easy to understand from the example below.
### Issue Type
Bug Report
### Component Name
default variable values handling
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.4]
config file = None
configured module search path = ['/home/codespace/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/codespace/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/codespace/.ansible/collections:/usr/share/ansible/collections
executable location = /home/codespace/.local/bin/ansible
python version = 3.10.4 (main, Mar 13 2023, 19:44:25) [GCC 9.4.0] (/usr/local/python/3.10.4/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
```
$ cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.5 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.5 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
```
### Steps to Reproduce
File `playbook.yml`:
```yaml
- hosts: localhost
connection: local
tasks:
- ansible.builtin.include_role: { name: ./role01 }
```
File `role01/defaults/main.yml`:
```yaml
myvar01: value-for-myvar01-from-defaults
```
File `role01/tasks/main.yml`:
```yaml
- ansible.builtin.debug:
msg: myvar01 in role01 tasks is {{ myvar01 }}
- ansible.builtin.include_role: { name: ./role02 }
vars:
myvar02: "{{ myvar01 }}"
```
File `role02/defaults/main.yml`:
```yaml
myvar02: value-for-myvar02-from-defaults
```
File `role02/handlers/main.yml`:
```yaml
- name: myhandler01
ansible.builtin.debug:
msg: myvar02 in role02 handlers is {{ myvar02 }}
```
File `role02/tasks/main.yml`:
```yaml
- ansible.builtin.debug:
msg: myvar02 in role02 tasks is {{ myvar02 }}
- ansible.builtin.command:
cmd: uptime
changed_when: true
notify: myhandler01
- name: Force all notified handlers to run at this point
ansible.builtin.meta: flush_handlers
```
Then run `ansible-playbook playbook.yml` to see the result.
### Expected Results
I expect the value of the variable to be `value-for-myvar01-from-defaults` instead of undefined, like this:
```
$ ansible-playbook playbook.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Main play] *********************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ***************************************************************************************************************************************************************************************************
ok: [localhost]
TASK [ansible.builtin.include_role : ./role01] ***************************************************************************************************************************************************************************
TASK [./role01 : ansible.builtin.debug] **********************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "myvar01 in role01 tasks is value-for-myvar01-from-defaults"
}
TASK [ansible.builtin.include_role : ./role02] ***************************************************************************************************************************************************************************
TASK [./role02 : ansible.builtin.debug] **********************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "myvar02 in role02 tasks is value-for-myvar01-from-defaults"
}
TASK [./role02 : ansible.builtin.command] ********************************************************************************************************************************************************************************
changed: [localhost]
TASK [./role02 : Force all notified handlers to run at this point] *******************************************************************************************************************************************************
RUNNING HANDLER [./role02 : myhandler01] *********************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "myvar02 in role02 handlers is value-for-myvar01-from-defaults"
}
PLAY RECAP ***************************************************************************************************************************************************************************************************************
localhost : ok=5 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Actual Results
```console
$ ansible-playbook playbook.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Main play] *********************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ***************************************************************************************************************************************************************************************************
ok: [localhost]
TASK [ansible.builtin.include_role : ./role01] ***************************************************************************************************************************************************************************
TASK [./role01 : ansible.builtin.debug] **********************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "myvar01 in role01 tasks is value-for-myvar01-from-defaults"
}
TASK [ansible.builtin.include_role : ./role02] ***************************************************************************************************************************************************************************
TASK [./role02 : ansible.builtin.debug] **********************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "myvar02 in role02 tasks is value-for-myvar01-from-defaults"
}
TASK [./role02 : ansible.builtin.command] ********************************************************************************************************************************************************************************
changed: [localhost]
TASK [./role02 : Force all notified handlers to run at this point] *******************************************************************************************************************************************************
RUNNING HANDLER [./role02 : myhandler01] *********************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: {{ myvar01 }}: 'myvar01' is undefined. 'myvar01' is undefined. {{ myvar01 }}: 'myvar01' is undefined. 'myvar01' is undefined\n\nThe error appears to be in '/workspaces/test-ansible-nested/role02/handlers/main.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: myhandler01\n ^ here\n"}
PLAY RECAP ***************************************************************************************************************************************************************************************************************
localhost : ok=4 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [x] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80459
|
https://github.com/ansible/ansible/pull/81524
|
a2673cb56438400bc02f89b58316597e517afb52
|
98f16278172a8d54180ab203a8fc85b2cfe477d9
| 2023-04-10T00:34:51Z |
python
| 2023-08-17T19:14:20Z |
changelogs/fragments/80459-handlers-nested-includes-vars.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,459 |
Variable in handlers of nested included role is undefined if left set to default
|
### Summary
When I reference a variable in a handler of an included role B, which is itself included within another role A, and role A is called from my playbook without specifying a value for the variable (hence leaving it set to its default), Ansible gives me an error saying that the variable is undefined, while it should fall back to the default value from role A. This is difficult to explain in words, but easy to understand from the example below.
### Issue Type
Bug Report
### Component Name
default variable values handling
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.4]
config file = None
configured module search path = ['/home/codespace/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/codespace/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/codespace/.ansible/collections:/usr/share/ansible/collections
executable location = /home/codespace/.local/bin/ansible
python version = 3.10.4 (main, Mar 13 2023, 19:44:25) [GCC 9.4.0] (/usr/local/python/3.10.4/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
```
$ cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.5 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.5 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
```
### Steps to Reproduce
File `playbook.yml`:
```yaml
- hosts: localhost
connection: local
tasks:
- ansible.builtin.include_role: { name: ./role01 }
```
File `role01/defaults/main.yml`:
```yaml
myvar01: value-for-myvar01-from-defaults
```
File `role01/tasks/main.yml`:
```yaml
- ansible.builtin.debug:
msg: myvar01 in role01 tasks is {{ myvar01 }}
- ansible.builtin.include_role: { name: ./role02 }
vars:
myvar02: "{{ myvar01 }}"
```
File `role02/defaults/main.yml`:
```yaml
myvar02: value-for-myvar02-from-defaults
```
File `role02/handlers/main.yml`:
```yaml
- name: myhandler01
ansible.builtin.debug:
msg: myvar02 in role02 handlers is {{ myvar02 }}
```
File `role02/tasks/main.yml`:
```yaml
- ansible.builtin.debug:
msg: myvar02 in role02 tasks is {{ myvar02 }}
- ansible.builtin.command:
cmd: uptime
changed_when: true
notify: myhandler01
- name: Force all notified handlers to run at this point
ansible.builtin.meta: flush_handlers
```
Then run `ansible-playbook playbook.yml` to see the result.
### Expected Results
I expect the value of the variable to be `value-for-myvar01-from-defaults` instead of undefined, like this:
```
$ ansible-playbook playbook.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Main play] *********************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ***************************************************************************************************************************************************************************************************
ok: [localhost]
TASK [ansible.builtin.include_role : ./role01] ***************************************************************************************************************************************************************************
TASK [./role01 : ansible.builtin.debug] **********************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "myvar01 in role01 tasks is value-for-myvar01-from-defaults"
}
TASK [ansible.builtin.include_role : ./role02] ***************************************************************************************************************************************************************************
TASK [./role02 : ansible.builtin.debug] **********************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "myvar02 in role02 tasks is value-for-myvar01-from-defaults"
}
TASK [./role02 : ansible.builtin.command] ********************************************************************************************************************************************************************************
changed: [localhost]
TASK [./role02 : Force all notified handlers to run at this point] *******************************************************************************************************************************************************
RUNNING HANDLER [./role02 : myhandler01] *********************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "myvar02 in role02 handlers is value-for-myvar01-from-defaults"
}
PLAY RECAP ***************************************************************************************************************************************************************************************************************
localhost : ok=5 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Actual Results
```console
$ ansible-playbook playbook.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Main play] *********************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ***************************************************************************************************************************************************************************************************
ok: [localhost]
TASK [ansible.builtin.include_role : ./role01] ***************************************************************************************************************************************************************************
TASK [./role01 : ansible.builtin.debug] **********************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "myvar01 in role01 tasks is value-for-myvar01-from-defaults"
}
TASK [ansible.builtin.include_role : ./role02] ***************************************************************************************************************************************************************************
TASK [./role02 : ansible.builtin.debug] **********************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "myvar02 in role02 tasks is value-for-myvar01-from-defaults"
}
TASK [./role02 : ansible.builtin.command] ********************************************************************************************************************************************************************************
changed: [localhost]
TASK [./role02 : Force all notified handlers to run at this point] *******************************************************************************************************************************************************
RUNNING HANDLER [./role02 : myhandler01] *********************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: {{ myvar01 }}: 'myvar01' is undefined. 'myvar01' is undefined. {{ myvar01 }}: 'myvar01' is undefined. 'myvar01' is undefined\n\nThe error appears to be in '/workspaces/test-ansible-nested/role02/handlers/main.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: myhandler01\n ^ here\n"}
PLAY RECAP ***************************************************************************************************************************************************************************************************************
localhost : ok=4 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [x] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80459
|
https://github.com/ansible/ansible/pull/81524
|
a2673cb56438400bc02f89b58316597e517afb52
|
98f16278172a8d54180ab203a8fc85b2cfe477d9
| 2023-04-10T00:34:51Z |
python
| 2023-08-17T19:14:20Z |
lib/ansible/playbook/role_include.py
|
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from os.path import basename
import ansible.constants as C
from ansible.errors import AnsibleParserError
from ansible.playbook.attribute import NonInheritableFieldAttribute
from ansible.playbook.task_include import TaskInclude
from ansible.playbook.role import Role
from ansible.playbook.role.include import RoleInclude
from ansible.utils.display import Display
from ansible.module_utils.six import string_types
from ansible.template import Templar
__all__ = ['IncludeRole']
display = Display()
class IncludeRole(TaskInclude):
"""
    A Role include is derived from a regular role to handle the special
    circumstances related to the `- include_role: ...` task.
"""
BASE = frozenset(('name', 'role')) # directly assigned
FROM_ARGS = frozenset(('tasks_from', 'vars_from', 'defaults_from', 'handlers_from')) # used to populate from dict in role
OTHER_ARGS = frozenset(('apply', 'public', 'allow_duplicates', 'rolespec_validate')) # assigned to matching property
VALID_ARGS = BASE | FROM_ARGS | OTHER_ARGS # all valid args
# =================================================================================
# ATTRIBUTES
# private as this is a 'module options' vs a task property
allow_duplicates = NonInheritableFieldAttribute(isa='bool', default=True, private=True, always_post_validate=True)
public = NonInheritableFieldAttribute(isa='bool', default=False, private=True, always_post_validate=True)
rolespec_validate = NonInheritableFieldAttribute(isa='bool', default=True, private=True, always_post_validate=True)
def __init__(self, block=None, role=None, task_include=None):
super(IncludeRole, self).__init__(block=block, role=role, task_include=task_include)
self._from_files = {}
self._parent_role = role
self._role_name = None
self._role_path = None
def get_name(self):
''' return the name of the task '''
return self.name or "%s : %s" % (self.action, self._role_name)
def get_block_list(self, play=None, variable_manager=None, loader=None):
# only need play passed in when dynamic
if play is None:
myplay = self._parent._play
else:
myplay = play
ri = RoleInclude.load(self._role_name, play=myplay, variable_manager=variable_manager, loader=loader, collection_list=self.collections)
ri.vars |= self.vars
if variable_manager is not None:
available_variables = variable_manager.get_vars(play=myplay, task=self)
else:
available_variables = {}
templar = Templar(loader=loader, variables=available_variables)
from_files = templar.template(self._from_files)
# build role
actual_role = Role.load(ri, myplay, parent_role=self._parent_role, from_files=from_files,
from_include=True, validate=self.rolespec_validate, public=self.public)
actual_role._metadata.allow_duplicates = self.allow_duplicates
if self.statically_loaded or self.public:
myplay.roles.append(actual_role)
# save this for later use
self._role_path = actual_role._role_path
# compile role with parent roles as dependencies to ensure they inherit
# variables
dep_chain = actual_role.get_dep_chain()
p_block = self.build_parent_block()
# collections value is not inherited; override with the value we calculated during role setup
p_block.collections = actual_role.collections
blocks = actual_role.compile(play=myplay, dep_chain=dep_chain)
for b in blocks:
b._parent = p_block
# HACK: parent inheritance doesn't seem to have a way to handle this intermediate override until squashed/finalized
b.collections = actual_role.collections
        # update the available handlers in the play
handlers = actual_role.get_handler_blocks(play=myplay)
for h in handlers:
h._parent = p_block
myplay.handlers = myplay.handlers + handlers
return blocks, handlers
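    # Illustrative sketch (not part of the original class): what
    # get_block_list() above wires up for a dynamic include. Assuming role02
    # is included from role01, role02's handler blocks receive the include
    # task's parent block as their parent, so the dependency chain (and the
    # including role's variables) stay visible to them. Names are hypothetical.
    #
    #   blocks, handlers = ir.get_block_list(play=play, variable_manager=vm, loader=loader)
    #   # play.handlers now also contains role02's handler blocks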
@staticmethod
def load(data, block=None, role=None, task_include=None, variable_manager=None, loader=None):
ir = IncludeRole(block, role, task_include=task_include).load_data(data, variable_manager=variable_manager, loader=loader)
        # dynamic role!
if ir.action in C._ACTION_INCLUDE_ROLE:
ir.static = False
# Validate options
my_arg_names = frozenset(ir.args.keys())
# name is needed, or use role as alias
ir._role_name = ir.args.get('name', ir.args.get('role'))
if ir._role_name is None:
raise AnsibleParserError("'name' is a required field for %s." % ir.action, obj=data)
        # public is only a valid argument for includes; imports are always 'public' (after they run)
if 'public' in ir.args and ir.action not in C._ACTION_INCLUDE_ROLE:
raise AnsibleParserError('Invalid options for %s: public' % ir.action, obj=data)
# validate bad args, otherwise we silently ignore
bad_opts = my_arg_names.difference(IncludeRole.VALID_ARGS)
if bad_opts:
raise AnsibleParserError('Invalid options for %s: %s' % (ir.action, ','.join(list(bad_opts))), obj=data)
# build options for role include/import tasks
for key in my_arg_names.intersection(IncludeRole.FROM_ARGS):
from_key = key.removesuffix('_from')
args_value = ir.args.get(key)
if not isinstance(args_value, string_types):
raise AnsibleParserError('Expected a string for %s but got %s instead' % (key, type(args_value)))
ir._from_files[from_key] = basename(args_value)
# apply is only valid for includes, not imports as they inherit directly
apply_attrs = ir.args.get('apply', {})
if apply_attrs and ir.action not in C._ACTION_INCLUDE_ROLE:
raise AnsibleParserError('Invalid options for %s: apply' % ir.action, obj=data)
elif not isinstance(apply_attrs, dict):
raise AnsibleParserError('Expected a dict for apply but got %s instead' % type(apply_attrs), obj=data)
# manual list as otherwise the options would set other task parameters we don't want.
for option in my_arg_names.intersection(IncludeRole.OTHER_ARGS):
setattr(ir, option, ir.args.get(option))
return ir
def copy(self, exclude_parent=False, exclude_tasks=False):
new_me = super(IncludeRole, self).copy(exclude_parent=exclude_parent, exclude_tasks=exclude_tasks)
new_me.statically_loaded = self.statically_loaded
new_me._from_files = self._from_files.copy()
new_me._parent_role = self._parent_role
new_me._role_name = self._role_name
new_me._role_path = self._role_path
return new_me
def get_include_params(self):
v = super(IncludeRole, self).get_include_params()
if self._parent_role:
v |= self._parent_role.get_role_params()
v.setdefault('ansible_parent_role_names', []).insert(0, self._parent_role.get_name())
v.setdefault('ansible_parent_role_paths', []).insert(0, self._parent_role._role_path)
return v
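# Illustrative sketch (not part of the original module): when this include
# runs inside a parent role, get_include_params() above layers the parent
# role's params over the regular include params and records the ancestry.
# Assuming role01 dynamically includes role02, and 'ir' is a hypothetical
# IncludeRole instance for that include:
#
#   v = ir.get_include_params()
#   # v['ansible_parent_role_names'][0] == 'role01'
#   # v['ansible_parent_role_paths'][0] == '/path/to/roles/role01'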
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,459 |
Variable in handlers of nested included role is undefined if left set to default
|
### Summary
When I reference a variable in a handler of an included role B, which is itself included within another role A, and role A is called from my playbook without specifying a value for the variable (hence leaving it set to its default), Ansible gives me an error saying that the variable is undefined, while it should fall back to the default value from role A. This is difficult to explain in words, but easy to understand from the example below.
### Issue Type
Bug Report
### Component Name
default variable values handling
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.4]
config file = None
configured module search path = ['/home/codespace/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/codespace/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/codespace/.ansible/collections:/usr/share/ansible/collections
executable location = /home/codespace/.local/bin/ansible
python version = 3.10.4 (main, Mar 13 2023, 19:44:25) [GCC 9.4.0] (/usr/local/python/3.10.4/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
```
$ cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.5 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.5 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
```
### Steps to Reproduce
File `playbook.yml`:
```yaml
- hosts: localhost
connection: local
tasks:
- ansible.builtin.include_role: { name: ./role01 }
```
File `role01/defaults/main.yml`:
```yaml
myvar01: value-for-myvar01-from-defaults
```
File `role01/tasks/main.yml`:
```yaml
- ansible.builtin.debug:
msg: myvar01 in role01 tasks is {{ myvar01 }}
- ansible.builtin.include_role: { name: ./role02 }
vars:
myvar02: "{{ myvar01 }}"
```
File `role02/defaults/main.yml`:
```yaml
myvar02: value-for-myvar02-from-defaults
```
File `role02/handlers/main.yml`:
```yaml
- name: myhandler01
ansible.builtin.debug:
msg: myvar02 in role02 handlers is {{ myvar02 }}
```
File `role02/tasks/main.yml`:
```yaml
- ansible.builtin.debug:
msg: myvar02 in role02 tasks is {{ myvar02 }}
- ansible.builtin.command:
cmd: uptime
changed_when: true
notify: myhandler01
- name: Force all notified handlers to run at this point
ansible.builtin.meta: flush_handlers
```
Then run `ansible-playbook playbook.yml` to see the result.
### Expected Results
I expect the value of the variable to be `value-for-myvar01-from-defaults` instead of undefined, like this:
```
$ ansible-playbook playbook.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Main play] *********************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ***************************************************************************************************************************************************************************************************
ok: [localhost]
TASK [ansible.builtin.include_role : ./role01] ***************************************************************************************************************************************************************************
TASK [./role01 : ansible.builtin.debug] **********************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "myvar01 in role01 tasks is value-for-myvar01-from-defaults"
}
TASK [ansible.builtin.include_role : ./role02] ***************************************************************************************************************************************************************************
TASK [./role02 : ansible.builtin.debug] **********************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "myvar02 in role02 tasks is value-for-myvar01-from-defaults"
}
TASK [./role02 : ansible.builtin.command] ********************************************************************************************************************************************************************************
changed: [localhost]
TASK [./role02 : Force all notified handlers to run at this point] *******************************************************************************************************************************************************
RUNNING HANDLER [./role02 : myhandler01] *********************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "myvar02 in role02 handlers is value-for-myvar01-from-defaults"
}
PLAY RECAP ***************************************************************************************************************************************************************************************************************
localhost : ok=5 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Actual Results
```console
$ ansible-playbook playbook.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Main play] *********************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ***************************************************************************************************************************************************************************************************
ok: [localhost]
TASK [ansible.builtin.include_role : ./role01] ***************************************************************************************************************************************************************************
TASK [./role01 : ansible.builtin.debug] **********************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "myvar01 in role01 tasks is value-for-myvar01-from-defaults"
}
TASK [ansible.builtin.include_role : ./role02] ***************************************************************************************************************************************************************************
TASK [./role02 : ansible.builtin.debug] **********************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "myvar02 in role02 tasks is value-for-myvar01-from-defaults"
}
TASK [./role02 : ansible.builtin.command] ********************************************************************************************************************************************************************************
changed: [localhost]
TASK [./role02 : Force all notified handlers to run at this point] *******************************************************************************************************************************************************
RUNNING HANDLER [./role02 : myhandler01] *********************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: {{ myvar01 }}: 'myvar01' is undefined. 'myvar01' is undefined. {{ myvar01 }}: 'myvar01' is undefined. 'myvar01' is undefined\n\nThe error appears to be in '/workspaces/test-ansible-nested/role02/handlers/main.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: myhandler01\n ^ here\n"}
PLAY RECAP ***************************************************************************************************************************************************************************************************************
localhost : ok=4 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [x] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80459
|
https://github.com/ansible/ansible/pull/81524
|
a2673cb56438400bc02f89b58316597e517afb52
|
98f16278172a8d54180ab203a8fc85b2cfe477d9
| 2023-04-10T00:34:51Z |
python
| 2023-08-17T19:14:20Z |
test/integration/targets/handlers/roles/r1-dep_chain-vars/defaults/main.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,459 |
Variable in handlers of nested included role is undefined if left set to default
|
### Summary
When I reference a variable in a handler of an included role B, which is itself included within another role A, and role A is called from my playbook without specifying a value for the variable (hence leaving it set to its default), Ansible gives me an error saying that the variable is undefined, while it should fall back to the default value from role A. This is difficult to explain in words, but easy to understand from the example below.
### Issue Type
Bug Report
### Component Name
default variable values handling
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.4]
config file = None
configured module search path = ['/home/codespace/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/codespace/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/codespace/.ansible/collections:/usr/share/ansible/collections
executable location = /home/codespace/.local/bin/ansible
python version = 3.10.4 (main, Mar 13 2023, 19:44:25) [GCC 9.4.0] (/usr/local/python/3.10.4/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
```
$ cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.5 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.5 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
```
### Steps to Reproduce
File `playbook.yml`:
```yaml
- hosts: localhost
connection: local
tasks:
- ansible.builtin.include_role: { name: ./role01 }
```
File `role01/defaults/main.yml`:
```yaml
myvar01: value-for-myvar01-from-defaults
```
File `role01/tasks/main.yml`:
```yaml
- ansible.builtin.debug:
msg: myvar01 in role01 tasks is {{ myvar01 }}
- ansible.builtin.include_role: { name: ./role02 }
vars:
myvar02: "{{ myvar01 }}"
```
File `role02/defaults/main.yml`:
```yaml
myvar02: value-for-myvar02-from-defaults
```
File `role02/handlers/main.yml`:
```yaml
- name: myhandler01
ansible.builtin.debug:
msg: myvar02 in role02 handlers is {{ myvar02 }}
```
File `role02/tasks/main.yml`:
```yaml
- ansible.builtin.debug:
msg: myvar02 in role02 tasks is {{ myvar02 }}
- ansible.builtin.command:
cmd: uptime
changed_when: true
notify: myhandler01
- name: Force all notified handlers to run at this point
ansible.builtin.meta: flush_handlers
```
Then run `ansible-playbook playbook.yml` to see the result.
### Expected Results
I expect the value of the variable to be `value-for-myvar01-from-defaults` instead of undefined, like this:
```
$ ansible-playbook playbook.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Main play] *********************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ***************************************************************************************************************************************************************************************************
ok: [localhost]
TASK [ansible.builtin.include_role : ./role01] ***************************************************************************************************************************************************************************
TASK [./role01 : ansible.builtin.debug] **********************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "myvar01 in role01 tasks is value-for-myvar01-from-defaults"
}
TASK [ansible.builtin.include_role : ./role02] ***************************************************************************************************************************************************************************
TASK [./role02 : ansible.builtin.debug] **********************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "myvar02 in role02 tasks is value-for-myvar01-from-defaults"
}
TASK [./role02 : ansible.builtin.command] ********************************************************************************************************************************************************************************
changed: [localhost]
TASK [./role02 : Force all notified handlers to run at this point] *******************************************************************************************************************************************************
RUNNING HANDLER [./role02 : myhandler01] *********************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "myvar02 in role02 handlers is value-for-myvar01-from-defaults"
}
PLAY RECAP ***************************************************************************************************************************************************************************************************************
localhost : ok=5 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Actual Results
```console
$ ansible-playbook playbook.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Main play] *********************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ***************************************************************************************************************************************************************************************************
ok: [localhost]
TASK [ansible.builtin.include_role : ./role01] ***************************************************************************************************************************************************************************
TASK [./role01 : ansible.builtin.debug] **********************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "myvar01 in role01 tasks is value-for-myvar01-from-defaults"
}
TASK [ansible.builtin.include_role : ./role02] ***************************************************************************************************************************************************************************
TASK [./role02 : ansible.builtin.debug] **********************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "myvar02 in role02 tasks is value-for-myvar01-from-defaults"
}
TASK [./role02 : ansible.builtin.command] ********************************************************************************************************************************************************************************
changed: [localhost]
TASK [./role02 : Force all notified handlers to run at this point] *******************************************************************************************************************************************************
RUNNING HANDLER [./role02 : myhandler01] *********************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: {{ myvar01 }}: 'myvar01' is undefined. 'myvar01' is undefined. {{ myvar01 }}: 'myvar01' is undefined. 'myvar01' is undefined\n\nThe error appears to be in '/workspaces/test-ansible-nested/role02/handlers/main.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: myhandler01\n ^ here\n"}
PLAY RECAP ***************************************************************************************************************************************************************************************************************
localhost : ok=4 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [x] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80459
|
https://github.com/ansible/ansible/pull/81524
|
a2673cb56438400bc02f89b58316597e517afb52
|
98f16278172a8d54180ab203a8fc85b2cfe477d9
| 2023-04-10T00:34:51Z |
python
| 2023-08-17T19:14:20Z |
test/integration/targets/handlers/roles/r1-dep_chain-vars/tasks/main.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,459 |
Variable in handlers of nested included role is undefined if left set to default
|
### Summary
When I reference a variable in a handler of an included role B, which is itself included within another role A, and role A is called from my playbook without specifying a value for the variable (hence leaving it set to its default), Ansible gives me an error saying that the variable is undefined, while it should fall back to the default value from role A. This is difficult to explain in words, but easy to understand from the example below.
### Issue Type
Bug Report
### Component Name
default variable values handling
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.4]
config file = None
configured module search path = ['/home/codespace/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/codespace/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/codespace/.ansible/collections:/usr/share/ansible/collections
executable location = /home/codespace/.local/bin/ansible
python version = 3.10.4 (main, Mar 13 2023, 19:44:25) [GCC 9.4.0] (/usr/local/python/3.10.4/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
```
$ cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.5 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.5 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
```
### Steps to Reproduce
File `playbook.yml`:
```yaml
- hosts: localhost
connection: local
tasks:
- ansible.builtin.include_role: { name: ./role01 }
```
File `role01/defaults/main.yml`:
```yaml
myvar01: value-for-myvar01-from-defaults
```
File `role01/tasks/main.yml`:
```yaml
- ansible.builtin.debug:
msg: myvar01 in role01 tasks is {{ myvar01 }}
- ansible.builtin.include_role: { name: ./role02 }
vars:
myvar02: "{{ myvar01 }}"
```
File `role02/defaults/main.yml`:
```yaml
myvar02: value-for-myvar02-from-defaults
```
File `role02/handlers/main.yml`:
```yaml
- name: myhandler01
ansible.builtin.debug:
msg: myvar02 in role02 handlers is {{ myvar02 }}
```
File `role02/tasks/main.yml`:
```yaml
- ansible.builtin.debug:
msg: myvar02 in role02 tasks is {{ myvar02 }}
- ansible.builtin.command:
cmd: uptime
changed_when: true
notify: myhandler01
- name: Force all notified handlers to run at this point
ansible.builtin.meta: flush_handlers
```
Then run `ansible-playbook playbook.yml` to see the result.
### Expected Results
I expect the value of the variable to be `value-for-myvar01-from-defaults` instead of undefined, like this:
```
$ ansible-playbook playbook.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Main play] *********************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ***************************************************************************************************************************************************************************************************
ok: [localhost]
TASK [ansible.builtin.include_role : ./role01] ***************************************************************************************************************************************************************************
TASK [./role01 : ansible.builtin.debug] **********************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "myvar01 in role01 tasks is value-for-myvar01-from-defaults"
}
TASK [ansible.builtin.include_role : ./role02] ***************************************************************************************************************************************************************************
TASK [./role02 : ansible.builtin.debug] **********************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "myvar02 in role02 tasks is value-for-myvar01-from-defaults"
}
TASK [./role02 : ansible.builtin.command] ********************************************************************************************************************************************************************************
changed: [localhost]
TASK [./role02 : Force all notified handlers to run at this point] *******************************************************************************************************************************************************
RUNNING HANDLER [./role02 : myhandler01] *********************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "myvar02 in role02 handlers is value-for-myvar01-from-defaults"
}
PLAY RECAP ***************************************************************************************************************************************************************************************************************
localhost : ok=5 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Actual Results
```console
$ ansible-playbook playbook.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Main play] *********************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ***************************************************************************************************************************************************************************************************
ok: [localhost]
TASK [ansible.builtin.include_role : ./role01] ***************************************************************************************************************************************************************************
TASK [./role01 : ansible.builtin.debug] **********************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "myvar01 in role01 tasks is value-for-myvar01-from-defaults"
}
TASK [ansible.builtin.include_role : ./role02] ***************************************************************************************************************************************************************************
TASK [./role02 : ansible.builtin.debug] **********************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "myvar02 in role02 tasks is value-for-myvar01-from-defaults"
}
TASK [./role02 : ansible.builtin.command] ********************************************************************************************************************************************************************************
changed: [localhost]
TASK [./role02 : Force all notified handlers to run at this point] *******************************************************************************************************************************************************
RUNNING HANDLER [./role02 : myhandler01] *********************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: {{ myvar01 }}: 'myvar01' is undefined. 'myvar01' is undefined. {{ myvar01 }}: 'myvar01' is undefined. 'myvar01' is undefined\n\nThe error appears to be in '/workspaces/test-ansible-nested/role02/handlers/main.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: myhandler01\n ^ here\n"}
PLAY RECAP ***************************************************************************************************************************************************************************************************************
localhost : ok=4 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [x] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80459
|
https://github.com/ansible/ansible/pull/81524
|
a2673cb56438400bc02f89b58316597e517afb52
|
98f16278172a8d54180ab203a8fc85b2cfe477d9
| 2023-04-10T00:34:51Z |
python
| 2023-08-17T19:14:20Z |
test/integration/targets/handlers/roles/r2-dep_chain-vars/handlers/main.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,459 |
Variable in handlers of nested included role is undefined if left set to default
|
### Summary
When I reference a variable in a handler of an included role B, which is itself included within another role A, and role A is called from my playbook without specifying a value for the variable (hence leaving it set to its default), Ansible gives me an error saying that the variable is undefined, while it should fall back to the default value from role A. This is difficult to explain in words, but easy to understand from the example below.
### Issue Type
Bug Report
### Component Name
default variable values handling
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.4]
config file = None
configured module search path = ['/home/codespace/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/codespace/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/codespace/.ansible/collections:/usr/share/ansible/collections
executable location = /home/codespace/.local/bin/ansible
python version = 3.10.4 (main, Mar 13 2023, 19:44:25) [GCC 9.4.0] (/usr/local/python/3.10.4/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
```
$ cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.5 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.5 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
```
### Steps to Reproduce
File `playbook.yml`:
```yaml
- hosts: localhost
connection: local
tasks:
- ansible.builtin.include_role: { name: ./role01 }
```
File `role01/defaults/main.yml`:
```yaml
myvar01: value-for-myvar01-from-defaults
```
File `role01/tasks/main.yml`:
```yaml
- ansible.builtin.debug:
msg: myvar01 in role01 tasks is {{ myvar01 }}
- ansible.builtin.include_role: { name: ./role02 }
vars:
myvar02: "{{ myvar01 }}"
```
File `role02/defaults/main.yml`:
```yaml
myvar02: value-for-myvar02-from-defaults
```
File `role02/handlers/main.yml`:
```yaml
- name: myhandler01
ansible.builtin.debug:
msg: myvar02 in role02 handlers is {{ myvar02 }}
```
File `role02/tasks/main.yml`:
```yaml
- ansible.builtin.debug:
msg: myvar02 in role02 tasks is {{ myvar02 }}
- ansible.builtin.command:
cmd: uptime
changed_when: true
notify: myhandler01
- name: Force all notified handlers to run at this point
ansible.builtin.meta: flush_handlers
```
Then run `ansible-playbook playbook.yml` to see the result.
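Until this is fixed, one possible workaround (a sketch, assuming the caller can supply the value explicitly instead of relying on the role default) is to pass the variable at include time, which avoids the undefined lookup in the handler:
```yaml
- hosts: localhost
  connection: local
  tasks:
    - ansible.builtin.include_role: { name: ./role01 }
      vars:
        myvar01: value-for-myvar01-from-defaults
```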
### Expected Results
I expect the value of the variable to be `value-for-myvar01-from-defaults` instead of undefined, like this:
```
$ ansible-playbook playbook.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Main play] *********************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ***************************************************************************************************************************************************************************************************
ok: [localhost]
TASK [ansible.builtin.include_role : ./role01] ***************************************************************************************************************************************************************************
TASK [./role01 : ansible.builtin.debug] **********************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "myvar01 in role01 tasks is value-for-myvar01-from-defaults"
}
TASK [ansible.builtin.include_role : ./role02] ***************************************************************************************************************************************************************************
TASK [./role02 : ansible.builtin.debug] **********************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "myvar02 in role02 tasks is value-for-myvar01-from-defaults"
}
TASK [./role02 : ansible.builtin.command] ********************************************************************************************************************************************************************************
changed: [localhost]
TASK [./role02 : Force all notified handlers to run at this point] *******************************************************************************************************************************************************
RUNNING HANDLER [./role02 : myhandler01] *********************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "myvar02 in role02 handlers is value-for-myvar01-from-defaults"
}
PLAY RECAP ***************************************************************************************************************************************************************************************************************
localhost : ok=5 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Actual Results
```console
$ ansible-playbook playbook.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Main play] *********************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ***************************************************************************************************************************************************************************************************
ok: [localhost]
TASK [ansible.builtin.include_role : ./role01] ***************************************************************************************************************************************************************************
TASK [./role01 : ansible.builtin.debug] **********************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "myvar01 in role01 tasks is value-for-myvar01-from-defaults"
}
TASK [ansible.builtin.include_role : ./role02] ***************************************************************************************************************************************************************************
TASK [./role02 : ansible.builtin.debug] **********************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "myvar02 in role02 tasks is value-for-myvar01-from-defaults"
}
TASK [./role02 : ansible.builtin.command] ********************************************************************************************************************************************************************************
changed: [localhost]
TASK [./role02 : Force all notified handlers to run at this point] *******************************************************************************************************************************************************
RUNNING HANDLER [./role02 : myhandler01] *********************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: {{ myvar01 }}: 'myvar01' is undefined. 'myvar01' is undefined. {{ myvar01 }}: 'myvar01' is undefined. 'myvar01' is undefined\n\nThe error appears to be in '/workspaces/test-ansible-nested/role02/handlers/main.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: myhandler01\n ^ here\n"}
PLAY RECAP ***************************************************************************************************************************************************************************************************************
localhost : ok=4 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [x] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80459
|
https://github.com/ansible/ansible/pull/81524
|
a2673cb56438400bc02f89b58316597e517afb52
|
98f16278172a8d54180ab203a8fc85b2cfe477d9
| 2023-04-10T00:34:51Z |
python
| 2023-08-17T19:14:20Z |
test/integration/targets/handlers/roles/r2-dep_chain-vars/tasks/main.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,459 |
Variable in handlers of nested included role is undefined if left set to default
|
### Summary
When I try to reference a variable in a handler of an included role B, which is included within another role A, and role A is called from my playbook without specifying a value for the variable (hence leaving it set to the default value), Ansible gives me an error saying that the variable is undefined, while it should be set to the default value from role A. This is really difficult to explain, but easy to understand from the example below.
### Issue Type
Bug Report
### Component Name
default variable values handling
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.4]
config file = None
configured module search path = ['/home/codespace/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/codespace/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/codespace/.ansible/collections:/usr/share/ansible/collections
executable location = /home/codespace/.local/bin/ansible
python version = 3.10.4 (main, Mar 13 2023, 19:44:25) [GCC 9.4.0] (/usr/local/python/3.10.4/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
```
$ cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.5 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.5 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
```
### Steps to Reproduce
File `playbook.yml`:
```yaml
- hosts: localhost
connection: local
tasks:
- ansible.builtin.include_role: { name: ./role01 }
```
File `role01/defaults/main.yml`:
```yaml
myvar01: value-for-myvar01-from-defaults
```
File `role01/tasks/main.yml`:
```yaml
- ansible.builtin.debug:
msg: myvar01 in role01 tasks is {{ myvar01 }}
- ansible.builtin.include_role: { name: ./role02 }
vars:
myvar02: "{{ myvar01 }}"
```
File `role02/defaults/main.yml`:
```yaml
myvar02: value-for-myvar02-from-defaults
```
File `role02/handlers/main.yml`:
```yaml
- name: myhandler01
ansible.builtin.debug:
msg: myvar02 in role02 handlers is {{ myvar02 }}
```
File `role02/tasks/main.yml`:
```yaml
- ansible.builtin.debug:
msg: myvar02 in role02 tasks is {{ myvar02 }}
- ansible.builtin.command:
cmd: uptime
changed_when: true
notify: myhandler01
- name: Force all notified handlers to run at this point
ansible.builtin.meta: flush_handlers
```
Then run `ansible-playbook playbook.yml` to see the result.
### Expected Results
I expect the value of the variable to be `value-for-myvar01-from-defaults` instead of undefined, like this:
```
$ ansible-playbook playbook.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Main play] *********************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ***************************************************************************************************************************************************************************************************
ok: [localhost]
TASK [ansible.builtin.include_role : ./role01] ***************************************************************************************************************************************************************************
TASK [./role01 : ansible.builtin.debug] **********************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "myvar01 in role01 tasks is value-for-myvar01-from-defaults"
}
TASK [ansible.builtin.include_role : ./role02] ***************************************************************************************************************************************************************************
TASK [./role02 : ansible.builtin.debug] **********************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "myvar02 in role02 tasks is value-for-myvar01-from-defaults"
}
TASK [./role02 : ansible.builtin.command] ********************************************************************************************************************************************************************************
changed: [localhost]
TASK [./role02 : Force all notified handlers to run at this point] *******************************************************************************************************************************************************
RUNNING HANDLER [./role02 : myhandler01] *********************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "myvar02 in role02 handlers is value-for-myvar01-from-defaults"
}
PLAY RECAP ***************************************************************************************************************************************************************************************************************
localhost : ok=5 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Actual Results
```console
$ ansible-playbook playbook.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Main play] *********************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ***************************************************************************************************************************************************************************************************
ok: [localhost]
TASK [ansible.builtin.include_role : ./role01] ***************************************************************************************************************************************************************************
TASK [./role01 : ansible.builtin.debug] **********************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "myvar01 in role01 tasks is value-for-myvar01-from-defaults"
}
TASK [ansible.builtin.include_role : ./role02] ***************************************************************************************************************************************************************************
TASK [./role02 : ansible.builtin.debug] **********************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "myvar02 in role02 tasks is value-for-myvar01-from-defaults"
}
TASK [./role02 : ansible.builtin.command] ********************************************************************************************************************************************************************************
changed: [localhost]
TASK [./role02 : Force all notified handlers to run at this point] *******************************************************************************************************************************************************
RUNNING HANDLER [./role02 : myhandler01] *********************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: {{ myvar01 }}: 'myvar01' is undefined. 'myvar01' is undefined. {{ myvar01 }}: 'myvar01' is undefined. 'myvar01' is undefined\n\nThe error appears to be in '/workspaces/test-ansible-nested/role02/handlers/main.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: myhandler01\n ^ here\n"}
PLAY RECAP ***************************************************************************************************************************************************************************************************************
localhost : ok=4 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [x] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80459
|
https://github.com/ansible/ansible/pull/81524
|
a2673cb56438400bc02f89b58316597e517afb52
|
98f16278172a8d54180ab203a8fc85b2cfe477d9
| 2023-04-10T00:34:51Z |
python
| 2023-08-17T19:14:20Z |
test/integration/targets/handlers/runme.sh
|
#!/usr/bin/env bash
set -eux
export ANSIBLE_FORCE_HANDLERS
ANSIBLE_FORCE_HANDLERS=false
# simple handler test
ansible-playbook test_handlers.yml -i inventory.handlers -v "$@" --tags scenario1
# simple from_handlers test
ansible-playbook from_handlers.yml -i inventory.handlers -v "$@" --tags scenario1
ansible-playbook test_listening_handlers.yml -i inventory.handlers -v "$@"
[ "$(ansible-playbook test_handlers.yml -i inventory.handlers -v "$@" --tags scenario2 -l A \
| grep -E -o 'RUNNING HANDLER \[test_handlers : .*]')" = "RUNNING HANDLER [test_handlers : test handler]" ]
# Test forcing handlers using the linear and free strategy
for strategy in linear free; do
export ANSIBLE_STRATEGY=$strategy
# Not forcing, should only run on successful host
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_B" ]
# Forcing from command line
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal --force-handlers \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing from command line, should only run later tasks on unfailed hosts
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal --force-handlers \
| grep -E -o CALLED_TASK_. | sort | uniq | xargs)" = "CALLED_TASK_B CALLED_TASK_D CALLED_TASK_E" ]
# Forcing from command line, should call handlers even if all hosts fail
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal --force-handlers -e fail_all=yes \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing from ansible.cfg
[ "$(ANSIBLE_FORCE_HANDLERS=true ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing true in play
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags force_true_in_play \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing false in play, which overrides command line
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags force_false_in_play --force-handlers \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_B" ]
# https://github.com/ansible/ansible/pull/80898
[ "$(ansible-playbook 80880.yml -i inventory.handlers -vv "$@" 2>&1)" ]
unset ANSIBLE_STRATEGY
done
[ "$(ansible-playbook test_handlers_include.yml -i ../../inventory -v "$@" --tags playbook_include_handlers \
| grep -E -o 'RUNNING HANDLER \[.*]')" = "RUNNING HANDLER [test handler]" ]
[ "$(ansible-playbook test_handlers_include.yml -i ../../inventory -v "$@" --tags role_include_handlers \
| grep -E -o 'RUNNING HANDLER \[test_handlers_include : .*]')" = "RUNNING HANDLER [test_handlers_include : test handler]" ]
[ "$(ansible-playbook test_handlers_include_role.yml -i ../../inventory -v "$@" \
| grep -E -o 'RUNNING HANDLER \[test_handlers_include_role : .*]')" = "RUNNING HANDLER [test_handlers_include_role : test handler]" ]
# Notify handler listen
ansible-playbook test_handlers_listen.yml -i inventory.handlers -v "$@"
# Notifying a nonexistent handler results in an error
set +e
result="$(ansible-playbook test_handlers_inexistent_notify.yml -i inventory.handlers "$@" 2>&1)"
set -e
grep -q "ERROR! The requested handler 'notify_inexistent_handler' was not found in either the main handlers list nor in the listening handlers list" <<< "$result"
# Notifying a nonexistent handler produces no error when ANSIBLE_ERROR_ON_MISSING_HANDLER=false
ANSIBLE_ERROR_ON_MISSING_HANDLER=false ansible-playbook test_handlers_inexistent_notify.yml -i inventory.handlers -v "$@"
ANSIBLE_ERROR_ON_MISSING_HANDLER=false ansible-playbook test_templating_in_handlers.yml -v "$@"
# https://github.com/ansible/ansible/issues/36649
output_dir=/tmp
set +e
result="$(ansible-playbook test_handlers_any_errors_fatal.yml -e output_dir=$output_dir -i inventory.handlers -v "$@" 2>&1)"
set -e
[ ! -f $output_dir/should_not_exist_B ] || (rm -f $output_dir/should_not_exist_B && exit 1)
# https://github.com/ansible/ansible/issues/47287
[ "$(ansible-playbook test_handlers_including_task.yml -i ../../inventory -v "$@" | grep -E -o 'failed=[0-9]+')" = "failed=0" ]
# https://github.com/ansible/ansible/issues/71222
ansible-playbook test_role_handlers_including_tasks.yml -i ../../inventory -v "$@"
# https://github.com/ansible/ansible/issues/27237
set +e
result="$(ansible-playbook test_handlers_template_run_once.yml -i inventory.handlers "$@" 2>&1)"
set -e
grep -q "handler A" <<< "$result"
grep -q "handler B" <<< "$result"
# Test that an undefined variable in another handler's name isn't a failure
ansible-playbook 58841.yml "$@" --tags lazy_evaluation 2>&1 | tee out.txt ; cat out.txt
grep out.txt -e "\[WARNING\]: Handler 'handler name with {{ test_var }}' is unusable"
[ "$(grep out.txt -ce 'handler ran')" = "1" ]
[ "$(grep out.txt -ce 'handler with var ran')" = "0" ]
# Test templating a handler name with a defined variable
ansible-playbook 58841.yml "$@" --tags evaluation_time -e test_var=myvar | tee out.txt ; cat out.txt
[ "$(grep out.txt -ce 'handler ran')" = "0" ]
[ "$(grep out.txt -ce 'handler with var ran')" = "1" ]
# Test the handler is not found when the variable is undefined
ansible-playbook 58841.yml "$@" --tags evaluation_time 2>&1 | tee out.txt ; cat out.txt
grep out.txt -e "ERROR! The requested handler 'handler name with myvar' was not found"
grep out.txt -e "\[WARNING\]: Handler 'handler name with {{ test_var }}' is unusable"
[ "$(grep out.txt -ce 'handler ran')" = "0" ]
[ "$(grep out.txt -ce 'handler with var ran')" = "0" ]
# Test include_role and import_role cannot be used as handlers
ansible-playbook test_role_as_handler.yml "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! Using 'include_role' as a handler is not supported."
# Test notifying a handler from within include_tasks does not work anymore
ansible-playbook test_notify_included.yml "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'I was included')" = "1" ]
grep out.txt -e "ERROR! The requested handler 'handler_from_include' was not found in either the main handlers list nor in the listening handlers list"
ansible-playbook test_handlers_meta.yml -i inventory.handlers -vv "$@" | tee out.txt
[ "$(grep out.txt -ce 'RUNNING HANDLER \[noop_handler\]')" = "1" ]
[ "$(grep out.txt -ce 'META: noop')" = "1" ]
# https://github.com/ansible/ansible/issues/46447
set +e
test "$(ansible-playbook 46447.yml -i inventory.handlers -vv "$@" 2>&1 | grep -c 'SHOULD NOT GET HERE')"
set -e
# https://github.com/ansible/ansible/issues/52561
ansible-playbook 52561.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler1 ran')" = "1" ]
# Test flush_handlers meta task does not imply any_errors_fatal
ansible-playbook 54991.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran')" = "4" ]
ansible-playbook order.yml -i inventory.handlers "$@" 2>&1
set +e
ansible-playbook order.yml --force-handlers -e test_force_handlers=true -i inventory.handlers "$@" 2>&1
set -e
ansible-playbook include_handlers_fail_force.yml --force-handlers -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'included handler ran')" = "1" ]
ansible-playbook test_flush_handlers_as_handler.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! flush_handlers cannot be used as a handler"
ansible-playbook test_skip_flush.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran')" = "0" ]
ansible-playbook test_flush_in_rescue_always.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran in rescue')" = "1" ]
[ "$(grep out.txt -ce 'handler ran in always')" = "2" ]
[ "$(grep out.txt -ce 'lockstep works')" = "2" ]
ansible-playbook test_handlers_infinite_loop.yml -i inventory.handlers "$@" 2>&1
ansible-playbook test_flush_handlers_rescue_always.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'rescue ran')" = "1" ]
[ "$(grep out.txt -ce 'always ran')" = "2" ]
[ "$(grep out.txt -ce 'should run for both hosts')" = "2" ]
ansible-playbook test_fqcn_meta_flush_handlers.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
grep out.txt -e "handler ran"
grep out.txt -e "after flush"
ansible-playbook 79776.yml -i inventory.handlers "$@"
ansible-playbook test_block_as_handler.yml "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! Using a block as a handler is not supported."
ansible-playbook test_block_as_handler-include.yml "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! Using a block as a handler is not supported."
ansible-playbook test_block_as_handler-import.yml "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! Using a block as a handler is not supported."
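# Test that a handler from a role included multiple times via include_role runs only once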
ansible-playbook test_include_role_handler_once.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran')" = "1" ]
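# Test that a listen handler from a role depended on by multiple roles is deduplicated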
ansible-playbook test_listen_role_dedup.yml "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'a handler from a role')" = "1" ]
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,446 |
path_join filter documentation missing important detail
|
### Summary
There is an undocumented gotcha in path_join. It is probably just a side effect of os.path.join's behaviour.
For a sparse backup task I had two tasks:
```
- ansible.builtin.file:
state: directory
path: "{{ (backup_dir, item) | path_join }}"
loop: "{{template_dirs}}"
- ansible.builtin.copy:
remote_src: true
src: "{{item}}"
dest: "{{ (backup_dir, item) | path_join }}"
loop: "{{ templates | map(attribute='dest') }}"
```
`template_dirs` is a list of relative paths, `templates.dest` is the absolute path of the destination config. For the first task, path_join resulted in `/tmp/somedir/directory/in/list`; for the second, it produced `/directory/in/list/filename`. `(backup_dir, item[1:]) | path_join` produced the expected result of `/tmp/somedir/directory/in/list/filename`.
The documentation for the path_join filter should be clear that a list entry with an absolute path overrides earlier elements in the list, and there should be an example of it doing that.
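For reference, a minimal standalone sketch of the underlying `os.path.join` behaviour (which `path_join` delegates to):
```python
import os.path

# A relative second component is appended as expected:
os.path.join('/tmp/somedir', 'directory/in/list')   # -> '/tmp/somedir/directory/in/list'

# An absolute second component discards everything before it:
os.path.join('/tmp/somedir', '/directory/in/list')  # -> '/directory/in/list'
```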
### Issue Type
Documentation Report
### Component Name
lib/ansible/plugins/filter/core.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.2]
config file = None
configured module search path = ['/home/rorsten/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/rorsten/miniconda3/lib/python3.11/site-packages/ansible
ansible collection location = /home/rorsten/.ansible/collections:/usr/share/ansible/collections
executable location = /home/rorsten/miniconda3/bin/ansible
python version = 3.11.4 (main, Jul 5 2023, 13:45:01) [GCC 11.2.0] (/home/rorsten/miniconda3/bin/python3.11)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Ubuntu 16.04, 20.04; MacOS 12.6.7
### Additional Information
It is not unreasonable to expect that the output of list | path_join would be the concatenation of all of the path elements in the list, but this seems not to be the actual behaviour. The clarification could prevent quite a bit of frustration.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81446
|
https://github.com/ansible/ansible/pull/81544
|
863e2571db5d70b03478fd18efef15c4bde88c10
|
4a96b3d5b41fe9f0d12d899234b22e676d82e804
| 2023-08-04T18:41:17Z |
python
| 2023-08-22T15:12:21Z |
lib/ansible/plugins/filter/core.py
|
# (c) 2012, Jeroen Hoekx <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import base64
import glob
import hashlib
import json
import ntpath
import os.path
import re
import shlex
import sys
import time
import uuid
import yaml
import datetime
from collections.abc import Mapping
from functools import partial
from random import Random, SystemRandom, shuffle
from jinja2.filters import pass_environment
from ansible.errors import AnsibleError, AnsibleFilterError, AnsibleFilterTypeError
from ansible.module_utils.six import string_types, integer_types, reraise, text_type
from ansible.module_utils.common.text.converters import to_bytes, to_native, to_text
from ansible.module_utils.common.collections import is_sequence
from ansible.module_utils.common.yaml import yaml_load, yaml_load_all
from ansible.parsing.ajson import AnsibleJSONEncoder
from ansible.parsing.yaml.dumper import AnsibleDumper
from ansible.template import recursive_check_defined
from ansible.utils.display import Display
from ansible.utils.encrypt import passlib_or_crypt, PASSLIB_AVAILABLE
from ansible.utils.hashing import md5s, checksum_s
from ansible.utils.unicode import unicode_wrap
from ansible.utils.vars import merge_hash
display = Display()
UUID_NAMESPACE_ANSIBLE = uuid.UUID('361E6D51-FAEC-444A-9079-341386DA8E2E')
def to_yaml(a, *args, **kw):
'''Make verbose, human readable yaml'''
default_flow_style = kw.pop('default_flow_style', None)
try:
transformed = yaml.dump(a, Dumper=AnsibleDumper, allow_unicode=True, default_flow_style=default_flow_style, **kw)
except Exception as e:
raise AnsibleFilterError("to_yaml - %s" % to_native(e), orig_exc=e)
return to_text(transformed)
def to_nice_yaml(a, indent=4, *args, **kw):
'''Make verbose, human readable yaml'''
try:
transformed = yaml.dump(a, Dumper=AnsibleDumper, indent=indent, allow_unicode=True, default_flow_style=False, **kw)
except Exception as e:
raise AnsibleFilterError("to_nice_yaml - %s" % to_native(e), orig_exc=e)
return to_text(transformed)
def to_json(a, *args, **kw):
''' Convert the value to JSON '''
# defaults for filters
if 'vault_to_text' not in kw:
kw['vault_to_text'] = True
if 'preprocess_unsafe' not in kw:
kw['preprocess_unsafe'] = False
return json.dumps(a, cls=AnsibleJSONEncoder, *args, **kw)
def to_nice_json(a, indent=4, sort_keys=True, *args, **kw):
'''Make verbose, human readable JSON'''
return to_json(a, indent=indent, sort_keys=sort_keys, separators=(',', ': '), *args, **kw)
def to_bool(a):
''' return a bool for the arg '''
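    # e.g. to_bool('yes') -> True, to_bool('off') -> False;
    # None and bool inputs pass through unchanged.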
if a is None or isinstance(a, bool):
return a
if isinstance(a, string_types):
a = a.lower()
if a in ('yes', 'on', '1', 'true', 1):
return True
return False
def to_datetime(string, format="%Y-%m-%d %H:%M:%S"):
return datetime.datetime.strptime(string, format)
def strftime(string_format, second=None, utc=False):
''' return a date string using string. See https://docs.python.org/3/library/time.html#time.strftime for format '''
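    # e.g. strftime('%Y-%m-%d', second=0, utc=True) -> '1970-01-01'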
if utc:
timefn = time.gmtime
else:
timefn = time.localtime
if second is not None:
try:
second = float(second)
except Exception:
raise AnsibleFilterError('Invalid value for epoch value (%s)' % second)
return time.strftime(string_format, timefn(second))
def quote(a):
''' return its argument quoted for shell usage '''
if a is None:
a = u''
return shlex.quote(to_text(a))
def fileglob(pathname):
''' return list of matched regular files for glob '''
return [g for g in glob.glob(pathname) if os.path.isfile(g)]
def regex_replace(value='', pattern='', replacement='', ignorecase=False, multiline=False):
''' Perform a `re.sub` returning a string '''
value = to_text(value, errors='surrogate_or_strict', nonstring='simplerepr')
flags = 0
if ignorecase:
flags |= re.I
if multiline:
flags |= re.M
_re = re.compile(pattern, flags=flags)
return _re.sub(replacement, value)
def regex_findall(value, regex, multiline=False, ignorecase=False):
''' Perform re.findall and return the list of matches '''
value = to_text(value, errors='surrogate_or_strict', nonstring='simplerepr')
flags = 0
if ignorecase:
flags |= re.I
if multiline:
flags |= re.M
return re.findall(regex, value, flags)
def regex_search(value, regex, *args, **kwargs):
''' Perform re.search and return the list of matches or a backref '''
value = to_text(value, errors='surrogate_or_strict', nonstring='simplerepr')
groups = list()
for arg in args:
if arg.startswith('\\g'):
match = re.match(r'\\g<(\S+)>', arg).group(1)
groups.append(match)
elif arg.startswith('\\'):
match = int(re.match(r'\\(\d+)', arg).group(1))
groups.append(match)
else:
raise AnsibleFilterError('Unknown argument')
flags = 0
if kwargs.get('ignorecase'):
flags |= re.I
if kwargs.get('multiline'):
flags |= re.M
match = re.search(regex, value, flags)
if match:
if not groups:
return match.group()
else:
items = list()
for item in groups:
items.append(match.group(item))
return items
def ternary(value, true_val, false_val, none_val=None):
''' value ? true_val : false_val '''
if value is None and none_val is not None:
return none_val
elif bool(value):
return true_val
else:
return false_val
def regex_escape(string, re_type='python'):
"""Escape all regular expressions special characters from STRING."""
string = to_text(string, errors='surrogate_or_strict', nonstring='simplerepr')
if re_type == 'python':
return re.escape(string)
elif re_type == 'posix_basic':
# list of BRE special chars:
# https://en.wikibooks.org/wiki/Regular_Expressions/POSIX_Basic_Regular_Expressions
return regex_replace(string, r'([].[^$*\\])', r'\\\1')
# TODO: implement posix_extended
# It's similar to, but different from python regex, which is similar to,
# but different from PCRE. It's possible that re.escape would work here.
# https://remram44.github.io/regex-cheatsheet/regex.html#programs
elif re_type == 'posix_extended':
raise AnsibleFilterError('Regex type (%s) not yet implemented' % re_type)
else:
raise AnsibleFilterError('Invalid regex type (%s)' % re_type)
def from_yaml(data):
if isinstance(data, string_types):
# The ``text_type`` call here strips any custom
# string wrapper class, so that CSafeLoader can
# read the data
return yaml_load(text_type(to_text(data, errors='surrogate_or_strict')))
return data
def from_yaml_all(data):
if isinstance(data, string_types):
# The ``text_type`` call here strips any custom
# string wrapper class, so that CSafeLoader can
# read the data
return yaml_load_all(text_type(to_text(data, errors='surrogate_or_strict')))
return data
@pass_environment
def rand(environment, end, start=None, step=None, seed=None):
if seed is None:
r = SystemRandom()
else:
r = Random(seed)
if isinstance(end, integer_types):
if not start:
start = 0
if not step:
step = 1
return r.randrange(start, end, step)
elif hasattr(end, '__iter__'):
if start or step:
raise AnsibleFilterError('start and step can only be used with integer values')
return r.choice(end)
else:
raise AnsibleFilterError('random can only be used on sequences and integers')
def randomize_list(mylist, seed=None):
try:
mylist = list(mylist)
if seed:
r = Random(seed)
r.shuffle(mylist)
else:
shuffle(mylist)
except Exception:
pass
return mylist
def get_hash(data, hashtype='sha1'):
try:
h = hashlib.new(hashtype)
except Exception as e:
# hash is not supported?
raise AnsibleFilterError(e)
h.update(to_bytes(data, errors='surrogate_or_strict'))
return h.hexdigest()
def get_encrypted_password(password, hashtype='sha512', salt=None, salt_size=None, rounds=None, ident=None):
passlib_mapping = {
'md5': 'md5_crypt',
'blowfish': 'bcrypt',
'sha256': 'sha256_crypt',
'sha512': 'sha512_crypt',
}
hashtype = passlib_mapping.get(hashtype, hashtype)
unknown_passlib_hashtype = False
if PASSLIB_AVAILABLE and hashtype not in passlib_mapping and hashtype not in passlib_mapping.values():
unknown_passlib_hashtype = True
display.deprecated(
f"Checking for unsupported password_hash passlib hashtype '{hashtype}'. "
"This will be an error in the future as all supported hashtypes must be documented.",
version='2.19'
)
try:
return passlib_or_crypt(password, hashtype, salt=salt, salt_size=salt_size, rounds=rounds, ident=ident)
except AnsibleError as e:
reraise(AnsibleFilterError, AnsibleFilterError(to_native(e), orig_exc=e), sys.exc_info()[2])
except Exception as e:
if unknown_passlib_hashtype:
# This can occur if passlib.hash has the hashtype attribute, but it has a different signature than the valid choices.
# In 2.19 this will replace the deprecation warning above and the extra exception handling can be deleted.
choices = ', '.join(passlib_mapping)
raise AnsibleFilterError(f"{hashtype} is not in the list of supported passlib algorithms: {choices}") from e
raise
def to_uuid(string, namespace=UUID_NAMESPACE_ANSIBLE):
uuid_namespace = namespace
if not isinstance(uuid_namespace, uuid.UUID):
try:
uuid_namespace = uuid.UUID(namespace)
except (AttributeError, ValueError) as e:
raise AnsibleFilterError("Invalid value '%s' for 'namespace': %s" % (to_native(namespace), to_native(e)))
    # uuid.uuid5() requires bytes on Python 2 and bytes or text on Python 3
return to_text(uuid.uuid5(uuid_namespace, to_native(string, errors='surrogate_or_strict')))
def mandatory(a, msg=None):
"""Make a variable mandatory."""
from jinja2.runtime import Undefined
if isinstance(a, Undefined):
if a._undefined_name is not None:
name = "'%s' " % to_text(a._undefined_name)
else:
name = ''
if msg is not None:
raise AnsibleFilterError(to_native(msg))
raise AnsibleFilterError("Mandatory variable %s not defined." % name)
return a
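# Example: combine({'a': 1, 'b': 1}, {'b': 2}) -> {'a': 1, 'b': 2};
# dictionaries later in the argument list take precedence over earlier ones.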
def combine(*terms, **kwargs):
recursive = kwargs.pop('recursive', False)
list_merge = kwargs.pop('list_merge', 'replace')
if kwargs:
raise AnsibleFilterError("'recursive' and 'list_merge' are the only valid keyword arguments")
# allow the user to do `[dict1, dict2, ...] | combine`
dictionaries = flatten(terms, levels=1)
    # recursively check that every element is defined (for jinja2)
recursive_check_defined(dictionaries)
if not dictionaries:
return {}
if len(dictionaries) == 1:
return dictionaries[0]
    # merge all the dicts so that the dict at the end of the array has precedence
    # over the dict at the beginning.
    # we merge the dicts from the highest to the lowest priority because there is
    # a high probability that the lowest priority dict will be the biggest in size
    # (as the low prio dict will hold the "default" values and the others will be "patches")
    # and merge_hash creates a copy of its first argument.
    # so high/right -> low/left is more efficient than low/left -> high/right
high_to_low_prio_dict_iterator = reversed(dictionaries)
result = next(high_to_low_prio_dict_iterator)
for dictionary in high_to_low_prio_dict_iterator:
result = merge_hash(dictionary, result, recursive, list_merge)
return result
def comment(text, style='plain', **kw):
# Predefined comment types
comment_styles = {
'plain': {
'decoration': '# '
},
'erlang': {
'decoration': '% '
},
'c': {
'decoration': '// '
},
'cblock': {
'beginning': '/*',
'decoration': ' * ',
'end': ' */'
},
'xml': {
'beginning': '<!--',
'decoration': ' - ',
'end': '-->'
}
}
# Pointer to the right comment type
style_params = comment_styles[style]
if 'decoration' in kw:
prepostfix = kw['decoration']
else:
prepostfix = style_params['decoration']
# Default params
p = {
'newline': '\n',
'beginning': '',
'prefix': (prepostfix).rstrip(),
'prefix_count': 1,
'decoration': '',
'postfix': (prepostfix).rstrip(),
'postfix_count': 1,
'end': ''
}
# Update default params
p.update(style_params)
p.update(kw)
# Compose substrings for the final string
str_beginning = ''
if p['beginning']:
str_beginning = "%s%s" % (p['beginning'], p['newline'])
str_prefix = ''
if p['prefix']:
if p['prefix'] != p['newline']:
str_prefix = str(
"%s%s" % (p['prefix'], p['newline'])) * int(p['prefix_count'])
else:
str_prefix = str(
"%s" % (p['newline'])) * int(p['prefix_count'])
str_text = ("%s%s" % (
p['decoration'],
# Prepend each line of the text with the decorator
text.replace(
p['newline'], "%s%s" % (p['newline'], p['decoration'])))).replace(
# Remove trailing spaces when only decorator is on the line
"%s%s" % (p['decoration'], p['newline']),
"%s%s" % (p['decoration'].rstrip(), p['newline']))
str_postfix = p['newline'].join(
[''] + [p['postfix'] for x in range(p['postfix_count'])])
str_end = ''
if p['end']:
str_end = "%s%s" % (p['newline'], p['end'])
# Return the final string
return "%s%s%s%s%s" % (
str_beginning,
str_prefix,
str_text,
str_postfix,
str_end)
@pass_environment
def extract(environment, item, container, morekeys=None):
if morekeys is None:
keys = [item]
elif isinstance(morekeys, list):
keys = [item] + morekeys
else:
keys = [item, morekeys]
value = container
for key in keys:
value = environment.getitem(value, key)
return value
def b64encode(string, encoding='utf-8'):
return to_text(base64.b64encode(to_bytes(string, encoding=encoding, errors='surrogate_or_strict')))
def b64decode(string, encoding='utf-8'):
return to_text(base64.b64decode(to_bytes(string, errors='surrogate_or_strict')), encoding=encoding)
def flatten(mylist, levels=None, skip_nulls=True):
ret = []
for element in mylist:
if skip_nulls and element in (None, 'None', 'null'):
# ignore null items
continue
elif is_sequence(element):
if levels is None:
ret.extend(flatten(element, skip_nulls=skip_nulls))
elif levels >= 1:
# decrement as we go down the stack
ret.extend(flatten(element, levels=(int(levels) - 1), skip_nulls=skip_nulls))
else:
ret.append(element)
else:
ret.append(element)
return ret
def subelements(obj, subelements, skip_missing=False):
'''Accepts a dict or list of dicts, and a dotted accessor and produces a product
of the element and the results of the dotted accessor
>>> obj = [{"name": "alice", "groups": ["wheel"], "authorized": ["/tmp/alice/onekey.pub"]}]
>>> subelements(obj, 'groups')
[({'name': 'alice', 'groups': ['wheel'], 'authorized': ['/tmp/alice/onekey.pub']}, 'wheel')]
'''
if isinstance(obj, dict):
element_list = list(obj.values())
elif isinstance(obj, list):
element_list = obj[:]
else:
raise AnsibleFilterError('obj must be a list of dicts or a nested dict')
if isinstance(subelements, list):
subelement_list = subelements[:]
elif isinstance(subelements, string_types):
subelement_list = subelements.split('.')
else:
raise AnsibleFilterTypeError('subelements must be a list or a string')
results = []
for element in element_list:
values = element
for subelement in subelement_list:
try:
values = values[subelement]
except KeyError:
if skip_missing:
values = []
break
raise AnsibleFilterError("could not find %r key in iterated item %r" % (subelement, values))
except TypeError:
raise AnsibleFilterTypeError("the key %s should point to a dictionary, got '%s'" % (subelement, values))
if not isinstance(values, list):
raise AnsibleFilterTypeError("the key %r should point to a list, got %r" % (subelement, values))
for value in values:
results.append((element, value))
return results
def dict_to_list_of_dict_key_value_elements(mydict, key_name='key', value_name='value'):
    ''' takes a dictionary and transforms it into a list of dictionaries,
    with each having a 'key' and a 'value' entry that correspond to the keys and values of the original '''
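    # e.g. {'a': 1} -> [{'key': 'a', 'value': 1}]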
if not isinstance(mydict, Mapping):
raise AnsibleFilterTypeError("dict2items requires a dictionary, got %s instead." % type(mydict))
ret = []
for key in mydict:
ret.append({key_name: key, value_name: mydict[key]})
return ret
def list_of_dict_key_value_elements_to_dict(mylist, key_name='key', value_name='value'):
    ''' takes a list of dicts, each having a 'key' and a 'value' entry, and transforms the list into a dictionary,
    effectively as the reverse of dict2items '''
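    # e.g. [{'key': 'a', 'value': 1}, {'key': 'b', 'value': 2}] -> {'a': 1, 'b': 2}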
if not is_sequence(mylist):
raise AnsibleFilterTypeError("items2dict requires a list, got %s instead." % type(mylist))
try:
return dict((item[key_name], item[value_name]) for item in mylist)
except KeyError:
raise AnsibleFilterTypeError(
"items2dict requires each dictionary in the list to contain the keys '%s' and '%s', got %s instead."
% (key_name, value_name, mylist)
)
except TypeError:
raise AnsibleFilterTypeError("items2dict requires a list of dictionaries, got %s instead." % mylist)
def path_join(paths):
    ''' takes a sequence or a string, and returns a concatenation
    of the different members '''
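    # Note: os.path.join semantics apply, so an absolute component
    # (e.g. '/tmp') discards any components joined before it.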
if isinstance(paths, string_types):
return os.path.join(paths)
elif is_sequence(paths):
return os.path.join(*paths)
else:
raise AnsibleFilterTypeError("|path_join expects string or sequence, got %s instead." % type(paths))
def commonpath(paths):
"""
Retrieve the longest common path from the given list.
:param paths: A list of file system paths.
:type paths: List[str]
:returns: The longest common path.
:rtype: str
"""
if not is_sequence(paths):
        raise AnsibleFilterTypeError("|commonpath expects sequence, got %s instead." % type(paths))
return os.path.commonpath(paths)
class FilterModule(object):
''' Ansible core jinja2 filters '''
def filters(self):
return {
# base 64
'b64decode': b64decode,
'b64encode': b64encode,
# uuid
'to_uuid': to_uuid,
# json
'to_json': to_json,
'to_nice_json': to_nice_json,
'from_json': json.loads,
# yaml
'to_yaml': to_yaml,
'to_nice_yaml': to_nice_yaml,
'from_yaml': from_yaml,
'from_yaml_all': from_yaml_all,
# path
'basename': partial(unicode_wrap, os.path.basename),
'dirname': partial(unicode_wrap, os.path.dirname),
'expanduser': partial(unicode_wrap, os.path.expanduser),
'expandvars': partial(unicode_wrap, os.path.expandvars),
'path_join': path_join,
'realpath': partial(unicode_wrap, os.path.realpath),
'relpath': partial(unicode_wrap, os.path.relpath),
'splitext': partial(unicode_wrap, os.path.splitext),
'win_basename': partial(unicode_wrap, ntpath.basename),
'win_dirname': partial(unicode_wrap, ntpath.dirname),
'win_splitdrive': partial(unicode_wrap, ntpath.splitdrive),
'commonpath': commonpath,
'normpath': partial(unicode_wrap, os.path.normpath),
# file glob
'fileglob': fileglob,
# types
'bool': to_bool,
'to_datetime': to_datetime,
# date formatting
'strftime': strftime,
# quote string for shell usage
'quote': quote,
# hash filters
# md5 hex digest of string
'md5': md5s,
# sha1 hex digest of string
'sha1': checksum_s,
# checksum of string as used by ansible for checksumming files
'checksum': checksum_s,
# generic hashing
'password_hash': get_encrypted_password,
'hash': get_hash,
# regex
'regex_replace': regex_replace,
'regex_escape': regex_escape,
'regex_search': regex_search,
'regex_findall': regex_findall,
# ? : ;
'ternary': ternary,
# random stuff
'random': rand,
'shuffle': randomize_list,
# undefined
'mandatory': mandatory,
# comment-style decoration
'comment': comment,
# debug
'type_debug': lambda o: o.__class__.__name__,
# Data structures
'combine': combine,
'extract': extract,
'flatten': flatten,
'dict2items': dict_to_list_of_dict_key_value_elements,
'items2dict': list_of_dict_key_value_elements_to_dict,
'subelements': subelements,
'split': partial(unicode_wrap, text_type.split),
}
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,446 |
path_join filter documentation missing important detail
|
### Summary
There is an undocumented gotcha in path_join. It is probably just a side effect of os.path.join's behaviour.
For a sparse backup task I had two tasks:
```
- ansible.builtin.file:
state: directory
path: "{{ (backup_dir, item) | path_join }}"
loop: "{{template_dirs}}"
- ansible.builtin.copy:
remote_src: true
src: "{{item}}"
dest: "{{ (backup_dir, item) | path_join }}"
loop: "{{ templates | map(attribute='dest') }}"
```
`template_dirs` is a list of relative paths, `templates.dest` is the absolute path of the destination config. For the first task, path_join resulted in `/tmp/somedir/directory/in/list`; for the second, it produced `/directory/in/list/filename`. `(backup_dir, item[1:]) | path_join` produced the expected result of `/tmp/somedir/directory/in/list/filename`.
The documentation for the path_join filter should be clear that a list entry with an absolute path overrides earlier elements in the list, and there should be an example of it doing that.
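One way to keep the earlier components (a sketch based on the `item[1:]` workaround above, assuming every `dest` is an absolute path) is to drop the leading slash before joining:
```yaml
- ansible.builtin.copy:
    remote_src: true
    src: "{{ item }}"
    dest: "{{ (backup_dir, item[1:]) | path_join }}"
  loop: "{{ templates | map(attribute='dest') }}"
```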
### Issue Type
Documentation Report
### Component Name
lib/ansible/plugins/filter/core.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.2]
config file = None
configured module search path = ['/home/rorsten/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/rorsten/miniconda3/lib/python3.11/site-packages/ansible
ansible collection location = /home/rorsten/.ansible/collections:/usr/share/ansible/collections
executable location = /home/rorsten/miniconda3/bin/ansible
python version = 3.11.4 (main, Jul 5 2023, 13:45:01) [GCC 11.2.0] (/home/rorsten/miniconda3/bin/python3.11)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Ubuntu 16.04, 20.04; MacOS 12.6.7
### Additional Information
It is not unreasonable to expect that the output of list | path_join would be the concatenation of all of the path elements in the list, but this seems not to be the actual behaviour. The clarification could prevent quite a bit of frustration.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81446
|
https://github.com/ansible/ansible/pull/81544
|
863e2571db5d70b03478fd18efef15c4bde88c10
|
4a96b3d5b41fe9f0d12d899234b22e676d82e804
| 2023-08-04T18:41:17Z |
python
| 2023-08-22T15:12:21Z |
lib/ansible/plugins/filter/path_join.yml
|
DOCUMENTATION:
name: path_join
author: Anthony Bourguignon (@Toniob)
version_added: "2.10"
short_description: Join one or more path components
positional: _input
description:
- Returns a path obtained by joining one or more path components.
options:
_input:
description: A path, or a list of paths.
type: list
elements: str
required: true
EXAMPLES: |
# If path == 'foo/bar' and file == 'baz.txt', the result is '/etc/foo/bar/subdir/baz.txt'
{{ ('/etc', path, 'subdir', file) | path_join }}
# equivalent to '/etc/subdir/{{filename}}'
wheremyfile: "{{ ['/etc', 'subdir', filename] | path_join }}"
# trustme => '/etc/apt/trusted.d/mykey.gpg'
trustme: "{{ ['/etc', 'apt', 'trusted.d', 'mykey.gpg'] | path_join }}"
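# Note: following os.path.join semantics, an absolute component discards
# everything before it: ('/etc', '/tmp', 'trusted.d') | path_join => '/tmp/trusted.d'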
RETURN:
_value:
description: The concatenated path.
type: str
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,049 |
Group module emits false warning when local: yes
|
### Summary
When attempting to create a new local group using the group module, a warning is written to the log, though the group is created anyway.
### Issue Type
Bug Report
### Component Name
group
### Ansible Version
```console
$ ansible --version
TFDM==>ansible --version
ansible [core 2.11.5]
config file = /opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg
configured module search path = ['/opt/repo/team/n26984/WIP_working/playbooks/collections/ansible_collections/tfdm/utils/plugins/modules', '/usr/share/ansible/collections/ansible_collections/tfdm/utils/plugins/modules', '/home/tfdmseed/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/site-packages/ansible
ansible collection location = /home/tfdmseed/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.8.6 (default, Jan 22 2021, 11:41:28) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 3.0.2
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
TFDM==>ansible-config dump --only-changed
ANSIBLE_PIPELINING(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = True
BECOME_ALLOW_SAME_USER(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = True
BECOME_PLUGIN_PATH(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = ['/opt/repo/team/n26984/WIP_working/playbooks/collections/ansible_collections/tfd>
CALLBACKS_ENABLED(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = ['tfdm.utils.tfdm_log', 'tfdm.utils.profile_tasks_custom']
DEFAULT_ACTION_PLUGIN_PATH(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = ['/opt/repo/team/n26984/WIP_working/playbooks/collections/ansible_collect>
DEFAULT_FILTER_PLUGIN_PATH(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = ['/opt/repo/team/n26984/WIP_working/playbooks/collections/ansible_collect>
DEFAULT_FORKS(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = 100
DEFAULT_HOST_LIST(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = ['/opt/repo/team/n26984/WIP_working/playbooks/inventory.yml']
DEFAULT_LOAD_CALLBACK_PLUGINS(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = True
DEFAULT_LOCAL_TMP(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = /tmp/.ansible_tfdmseed/ansible-local-181801rco8rkt
DEFAULT_MODULE_PATH(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = ['/opt/repo/team/n26984/WIP_working/playbooks/collections/ansible_collections/tf>
DEFAULT_ROLES_PATH(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = ['/opt/repo/team/n26984/WIP_working/playbooks/roles', '/usr/share/ansible/tfdm_ro>
DEFAULT_STDOUT_CALLBACK(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = tfdm.utils.yaml_custom
DEFAULT_STRATEGY(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = free
DEFAULT_TEST_PLUGIN_PATH(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = ['/opt/repo/team/n26984/WIP_working/playbooks/collections/ansible_collectio>
DEFAULT_TIMEOUT(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = 60
DEFAULT_VAULT_PASSWORD_FILE(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = /opt/repo/team/n26984/WIP_working/playbooks/tfdm_key
DIFF_ALWAYS(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = True
DIFF_CONTEXT(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = 0
DISPLAY_SKIPPED_HOSTS(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = False
DUPLICATE_YAML_DICT_KEY(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = ignore
HOST_KEY_CHECKING(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = False
INTERPRETER_PYTHON(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = auto_silent
INVENTORY_CACHE_ENABLED(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = True
INVENTORY_ENABLED(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = ['tfdm.utils.tfdm_inventory', 'host_list', 'script', 'auto', 'yaml', 'ini']
LOCALHOST_WARNING(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = False
MAX_FILE_SIZE_FOR_DIFF(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = 10485760
VARIABLE_PLUGINS_ENABLED(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = ['host_group_vars', 'tfdm.utils.tfdm_site_vars', 'tfdm.utils.nested_vars']
```
### OS / Environment
TFDM==>cat /etc/redhat-release
Red Hat Enterprise Linux release 8.4 (Ootpa)
### Steps to Reproduce
```yaml
---
- name: Create Local Group
hosts: "{{ target_host | get_target_hosts(groups) }}"
tasks:
- name: create a local group
group:
name: jke_grp
gid: 4242
local: yes
```
prompt>ansible-playbook test_group.yml -e target_host=sup01
### Expected Results
The local group is created on the specified host with no warnings
### Actual Results
```console
TFDM==>ansible-playbook test_group.yml -e target_host=sup01
2022-02-17 14:25:51.932801 tfdmseed /opt/repo/team/n26984/WIP_working/playbooks /usr/local/bin/ansible-playbook test_group.yml -e target_host=sup01 /var/log/ansible/tfdmseed/test_group.2022-02-17T14-25-51.log.gz (/backups/ansible/vnat/tfdmseed/test_group.2022-02-17T14-25-51.log.gz)
PLAY [Create Local Group] **************************************************************************************************************************************
Thursday 17 February 2022 14:25:52 +0000 (0:00:00.178) 0:00:00.178 *****
TASK [Gathering Facts] *****************************************************************************************************************************************
ok: [sup01]
Thursday 17 February 2022 14:25:54 +0000 (0:00:02.058) 0:00:02.237 *****
TASK [create a local group] ************************************************************************************************************************************
[WARNING]: 'local: true' specified and group was not found in /etc/group. The local group may already exist if the local group database exists somewhere other
than /etc/group.
changed: [sup01]
PLAY RECAP *****************************************************************************************************************************************************
sup01 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
LOG: /var/log/ansible/tfdmseed/test_group.2022-02-17T14-25-51.log.gz (/backups/ansible/vnat/tfdmseed/test_group.2022-02-17T14-25-51.log.gz)
[WARNING]: There were 1 previous warnings in this run. Please review the log.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77049
|
https://github.com/ansible/ansible/pull/81557
|
4a96b3d5b41fe9f0d12d899234b22e676d82e804
|
aa8a29a9d4cad82cbaad74be10cd8f1c1dc4e3c0
| 2022-02-17T14:27:44Z |
python
| 2023-08-22T15:23:10Z |
changelogs/fragments/group_warning.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,049 |
Group module emits false warning when local: yes
|
### Summary
When attempting to create a new local group using the group module, a warning is written to the log, and the group is created anyway.
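For background, when `local: yes` is set the module cannot rely on the `grp` database (it also reflects LDAP/NIS entries), so it scans `/etc/group` itself. A minimal sketch of that check (the helper name is illustrative, not the module's API):

```python
GROUPFILE = '/etc/group'

def local_group_exists(name):
    # grp.getgrnam() would also match directory (LDAP/NIS) groups, so the
    # module parses the local group file directly instead.
    prefix = ('%s:' % name).encode()
    with open(GROUPFILE, 'rb') as f:
        return any(line.startswith(prefix) for line in f)
```

A group that is about to be created legitimately fails this check, yet the module warns on every such miss, which is the false positive reported here.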
### Issue Type
Bug Report
### Component Name
group
### Ansible Version
```console
$ ansible --version
TFDM==>ansible --version
ansible [core 2.11.5]
config file = /opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg
configured module search path = ['/opt/repo/team/n26984/WIP_working/playbooks/collections/ansible_collections/tfdm/utils/plugins/modules', '/usr/share/ansible/collections/ansible_collections/tfdm/utils/plugins/modules', '/home/tfdmseed/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/site-packages/ansible
ansible collection location = /home/tfdmseed/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.8.6 (default, Jan 22 2021, 11:41:28) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 3.0.2
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
TFDM==>ansible-config dump --only-changed
ANSIBLE_PIPELINING(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = True
BECOME_ALLOW_SAME_USER(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = True
BECOME_PLUGIN_PATH(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = ['/opt/repo/team/n26984/WIP_working/playbooks/collections/ansible_collections/tfd>
CALLBACKS_ENABLED(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = ['tfdm.utils.tfdm_log', 'tfdm.utils.profile_tasks_custom']
DEFAULT_ACTION_PLUGIN_PATH(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = ['/opt/repo/team/n26984/WIP_working/playbooks/collections/ansible_collect>
DEFAULT_FILTER_PLUGIN_PATH(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = ['/opt/repo/team/n26984/WIP_working/playbooks/collections/ansible_collect>
DEFAULT_FORKS(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = 100
DEFAULT_HOST_LIST(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = ['/opt/repo/team/n26984/WIP_working/playbooks/inventory.yml']
DEFAULT_LOAD_CALLBACK_PLUGINS(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = True
DEFAULT_LOCAL_TMP(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = /tmp/.ansible_tfdmseed/ansible-local-181801rco8rkt
DEFAULT_MODULE_PATH(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = ['/opt/repo/team/n26984/WIP_working/playbooks/collections/ansible_collections/tf>
DEFAULT_ROLES_PATH(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = ['/opt/repo/team/n26984/WIP_working/playbooks/roles', '/usr/share/ansible/tfdm_ro>
DEFAULT_STDOUT_CALLBACK(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = tfdm.utils.yaml_custom
DEFAULT_STRATEGY(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = free
DEFAULT_TEST_PLUGIN_PATH(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = ['/opt/repo/team/n26984/WIP_working/playbooks/collections/ansible_collectio>
DEFAULT_TIMEOUT(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = 60
DEFAULT_VAULT_PASSWORD_FILE(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = /opt/repo/team/n26984/WIP_working/playbooks/tfdm_key
DIFF_ALWAYS(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = True
DIFF_CONTEXT(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = 0
DISPLAY_SKIPPED_HOSTS(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = False
DUPLICATE_YAML_DICT_KEY(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = ignore
HOST_KEY_CHECKING(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = False
INTERPRETER_PYTHON(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = auto_silent
INVENTORY_CACHE_ENABLED(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = True
INVENTORY_ENABLED(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = ['tfdm.utils.tfdm_inventory', 'host_list', 'script', 'auto', 'yaml', 'ini']
LOCALHOST_WARNING(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = False
MAX_FILE_SIZE_FOR_DIFF(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = 10485760
VARIABLE_PLUGINS_ENABLED(/opt/repo/team/n26984/WIP_working/playbooks/ansible.cfg) = ['host_group_vars', 'tfdm.utils.tfdm_site_vars', 'tfdm.utils.nested_vars']
```
### OS / Environment
TFDM==>cat /etc/redhat-release
Red Hat Enterprise Linux release 8.4 (Ootpa)
### Steps to Reproduce
```yaml
---
- name: Create Local Group
hosts: "{{ target_host | get_target_hosts(groups) }}"
tasks:
- name: create a local group
group:
name: jke_grp
gid: 4242
local: yes
```
prompt>ansible-playbook test_group.yml -e target_host=sup01
### Expected Results
The local group is created on the specified host with no warnings
### Actual Results
```console
TFDM==>ansible-playbook test_group.yml -e target_host=sup01
2022-02-17 14:25:51.932801 tfdmseed /opt/repo/team/n26984/WIP_working/playbooks /usr/local/bin/ansible-playbook test_group.yml -e target_host=sup01 /var/log/ansible/tfdmseed/test_group.2022-02-17T14-25-51.log.gz (/backups/ansible/vnat/tfdmseed/test_group.2022-02-17T14-25-51.log.gz)
PLAY [Create Local Group] **************************************************************************************************************************************
Thursday 17 February 2022 14:25:52 +0000 (0:00:00.178) 0:00:00.178 *****
TASK [Gathering Facts] *****************************************************************************************************************************************
ok: [sup01]
Thursday 17 February 2022 14:25:54 +0000 (0:00:02.058) 0:00:02.237 *****
TASK [create a local group] ************************************************************************************************************************************
[WARNING]: 'local: true' specified and group was not found in /etc/group. The local group may already exist if the local group database exists somewhere other
than /etc/group.
changed: [sup01]
PLAY RECAP *****************************************************************************************************************************************************
sup01 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
LOG: /var/log/ansible/tfdmseed/test_group.2022-02-17T14-25-51.log.gz (/backups/ansible/vnat/tfdmseed/test_group.2022-02-17T14-25-51.log.gz)
[WARNING]: There were 1 previous warnings in this run. Please review the log.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77049
|
https://github.com/ansible/ansible/pull/81557
|
4a96b3d5b41fe9f0d12d899234b22e676d82e804
|
aa8a29a9d4cad82cbaad74be10cd8f1c1dc4e3c0
| 2022-02-17T14:27:44Z |
python
| 2023-08-22T15:23:10Z |
lib/ansible/modules/group.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Stephen Fromm <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: group
version_added: "0.0.2"
short_description: Add or remove groups
requirements:
- groupadd
- groupdel
- groupmod
description:
- Manage presence of groups on a host.
- For Windows targets, use the M(ansible.windows.win_group) module instead.
options:
name:
description:
- Name of the group to manage.
type: str
required: true
gid:
description:
- Optional I(GID) to set for the group.
type: int
state:
description:
- Whether the group should be present or not on the remote host.
type: str
choices: [ absent, present ]
default: present
force:
description:
- Whether to delete a group even if it is the primary group of a user.
- Only applicable on platforms which implement a --force flag on the group deletion command.
type: bool
default: false
version_added: "2.15"
system:
description:
- If V(yes), indicates that the group created is a system group.
type: bool
default: no
local:
description:
- Forces the use of "local" command alternatives on platforms that implement it.
- This is useful in environments that use centralized authentication when you want to manipulate the local groups
(for example, it uses C(lgroupadd) instead of C(groupadd)).
- This requires that these commands exist on the targeted host, otherwise it will be a fatal error.
type: bool
default: no
version_added: "2.6"
non_unique:
description:
- This option allows changing the group ID to a non-unique value. Requires O(gid).
- Not supported on macOS or BusyBox distributions.
type: bool
default: no
version_added: "2.8"
extends_documentation_fragment: action_common_attributes
attributes:
check_mode:
support: full
diff_mode:
support: none
platform:
platforms: posix
seealso:
- module: ansible.builtin.user
- module: ansible.windows.win_group
author:
- Stephen Fromm (@sfromm)
'''
EXAMPLES = '''
- name: Ensure group "somegroup" exists
ansible.builtin.group:
name: somegroup
state: present
- name: Ensure group "docker" exists with correct gid
ansible.builtin.group:
name: docker
state: present
gid: 1750
'''
RETURN = r'''
gid:
description: Group ID of the group.
returned: When O(state) is C(present)
type: int
sample: 1001
name:
description: Group name.
returned: always
type: str
sample: users
state:
description: Whether the group is present or not.
returned: always
type: str
sample: 'absent'
system:
description: Whether the group is a system group or not.
returned: When O(state) is C(present)
type: bool
sample: False
'''
import grp
import os
from ansible.module_utils.common.text.converters import to_bytes
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.sys_info import get_platform_subclass
class Group(object):
"""
This is a generic Group manipulation class that is subclassed
based on platform.
A subclass may wish to override the following action methods:-
- group_del()
- group_add()
- group_mod()
All subclasses MUST define platform and distribution (which may be None).
"""
platform = 'Generic'
distribution = None # type: str | None
GROUPFILE = '/etc/group'
def __new__(cls, *args, **kwargs):
new_cls = get_platform_subclass(Group)
return super(cls, new_cls).__new__(new_cls)
def __init__(self, module):
self.module = module
self.state = module.params['state']
self.name = module.params['name']
self.force = module.params['force']
self.gid = module.params['gid']
self.system = module.params['system']
self.local = module.params['local']
self.non_unique = module.params['non_unique']
def execute_command(self, cmd):
return self.module.run_command(cmd)
def group_del(self):
if self.local:
command_name = 'lgroupdel'
else:
command_name = 'groupdel'
cmd = [self.module.get_bin_path(command_name, True), self.name]
return self.execute_command(cmd)
def _local_check_gid_exists(self):
if self.gid:
for gr in grp.getgrall():
if self.gid == gr.gr_gid and self.name != gr.gr_name:
self.module.fail_json(msg="GID '{0}' already exists with group '{1}'".format(self.gid, gr.gr_name))
def group_add(self, **kwargs):
if self.local:
command_name = 'lgroupadd'
self._local_check_gid_exists()
else:
command_name = 'groupadd'
cmd = [self.module.get_bin_path(command_name, True)]
for key in kwargs:
if key == 'gid' and kwargs[key] is not None:
cmd.append('-g')
cmd.append(str(kwargs[key]))
if self.non_unique:
cmd.append('-o')
elif key == 'system' and kwargs[key] is True:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def group_mod(self, **kwargs):
if self.local:
command_name = 'lgroupmod'
self._local_check_gid_exists()
else:
command_name = 'groupmod'
cmd = [self.module.get_bin_path(command_name, True)]
info = self.group_info()
for key in kwargs:
if key == 'gid':
if kwargs[key] is not None and info[2] != int(kwargs[key]):
cmd.append('-g')
cmd.append(str(kwargs[key]))
if self.non_unique:
cmd.append('-o')
if len(cmd) == 1:
return (None, '', '')
if self.module.check_mode:
return (0, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
def group_exists(self):
# The grp module does not distinguish between local and directory accounts.
# Its output cannot be used to determine whether or not a group exists locally.
# It returns True if the group exists locally or in the directory, so instead
# look in the local GROUP file for an existing account.
if self.local:
if not os.path.exists(self.GROUPFILE):
self.module.fail_json(msg="'local: true' specified but unable to find local group file {0} to parse.".format(self.GROUPFILE))
exists = False
name_test = '{0}:'.format(self.name)
with open(self.GROUPFILE, 'rb') as f:
reversed_lines = f.readlines()[::-1]
for line in reversed_lines:
if line.startswith(to_bytes(name_test)):
exists = True
break
if not exists:
self.module.warn(
"'local: true' specified and group was not found in {file}. "
"The local group may already exist if the local group database exists somewhere other than {file}.".format(file=self.GROUPFILE))
return exists
else:
try:
if grp.getgrnam(self.name):
return True
except KeyError:
return False
def group_info(self):
if not self.group_exists():
return False
try:
info = list(grp.getgrnam(self.name))
except KeyError:
return False
return info
# ===========================================
class Linux(Group):
"""
This is a Linux Group manipulation class, which exists to apply the '-f' parameter to the groupdel command.
This overrides the following methods from the generic class:-
- group_del()
"""
platform = 'Linux'
distribution = None
def group_del(self):
if self.local:
command_name = 'lgroupdel'
else:
command_name = 'groupdel'
cmd = [self.module.get_bin_path(command_name, True)]
if self.force:
cmd.append('-f')
cmd.append(self.name)
return self.execute_command(cmd)
# ===========================================
class SunOS(Group):
"""
This is a SunOS Group manipulation class. Solaris doesn't have
the 'system' group concept.
This overrides the following methods from the generic class:-
- group_add()
"""
platform = 'SunOS'
distribution = None
GROUPFILE = '/etc/group'
def group_add(self, **kwargs):
cmd = [self.module.get_bin_path('groupadd', True)]
for key in kwargs:
if key == 'gid' and kwargs[key] is not None:
cmd.append('-g')
cmd.append(str(kwargs[key]))
if self.non_unique:
cmd.append('-o')
cmd.append(self.name)
return self.execute_command(cmd)
# ===========================================
class AIX(Group):
"""
This is an AIX Group manipulation class.
This overrides the following methods from the generic class:-
- group_del()
- group_add()
- group_mod()
"""
platform = 'AIX'
distribution = None
GROUPFILE = '/etc/group'
def group_del(self):
cmd = [self.module.get_bin_path('rmgroup', True), self.name]
return self.execute_command(cmd)
def group_add(self, **kwargs):
cmd = [self.module.get_bin_path('mkgroup', True)]
for key in kwargs:
if key == 'gid' and kwargs[key] is not None:
cmd.append('id=' + str(kwargs[key]))
elif key == 'system' and kwargs[key] is True:
cmd.append('-a')
cmd.append(self.name)
return self.execute_command(cmd)
def group_mod(self, **kwargs):
cmd = [self.module.get_bin_path('chgroup', True)]
info = self.group_info()
for key in kwargs:
if key == 'gid':
if kwargs[key] is not None and info[2] != int(kwargs[key]):
cmd.append('id=' + str(kwargs[key]))
if len(cmd) == 1:
return (None, '', '')
if self.module.check_mode:
return (0, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
# ===========================================
class FreeBsdGroup(Group):
"""
This is a FreeBSD Group manipulation class.
This overrides the following methods from the generic class:-
- group_del()
- group_add()
- group_mod()
"""
platform = 'FreeBSD'
distribution = None
GROUPFILE = '/etc/group'
def group_del(self):
cmd = [self.module.get_bin_path('pw', True), 'groupdel', self.name]
return self.execute_command(cmd)
def group_add(self, **kwargs):
cmd = [self.module.get_bin_path('pw', True), 'groupadd', self.name]
if self.gid is not None:
cmd.append('-g')
cmd.append(str(self.gid))
if self.non_unique:
cmd.append('-o')
return self.execute_command(cmd)
def group_mod(self, **kwargs):
cmd = [self.module.get_bin_path('pw', True), 'groupmod', self.name]
info = self.group_info()
cmd_len = len(cmd)
if self.gid is not None and int(self.gid) != info[2]:
cmd.append('-g')
cmd.append(str(self.gid))
if self.non_unique:
cmd.append('-o')
# modify the group if cmd will do anything
if cmd_len != len(cmd):
if self.module.check_mode:
return (0, '', '')
return self.execute_command(cmd)
return (None, '', '')
class DragonFlyBsdGroup(FreeBsdGroup):
"""
This is a DragonFlyBSD Group manipulation class.
It inherits all behaviors from FreeBsdGroup class.
"""
platform = 'DragonFly'
# ===========================================
class DarwinGroup(Group):
"""
This is a macOS (Darwin) Group manipulation class.
This overrides the following methods from the generic class:-
- group_del()
- group_add()
- group_mod()
Group manipulation is done using dseditgroup(1).
"""
platform = 'Darwin'
distribution = None
def group_add(self, **kwargs):
cmd = [self.module.get_bin_path('dseditgroup', True)]
cmd += ['-o', 'create']
if self.gid is not None:
cmd += ['-i', str(self.gid)]
elif 'system' in kwargs and kwargs['system'] is True:
gid = self.get_lowest_available_system_gid()
if gid is not False:
self.gid = str(gid)
cmd += ['-i', str(self.gid)]
cmd += ['-L', self.name]
(rc, out, err) = self.execute_command(cmd)
return (rc, out, err)
def group_del(self):
cmd = [self.module.get_bin_path('dseditgroup', True)]
cmd += ['-o', 'delete']
cmd += ['-L', self.name]
(rc, out, err) = self.execute_command(cmd)
return (rc, out, err)
def group_mod(self, gid=None):
info = self.group_info()
if self.gid is not None and int(self.gid) != info[2]:
cmd = [self.module.get_bin_path('dseditgroup', True)]
cmd += ['-o', 'edit']
if gid is not None:
cmd += ['-i', str(gid)]
cmd += ['-L', self.name]
(rc, out, err) = self.execute_command(cmd)
return (rc, out, err)
return (None, '', '')
def get_lowest_available_system_gid(self):
# check for lowest available system gid (< 500)
try:
cmd = [self.module.get_bin_path('dscl', True)]
cmd += ['/Local/Default', '-list', '/Groups', 'PrimaryGroupID']
(rc, out, err) = self.execute_command(cmd)
lines = out.splitlines()
highest = 0
for group_info in lines:
parts = group_info.split(' ')
if len(parts) > 1:
gid = int(parts[-1])
if gid > highest and gid < 500:
highest = gid
if highest == 0 or highest == 499:
return False
return (highest + 1)
except Exception:
return False
class OpenBsdGroup(Group):
"""
This is an OpenBSD Group manipulation class.
This overrides the following methods from the generic class:-
- group_del()
- group_add()
- group_mod()
"""
platform = 'OpenBSD'
distribution = None
GROUPFILE = '/etc/group'
def group_del(self):
cmd = [self.module.get_bin_path('groupdel', True), self.name]
return self.execute_command(cmd)
def group_add(self, **kwargs):
cmd = [self.module.get_bin_path('groupadd', True)]
if self.gid is not None:
cmd.append('-g')
cmd.append(str(self.gid))
if self.non_unique:
cmd.append('-o')
cmd.append(self.name)
return self.execute_command(cmd)
def group_mod(self, **kwargs):
cmd = [self.module.get_bin_path('groupmod', True)]
info = self.group_info()
if self.gid is not None and int(self.gid) != info[2]:
cmd.append('-g')
cmd.append(str(self.gid))
if self.non_unique:
cmd.append('-o')
if len(cmd) == 1:
return (None, '', '')
if self.module.check_mode:
return (0, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
# ===========================================
class NetBsdGroup(Group):
"""
This is a NetBSD Group manipulation class.
This overrides the following methods from the generic class:-
- group_del()
- group_add()
- group_mod()
"""
platform = 'NetBSD'
distribution = None
GROUPFILE = '/etc/group'
def group_del(self):
cmd = [self.module.get_bin_path('groupdel', True), self.name]
return self.execute_command(cmd)
def group_add(self, **kwargs):
cmd = [self.module.get_bin_path('groupadd', True)]
if self.gid is not None:
cmd.append('-g')
cmd.append(str(self.gid))
if self.non_unique:
cmd.append('-o')
cmd.append(self.name)
return self.execute_command(cmd)
def group_mod(self, **kwargs):
cmd = [self.module.get_bin_path('groupmod', True)]
info = self.group_info()
if self.gid is not None and int(self.gid) != info[2]:
cmd.append('-g')
cmd.append(str(self.gid))
if self.non_unique:
cmd.append('-o')
if len(cmd) == 1:
return (None, '', '')
if self.module.check_mode:
return (0, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
# ===========================================
class BusyBoxGroup(Group):
"""
BusyBox group manipulation class for systems that have addgroup and delgroup.
It overrides the following methods:
- group_add()
- group_del()
- group_mod()
"""
def group_add(self, **kwargs):
cmd = [self.module.get_bin_path('addgroup', True)]
if self.gid is not None:
cmd.extend(['-g', str(self.gid)])
if self.system:
cmd.append('-S')
cmd.append(self.name)
return self.execute_command(cmd)
def group_del(self):
cmd = [self.module.get_bin_path('delgroup', True), self.name]
return self.execute_command(cmd)
def group_mod(self, **kwargs):
# Since there is no groupmod command, modify /etc/group directly
info = self.group_info()
if self.gid is not None and self.gid != info[2]:
with open('/etc/group', 'rb') as f:
b_groups = f.read()
b_name = to_bytes(self.name)
b_current_group_string = b'%s:x:%d:' % (b_name, info[2])
b_new_group_string = b'%s:x:%d:' % (b_name, self.gid)
if b':%d:' % self.gid in b_groups:
self.module.fail_json(msg="gid '{gid}' in use".format(gid=self.gid))
if self.module.check_mode:
return 0, '', ''
b_new_groups = b_groups.replace(b_current_group_string, b_new_group_string)
with open('/etc/group', 'wb') as f:
f.write(b_new_groups)
return 0, '', ''
return None, '', ''
class AlpineGroup(BusyBoxGroup):
platform = 'Linux'
distribution = 'Alpine'
def main():
module = AnsibleModule(
argument_spec=dict(
state=dict(type='str', default='present', choices=['absent', 'present']),
name=dict(type='str', required=True),
force=dict(type='bool', default=False),
gid=dict(type='int'),
system=dict(type='bool', default=False),
local=dict(type='bool', default=False),
non_unique=dict(type='bool', default=False),
),
supports_check_mode=True,
required_if=[
['non_unique', True, ['gid']],
],
)
if module.params['force'] and module.params['local']:
module.fail_json(msg='force is not a valid option for local, force=True and local=True are mutually exclusive')
group = Group(module)
module.debug('Group instantiated - platform %s' % group.platform)
if group.distribution:
module.debug('Group instantiated - distribution %s' % group.distribution)
rc = None
out = ''
err = ''
result = {}
result['name'] = group.name
result['state'] = group.state
if group.state == 'absent':
if group.group_exists():
if module.check_mode:
module.exit_json(changed=True)
(rc, out, err) = group.group_del()
if rc != 0:
module.fail_json(name=group.name, msg=err)
elif group.state == 'present':
if not group.group_exists():
if module.check_mode:
module.exit_json(changed=True)
(rc, out, err) = group.group_add(gid=group.gid, system=group.system)
else:
(rc, out, err) = group.group_mod(gid=group.gid)
if rc is not None and rc != 0:
module.fail_json(name=group.name, msg=err)
if rc is None:
result['changed'] = False
else:
result['changed'] = True
if out:
result['stdout'] = out
if err:
result['stderr'] = err
if group.group_exists():
info = group.group_info()
result['system'] = group.system
result['gid'] = info[2]
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,868 |
Apt module "fail_on_autoremove=yes" sends invalid switch to aptitude
|
### Summary
The apt module runs `aptitude` on the target host. When `fail_on_autoremove=yes`, the `--no-remove` switch is added to the command line. This switch is valid for `apt`, but not for `aptitude`, so the command fails.
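Since only `apt-get` documents a `--no-remove` switch, one direction for a fix is to make the flag tool-aware. A minimal sketch, assuming a `use_apt_get` boolean the module has already resolved (the helper itself is hypothetical):

```python
def autoremove_flag(fail_on_autoremove, use_apt_get):
    # Only apt-get understands --no-remove; aptitude 0.8.x rejects it
    # with "unrecognized option", so emit nothing in that case.
    if fail_on_autoremove and use_apt_get:
        return '--no-remove'
    return ''
```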
### Issue Type
Bug Report
### Component Name
apt
### Ansible Version
```console
$ ansible --version
[DEPRECATION WARNING]: Ansible will require Python 3.8 or newer on the controller starting with Ansible 2.12. Current version:
3.7.3 (default, Jan 22 2021, 20:04:44) [GCC 8.3.0]. This feature will be removed from ansible-core in version 2.12. Deprecation
warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
ansible [core 2.11.9]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/dist-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.7.3 (default, Jan 22 2021, 20:04:44) [GCC 8.3.0]
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed -t all
[DEPRECATION WARNING]: Ansible will require Python 3.8 or newer on the controller starting with Ansible 2.12. Current version:
3.7.3 (default, Jan 22 2021, 20:04:44) [GCC 8.3.0]. This feature will be removed from ansible-core in version 2.12. Deprecation
warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
usage: ansible-config [-h] [--version] [-v] {list,dump,view} ...
ansible-config: error: unrecognized arguments: -t all
usage: ansible-config [-h] [--version] [-v] {list,dump,view} ...
View ansible configuration.
positional arguments:
{list,dump,view}
list Print all config options
dump Dump configuration
view View configuration file
optional arguments:
--version show program's version number, config file location,
configured module search path, module location, executable
location and exit
-h, --help show this help message and exit
-v, --verbose verbose mode (-vvv for more, -vvvv to enable connection
debugging)
```
### OS / Environment
Debian 10, aptitude 0.8.11
### Steps to Reproduce
```
ansible 'all:!internal_windows:!ucs' -b -m apt -a "update_cache=yes upgrade=yes fail_on_autoremove=yes"
```
### Expected Results
Expected target servers' apt packages to be upgraded.
### Actual Results
```console
tallis | FAILED! => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"msg": "'/usr/bin/aptitude safe-upgrade' failed: /usr/bin/aptitude: unrecognized option '--no-remove'\n",
"rc": 1,
"stdout": "aptitude 0.8.11\nUsage: aptitude [-S fname] [-u|-i]\n aptitude [options] <action> ...\n\nActions (if none is specified, aptitude will enter interactive mode):\n\n install Install/upgrade packages.\n remove Remove packages.\n purge Remove packages and their configuration files.\n hold Place packages on hold.\n unhold Cancel a hold command for a package.\n markauto Mark packages as having been automatically installed.\n unmarkauto Mark packages as having been manually installed.\n forbid-version Forbid aptitude from upgrading to a specific package version.\n update
Download lists of new/upgradable packages.\n safe-upgrade Perform a safe upgrade.\n full-upgrade Perform an upgrade, possibly installing and removing packages.\n build-dep Install the build-dependencies of packages.\n forget-new Forget what packages are \"new\".\n search Search for a package by name and/or expression.\n show Display detailed info about a package.\n showsrc Display detailed info about a source package (apt wrapper).\n versions Displays the versions of specified packages.\n clean Erase downloaded package files.\n autoclean Erase old downloaded package files.\n changelog View a package's changelog.\n download Download the .deb file for a package (apt wrapper).\n source Download source package (apt wrapper).\n reinstall Reinstall a currently installed package.\n why Explain why a particular package should be installed.\n why-not Explain why a particular package cannot be installed.\n\n add-user-tag Add user tag to packages/patterns.\n remove-user-tag Remove user tag from packages/patterns.\n\nOptions:\n -h This help text.\n --no-gui Do not use the GTK GUI even if available.\n -s Simulate actions, but do not actually perform them.\n -d Only download packages, do not install or remove anything.\n -P Always prompt for confirmation of actions.\n -y Assume that the answer to simple yes/no questions is 'yes'.\n -F format Specify a format for displaying search results; see the manual.\n -O order Specify how search results should be sorted; see the manual.\n -w width Specify the display width for formatting search results.\n -f Aggressively try to fix broken packages.\n -V Show which versions of packages are to be installed.\n -D Show the dependencies of automatically changed packages.\n -Z Show the change in installed size of each package.\n -v Display extra information. (may be supplied multiple times).\n -t [release] Set the release from which packages should be installed.\n -q In command-line mode, suppress the incremental progress\n indicators.\n -o key=val Directly set the configuration option named 'key'.\n --with(out)-recommends Specify whether or not to treat recommends as\n strong dependencies.\n -S fname Read the aptitude extended status info from fname.\n -u Download new package lists on startup.\n (terminal interface only)\n -i Perform an install run on startup.\n (terminal interface only)\n\nSee the manual page for a complete list and description of all the options.\n\nThis aptitude does not have Super Cow Powers.\n",
"stdout_lines": [
"aptitude 0.8.11",
"Usage: aptitude [-S fname] [-u|-i]",
" aptitude [options] <action> ...",
"",
"Actions (if none is specified, aptitude will enter interactive mode):",
"",
" install Install/upgrade packages.",
" remove Remove packages.",
" purge Remove packages and their configuration files.",
" hold Place packages on hold.",
" unhold Cancel a hold command for a package.",
" markauto Mark packages as having been automatically installed.",
" unmarkauto Mark packages as having been manually installed.",
" forbid-version Forbid aptitude from upgrading to a specific package version.",
" update Download lists of new/upgradable packages.",
" safe-upgrade Perform a safe upgrade.",
" full-upgrade Perform an upgrade, possibly installing and removing packages.",
" build-dep Install the build-dependencies of packages.",
" forget-new Forget what packages are \"new\".",
" search Search for a package by name and/or expression.",
" show Display detailed info about a package.",
" showsrc Display detailed info about a source package (apt wrapper).",
" versions Displays the versions of specified packages.",
" clean Erase downloaded package files.",
" autoclean Erase old downloaded package files.",
" changelog View a package's changelog.",
" download Download the .deb file for a package (apt wrapper).",
" source Download source package (apt wrapper).",
" reinstall Reinstall a currently installed package.",
" why Explain why a particular package should be installed.",
" why-not Explain why a particular package cannot be installed.",
"",
" add-user-tag Add user tag to packages/patterns.",
" remove-user-tag Remove user tag from packages/patterns.",
"",
"Options:",
" -h This help text.",
" --no-gui Do not use the GTK GUI even if available.",
" -s Simulate actions, but do not actually perform them.",
" -d Only download packages, do not install or remove anything.",
" -P Always prompt for confirmation of actions.",
" -y Assume that the answer to simple yes/no questions is 'yes'.",
" -F format Specify a format for displaying search results; see the manual.",
" -O order Specify how search results should be sorted; see the manual.",
" -w width Specify the display width for formatting search results.",
" -f Aggressively try to fix broken packages.",
" -V Show which versions of packages are to be installed.",
" -D Show the dependencies of automatically changed packages.",
" -Z Show the change in installed size of each package.",
" -v Display extra information. (may be supplied multiple times).",
" -t [release] Set the release from which packages should be installed.",
" -q In command-line mode, suppress the incremental progress",
" indicators.",
" -o key=val Directly set the configuration option named 'key'.",
" --with(out)-recommends Specify whether or not to treat recommends as",
" strong dependencies.",
" -S fname Read the aptitude extended status info from fname.",
" -u Download new package lists on startup.",
" (terminal interface only)",
" -i Perform an install run on startup.",
" (terminal interface only)",
"",
"See the manual page for a complete list and description of all the options.",
"",
"This aptitude does not have Super Cow Powers."
]
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77868
|
https://github.com/ansible/ansible/pull/81445
|
5deb4ee99118b2b1990d45bd06c7a23a147861f6
|
2e6d849bdb80364b5229f4b9190935e84ee9bfe8
| 2022-05-20T17:17:22Z |
python
| 2023-08-23T15:42:03Z |
changelogs/fragments/apt_fail_on_autoremove.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,868 |
Apt module "fail_on_autoremove=yes" sends invalid switch to aptitude
|
### Summary
The apt module runs `aptitude` on the target host. When `fail_on_autoremove=yes`, the `--no-remove` switch is added to the command line. This switch is valid for `apt`, but not for `aptitude`, so the command fails.
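An alternative direction is to validate the combination up front and degrade gracefully. A minimal sketch, assuming an `AnsibleModule` instance and a resolved `use_apt_get` flag (the helper itself is hypothetical):

```python
def effective_fail_on_autoremove(module, use_apt_get):
    # Warn and ignore the option rather than passing aptitude a switch
    # it does not understand.
    if module.params['fail_on_autoremove'] and not use_apt_get:
        module.warn('fail_on_autoremove is not supported by aptitude; ignoring it')
        return False
    return module.params['fail_on_autoremove']
```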
### Issue Type
Bug Report
### Component Name
apt
### Ansible Version
```console
$ ansible --version
[DEPRECATION WARNING]: Ansible will require Python 3.8 or newer on the controller starting with Ansible 2.12. Current version:
3.7.3 (default, Jan 22 2021, 20:04:44) [GCC 8.3.0]. This feature will be removed from ansible-core in version 2.12. Deprecation
warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
ansible [core 2.11.9]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/dist-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.7.3 (default, Jan 22 2021, 20:04:44) [GCC 8.3.0]
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed -t all
[DEPRECATION WARNING]: Ansible will require Python 3.8 or newer on the controller starting with Ansible 2.12. Current version:
3.7.3 (default, Jan 22 2021, 20:04:44) [GCC 8.3.0]. This feature will be removed from ansible-core in version 2.12. Deprecation
warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
usage: ansible-config [-h] [--version] [-v] {list,dump,view} ...
ansible-config: error: unrecognized arguments: -t all
usage: ansible-config [-h] [--version] [-v] {list,dump,view} ...
View ansible configuration.
positional arguments:
{list,dump,view}
list Print all config options
dump Dump configuration
view View configuration file
optional arguments:
--version show program's version number, config file location,
configured module search path, module location, executable
location and exit
-h, --help show this help message and exit
-v, --verbose verbose mode (-vvv for more, -vvvv to enable connection
debugging)
```
### OS / Environment
Debian 10, aptitude 0.8.11
### Steps to Reproduce
```
ansible 'all:!internal_windows:!ucs' -b -m apt -a "update_cache=yes upgrade=yes fail_on_autoremove=yes"
```
### Expected Results
Expected target servers' apt packages to be upgraded.
### Actual Results
```console
tallis | FAILED! => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"msg": "'/usr/bin/aptitude safe-upgrade' failed: /usr/bin/aptitude: unrecognized option '--no-remove'\n",
"rc": 1,
"stdout": "aptitude 0.8.11\nUsage: aptitude [-S fname] [-u|-i]\n aptitude [options] <action> ...\n\nActions (if none is specified, aptitude will enter interactive mode):\n\n install Install/upgrade packages.\n remove Remove packages.\n purge Remove packages and their configuration files.\n hold Place packages on hold.\n unhold Cancel a hold command for a package.\n markauto Mark packages as having been automatically installed.\n unmarkauto Mark packages as having been manually installed.\n forbid-version Forbid aptitude from upgrading to a specific package version.\n update
Download lists of new/upgradable packages.\n safe-upgrade Perform a safe upgrade.\n full-upgrade Perform an upgrade, possibly installing and removing packages.\n build-dep Install the build-dependencies of packages.\n forget-new Forget what packages are \"new\".\n search Search for a package by name and/or expression.\n show Display detailed info about a package.\n showsrc Display detailed info about a source package (apt wrapper).\n versions Displays the versions of specified packages.\n clean Erase downloaded package files.\n autoclean Erase old downloaded package files.\n changelog View a package's changelog.\n download Download the .deb file for a package (apt wrapper).\n source Download source package (apt wrapper).\n reinstall Reinstall a currently installed package.\n why Explain why a particular package should be installed.\n why-not Explain why a particular package cannot be installed.\n\n add-user-tag Add user tag to packages/patterns.\n remove-user-tag Remove user tag from packages/patterns.\n\nOptions:\n -h This help text.\n --no-gui Do not use the GTK GUI even if available.\n -s Simulate actions, but do not actually perform them.\n -d Only download packages, do not install or remove anything.\n -P Always prompt for confirmation of actions.\n -y Assume that the answer to simple yes/no questions is 'yes'.\n -F format Specify a format for displaying search results; see the manual.\n -O order Specify how search results should be sorted; see the manual.\n -w width Specify the display width for formatting search results.\n -f Aggressively try to fix broken packages.\n -V Show which versions of packages are to be installed.\n -D Show the dependencies of automatically changed packages.\n -Z Show the change in installed size of each package.\n -v Display extra information. (may be supplied multiple times).\n -t [release] Set the release from which packages should be installed.\n -q In command-line mode, suppress the incremental progress\n indicators.\n -o key=val Directly set the configuration option named 'key'.\n --with(out)-recommends Specify whether or not to treat recommends as\n strong dependencies.\n -S fname Read the aptitude extended status info from fname.\n -u Download new package lists on startup.\n (terminal interface only)\n -i Perform an install run on startup.\n (terminal interface only)\n\nSee the manual page for a complete list and description of all the options.\n\nThis aptitude does not have Super Cow Powers.\n",
"stdout_lines": [
"aptitude 0.8.11",
"Usage: aptitude [-S fname] [-u|-i]",
" aptitude [options] <action> ...",
"",
"Actions (if none is specified, aptitude will enter interactive mode):",
"",
" install Install/upgrade packages.",
" remove Remove packages.",
" purge Remove packages and their configuration files.",
" hold Place packages on hold.",
" unhold Cancel a hold command for a package.",
" markauto Mark packages as having been automatically installed.",
" unmarkauto Mark packages as having been manually installed.",
" forbid-version Forbid aptitude from upgrading to a specific package version.",
" update Download lists of new/upgradable packages.",
" safe-upgrade Perform a safe upgrade.",
" full-upgrade Perform an upgrade, possibly installing and removing packages.",
" build-dep Install the build-dependencies of packages.",
" forget-new Forget what packages are \"new\".",
" search Search for a package by name and/or expression.",
" show Display detailed info about a package.",
" showsrc Display detailed info about a source package (apt wrapper).",
" versions Displays the versions of specified packages.",
" clean Erase downloaded package files.",
" autoclean Erase old downloaded package files.",
" changelog View a package's changelog.",
" download Download the .deb file for a package (apt wrapper).",
" source Download source package (apt wrapper).",
" reinstall Reinstall a currently installed package.",
" why Explain why a particular package should be installed.",
" why-not Explain why a particular package cannot be installed.",
"",
" add-user-tag Add user tag to packages/patterns.",
" remove-user-tag Remove user tag from packages/patterns.",
"",
"Options:",
" -h This help text.",
" --no-gui Do not use the GTK GUI even if available.",
" -s Simulate actions, but do not actually perform them.",
" -d Only download packages, do not install or remove anything.",
" -P Always prompt for confirmation of actions.",
" -y Assume that the answer to simple yes/no questions is 'yes'.",
" -F format Specify a format for displaying search results; see the manual.",
" -O order Specify how search results should be sorted; see the manual.",
" -w width Specify the display width for formatting search results.",
" -f Aggressively try to fix broken packages.",
" -V Show which versions of packages are to be installed.",
" -D Show the dependencies of automatically changed packages.",
" -Z Show the change in installed size of each package.",
" -v Display extra information. (may be supplied multiple times).",
" -t [release] Set the release from which packages should be installed.",
" -q In command-line mode, suppress the incremental progress",
" indicators.",
" -o key=val Directly set the configuration option named 'key'.",
" --with(out)-recommends Specify whether or not to treat recommends as",
" strong dependencies.",
" -S fname Read the aptitude extended status info from fname.",
" -u Download new package lists on startup.",
" (terminal interface only)",
" -i Perform an install run on startup.",
" (terminal interface only)",
"",
"See the manual page for a complete list and description of all the options.",
"",
"This aptitude does not have Super Cow Powers."
]
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77868
|
https://github.com/ansible/ansible/pull/81445
|
5deb4ee99118b2b1990d45bd06c7a23a147861f6
|
2e6d849bdb80364b5229f4b9190935e84ee9bfe8
| 2022-05-20T17:17:22Z |
python
| 2023-08-23T15:42:03Z |
lib/ansible/modules/apt.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Flowroute LLC
# Written by Matthew Williams <[email protected]>
# Based on yum module written by Seth Vidal <skvidal at fedoraproject.org>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: apt
short_description: Manages apt-packages
description:
- Manages I(apt) packages (such as for Debian/Ubuntu).
version_added: "0.0.2"
options:
name:
description:
- A list of package names, like V(foo), or package specifier with version, like V(foo=1.0) or V(foo>=1.0).
Name wildcards (fnmatch) like V(apt*) and version wildcards like V(foo=1.0*) are also supported.
aliases: [ package, pkg ]
type: list
elements: str
state:
description:
- Indicates the desired package state. V(latest) ensures that the latest version is installed. V(build-dep) ensures the package build dependencies
are installed. V(fixed) attempts to correct a system with broken dependencies in place.
type: str
default: present
choices: [ absent, build-dep, latest, present, fixed ]
update_cache:
description:
- Run the equivalent of C(apt-get update) before the operation. Can be run as part of the package installation or as a separate step.
- Default is not to update the cache.
aliases: [ update-cache ]
type: bool
update_cache_retries:
description:
- Amount of retries if the cache update fails. Also see O(update_cache_retry_max_delay).
type: int
default: 5
version_added: '2.10'
update_cache_retry_max_delay:
description:
- Use an exponential backoff delay for each retry (see O(update_cache_retries)) up to this max delay in seconds.
type: int
default: 12
version_added: '2.10'
cache_valid_time:
description:
- Update the apt cache if it is older than the O(cache_valid_time). This option is set in seconds.
- As of Ansible 2.4, if explicitly set, this sets O(update_cache=yes).
type: int
default: 0
purge:
description:
- Will force purging of configuration files if O(state=absent) or O(autoremove=yes).
type: bool
default: 'no'
default_release:
description:
- Corresponds to the C(-t) option for I(apt) and sets pin priorities
aliases: [ default-release ]
type: str
install_recommends:
description:
- Corresponds to the C(--no-install-recommends) option for I(apt). V(true) installs recommended packages. V(false) does not install
recommended packages. By default, Ansible will use the same defaults as the operating system. Suggested packages are never installed.
aliases: [ install-recommends ]
type: bool
force:
description:
- 'Corresponds to the C(--force-yes) to I(apt-get) and implies O(allow_unauthenticated=yes) and O(allow_downgrade=yes)'
- "This option will disable checking both the packages' signatures and the certificates of the
web servers they are downloaded from."
- 'This option *is not* the equivalent of passing the C(-f) flag to I(apt-get) on the command line'
- '**This is a destructive operation with the potential to destroy your system, and it should almost never be used.**
Please also see C(man apt-get) for more information.'
type: bool
default: 'no'
clean:
description:
- Run the equivalent of C(apt-get clean) to clear out the local repository of retrieved package files. It removes everything but
the lock file from /var/cache/apt/archives/ and /var/cache/apt/archives/partial/.
- Can be run as part of the package installation (clean runs before install) or as a separate step.
type: bool
default: 'no'
version_added: "2.13"
allow_unauthenticated:
description:
- Ignore if packages cannot be authenticated. This is useful for bootstrapping environments that manage their own apt-key setup.
- 'O(allow_unauthenticated) is only supported with O(state): V(install)/V(present)'
aliases: [ allow-unauthenticated ]
type: bool
default: 'no'
version_added: "2.1"
allow_downgrade:
description:
- Corresponds to the C(--allow-downgrades) option for I(apt).
- This option enables the named package and version to replace an already installed higher version of that package.
- Note that setting O(allow_downgrade=true) can make this module behave in a non-idempotent way.
- (The task could end up with a set of packages that does not match the complete list of specified packages to install).
aliases: [ allow-downgrade, allow_downgrades, allow-downgrades ]
type: bool
default: 'no'
version_added: "2.12"
allow_change_held_packages:
description:
- Allows changing the version of a package which is on the apt hold list
type: bool
default: 'no'
version_added: '2.13'
upgrade:
description:
- If yes or safe, performs an aptitude safe-upgrade.
- If full, performs an aptitude full-upgrade.
- If dist, performs an apt-get dist-upgrade.
- 'Note: This does not upgrade a specific package, use state=latest for that.'
- 'Note: Since 2.4, apt-get is used as a fall-back if aptitude is not present.'
version_added: "1.1"
choices: [ dist, full, 'no', safe, 'yes' ]
default: 'no'
type: str
dpkg_options:
description:
- Add dpkg options to apt command. Defaults to '-o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold"'
- Options should be supplied as comma separated list
default: force-confdef,force-confold
type: str
deb:
description:
- Path to a .deb package on the remote machine.
- If C(://) is in the path, Ansible will attempt to download the deb before installing. (Version added 2.1)
- Requires the C(xz-utils) package to extract the control file of the deb package to install.
type: path
required: false
version_added: "1.6"
autoremove:
description:
- If V(true), remove unused dependency packages for all module states except V(build-dep). It can also be used as the only option.
- Previous to version 2.4, autoclean was also an alias for autoremove, now it is its own separate command. See documentation for further information.
type: bool
default: 'no'
version_added: "2.1"
autoclean:
description:
- If V(true), cleans the local repository of retrieved package files that can no longer be downloaded.
type: bool
default: 'no'
version_added: "2.4"
policy_rc_d:
description:
- Force the exit code of /usr/sbin/policy-rc.d.
- For example, if I(policy_rc_d=101) the installed package will not trigger a service start.
- If /usr/sbin/policy-rc.d already exists, it is backed up and restored after the package installation.
- If V(null), the /usr/sbin/policy-rc.d isn't created/changed.
type: int
default: null
version_added: "2.8"
only_upgrade:
description:
- Only upgrade a package if it is already installed.
type: bool
default: 'no'
version_added: "2.1"
fail_on_autoremove:
description:
- 'Corresponds to the C(--no-remove) option for C(apt).'
- 'If V(true), the task fails rather than allowing any packages to be removed.'
- 'O(fail_on_autoremove) is supported with every O(state) except V(absent).'
type: bool
default: 'no'
version_added: "2.11"
force_apt_get:
description:
- Force usage of apt-get instead of aptitude
type: bool
default: 'no'
version_added: "2.4"
lock_timeout:
description:
- How many seconds will this action wait to acquire a lock on the apt db.
- Sometimes there is a transitory lock and this will retry at least until timeout is hit.
type: int
default: 60
version_added: "2.12"
requirements:
- python-apt (python 2)
- python3-apt (python 3)
- aptitude (before 2.4)
author: "Matthew Williams (@mgwilliams)"
extends_documentation_fragment: action_common_attributes
attributes:
check_mode:
support: full
diff_mode:
support: full
platform:
platforms: debian
notes:
- Three of the upgrade modes (V(full), V(safe) and its alias V(true)) required C(aptitude) up to 2.3; since 2.4, C(apt-get) is used as a fall-back.
- In most cases, packages installed with apt will start newly installed services by default. Most distributions have mechanisms to avoid this.
For example, when installing Postgresql-9.5 in Debian 9, creating an executable shell script (/usr/sbin/policy-rc.d) that returns
a code of 101 will stop Postgresql 9.5 from starting up after install. Remove the file or remove its execute permission afterwards.
- The apt-get commandline supports implicit regex matches here but we do not, because it can let typos through more easily
(if you typo C(foo) as C(fo), apt-get would install packages that have "fo" in their name with a warning and a prompt for the user.
Since we don't have warnings and prompts before installing, we disallow this. Use an explicit fnmatch pattern if you want wildcarding).
- When used with a C(loop:) each package will be processed individually, it is much more efficient to pass the list directly to the O(name) option.
- When O(default_release) is used, an implicit priority of 990 is used. This is the same behavior as C(apt-get -t).
- When an exact version is specified, an implicit priority of 1001 is used.
'''
EXAMPLES = '''
- name: Install apache httpd (state=present is optional)
ansible.builtin.apt:
name: apache2
state: present
- name: Update repositories cache and install "foo" package
ansible.builtin.apt:
name: foo
update_cache: yes
- name: Remove "foo" package
ansible.builtin.apt:
name: foo
state: absent
- name: Install the package "foo"
ansible.builtin.apt:
name: foo
- name: Install a list of packages
ansible.builtin.apt:
pkg:
- foo
- foo-tools
- name: Install the version '1.00' of package "foo"
ansible.builtin.apt:
name: foo=1.00
- name: Update the repository cache and update package "nginx" to latest version using default release squeeze-backport
ansible.builtin.apt:
name: nginx
state: latest
default_release: squeeze-backports
update_cache: yes
- name: Install the version '1.18.0' of package "nginx" and allow potential downgrades
ansible.builtin.apt:
name: nginx=1.18.0
state: present
allow_downgrade: yes
- name: Install zfsutils-linux with ensuring conflicted packages (e.g. zfs-fuse) will not be removed.
ansible.builtin.apt:
name: zfsutils-linux
state: latest
fail_on_autoremove: yes
- name: Install latest version of "openjdk-6-jdk" ignoring "install-recommends"
ansible.builtin.apt:
name: openjdk-6-jdk
state: latest
install_recommends: no
- name: Update all packages to their latest version
ansible.builtin.apt:
name: "*"
state: latest
- name: Upgrade the OS (apt-get dist-upgrade)
ansible.builtin.apt:
upgrade: dist
- name: Run the equivalent of "apt-get update" as a separate step
ansible.builtin.apt:
update_cache: yes
- name: Only run "update_cache=yes" if the last one is more than 3600 seconds ago
ansible.builtin.apt:
update_cache: yes
cache_valid_time: 3600
- name: Pass options to dpkg on run
ansible.builtin.apt:
upgrade: dist
update_cache: yes
dpkg_options: 'force-confold,force-confdef'
- name: Install a .deb package
ansible.builtin.apt:
deb: /tmp/mypackage.deb
- name: Install the build dependencies for package "foo"
ansible.builtin.apt:
pkg: foo
state: build-dep
- name: Install a .deb package from the internet
ansible.builtin.apt:
deb: https://example.com/python-ppq_0.1-1_all.deb
- name: Remove useless packages from the cache
ansible.builtin.apt:
autoclean: yes
- name: Remove dependencies that are no longer required
ansible.builtin.apt:
autoremove: yes
- name: Remove dependencies that are no longer required and purge their configuration files
ansible.builtin.apt:
autoremove: yes
purge: true
- name: Run the equivalent of "apt-get clean" as a separate step
apt:
clean: yes
'''
RETURN = '''
cache_updated:
description: if the cache was updated or not
returned: success, in some cases
type: bool
sample: True
cache_update_time:
description: time of the last cache update (0 if unknown)
returned: success, in some cases
type: int
sample: 1425828348000
stdout:
description: output from apt
returned: success, when needed
type: str
sample: |-
Reading package lists...
Building dependency tree...
Reading state information...
The following extra packages will be installed:
apache2-bin ...
stderr:
description: error output from apt
returned: success, when needed
type: str
sample: "AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to ..."
''' # NOQA
# added to stave off future warnings about apt api
import warnings
warnings.filterwarnings('ignore', "apt API not stable yet", FutureWarning)
import datetime
import fnmatch
import locale as locale_module
import os
import random
import re
import shutil
import sys
import tempfile
import time
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.common.respawn import has_respawned, probe_interpreters_for_module, respawn_module
from ansible.module_utils.common.text.converters import to_native, to_text
from ansible.module_utils.six import PY3, string_types
from ansible.module_utils.urls import fetch_file
DPKG_OPTIONS = 'force-confdef,force-confold'
APT_GET_ZERO = "\n0 upgraded, 0 newly installed"
APTITUDE_ZERO = "\n0 packages upgraded, 0 newly installed"
APT_LISTS_PATH = "/var/lib/apt/lists"
APT_UPDATE_SUCCESS_STAMP_PATH = "/var/lib/apt/periodic/update-success-stamp"
APT_MARK_INVALID_OP = 'Invalid operation'
APT_MARK_INVALID_OP_DEB6 = 'Usage: apt-mark [options] {markauto|unmarkauto} packages'
CLEAN_OP_CHANGED_STR = dict(
autoremove='The following packages will be REMOVED',
# "Del python3-q 2.4-1 [24 kB]"
autoclean='Del ',
)
HAS_PYTHON_APT = False
try:
import apt
import apt.debfile
import apt_pkg
HAS_PYTHON_APT = True
except ImportError:
apt = apt_pkg = None
class PolicyRcD(object):
"""
This class is a context manager for the /usr/sbin/policy-rc.d file.
    It allows the user to prevent dpkg from starting the corresponding service when installing
    a package.
https://people.debian.org/~hmh/invokerc.d-policyrc.d-specification.txt
"""
def __init__(self, module):
# we need the module for later use (eg. fail_json)
self.m = module
# if policy_rc_d is null then we don't need to modify policy-rc.d
if self.m.params['policy_rc_d'] is None:
return
# if the /usr/sbin/policy-rc.d already exists
# we will back it up during package installation
# then restore it
if os.path.exists('/usr/sbin/policy-rc.d'):
self.backup_dir = tempfile.mkdtemp(prefix="ansible")
else:
self.backup_dir = None
def __enter__(self):
"""
This method will be called when we enter the context, before we call `apt-get β¦`
"""
# if policy_rc_d is null then we don't need to modify policy-rc.d
if self.m.params['policy_rc_d'] is None:
return
# if the /usr/sbin/policy-rc.d already exists we back it up
if self.backup_dir:
try:
shutil.move('/usr/sbin/policy-rc.d', self.backup_dir)
except Exception:
self.m.fail_json(msg="Fail to move /usr/sbin/policy-rc.d to %s" % self.backup_dir)
# we write /usr/sbin/policy-rc.d so it always exits with code policy_rc_d
try:
with open('/usr/sbin/policy-rc.d', 'w') as policy_rc_d:
policy_rc_d.write('#!/bin/sh\nexit %d\n' % self.m.params['policy_rc_d'])
os.chmod('/usr/sbin/policy-rc.d', 0o0755)
except Exception:
self.m.fail_json(msg="Failed to create or chmod /usr/sbin/policy-rc.d")
def __exit__(self, type, value, traceback):
"""
This method will be called when we exit the context, after `apt-get β¦` is done
"""
# if policy_rc_d is null then we don't need to modify policy-rc.d
if self.m.params['policy_rc_d'] is None:
return
if self.backup_dir:
# if /usr/sbin/policy-rc.d already exists before the call to __enter__
# we restore it (from the backup done in __enter__)
try:
shutil.move(os.path.join(self.backup_dir, 'policy-rc.d'),
'/usr/sbin/policy-rc.d')
os.rmdir(self.backup_dir)
except Exception:
self.m.fail_json(msg="Fail to move back %s to /usr/sbin/policy-rc.d"
% os.path.join(self.backup_dir, 'policy-rc.d'))
else:
# if there wasn't a /usr/sbin/policy-rc.d file before the call to __enter__
# we just remove the file
try:
os.remove('/usr/sbin/policy-rc.d')
except Exception:
self.m.fail_json(msg="Fail to remove /usr/sbin/policy-rc.d (after package manipulation)")
def package_split(pkgspec):
    parts = re.split(r'(>?=)', pkgspec, maxsplit=1)
if len(parts) > 1:
return parts
return parts[0], None, None
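# A minimal illustration of package_split (sample specs are assumptions):
#   package_split('foo')      -> ('foo', None, None)
#   package_split('foo=1.0')  -> ['foo', '=', '1.0']
#   package_split('foo>=1.0') -> ['foo', '>=', '1.0']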
def package_version_compare(version, other_version):
try:
return apt_pkg.version_compare(version, other_version)
except AttributeError:
return apt_pkg.VersionCompare(version, other_version)
def package_best_match(pkgname, version_cmp, version, release, cache):
policy = apt_pkg.Policy(cache)
policy.read_pinfile(apt_pkg.config.find_file("Dir::Etc::preferences"))
policy.read_pindir(apt_pkg.config.find_file("Dir::Etc::preferencesparts"))
if release:
# 990 is the priority used in `apt-get -t`
policy.create_pin('Release', pkgname, release, 990)
if version_cmp == "=":
# Installing a specific version from command line overrides all pinning
        # We don't mimic this exactly, but instead set a priority which is higher than all APT built-in pin priorities.
policy.create_pin('Version', pkgname, version, 1001)
pkg = cache[pkgname]
pkgver = policy.get_candidate_ver(pkg)
if not pkgver:
return None
if version_cmp == "=" and not fnmatch.fnmatch(pkgver.ver_str, version):
# Even though we put in a pin policy, it can be ignored if there is no
# possible candidate.
return None
return pkgver.ver_str
def package_status(m, pkgname, version_cmp, version, default_release, cache, state):
"""
:return: A tuple of (installed, installed_version, version_installable, has_files). *installed* indicates whether
the package (regardless of version) is installed. *installed_version* indicates whether the installed package
matches the provided version criteria. *version_installable* provides the latest matching version that can be
installed. In the case of virtual packages where we can't determine an applicable match, True is returned.
*has_files* indicates whether the package has files on the filesystem (even if not installed, meaning a purge is
required).
"""
try:
# get the package from the cache, as well as the
# low-level apt_pkg.Package object which contains
# state fields not directly accessible from the
# higher-level apt.package.Package object.
pkg = cache[pkgname]
ll_pkg = cache._cache[pkgname] # the low-level package object
except KeyError:
if state == 'install':
try:
provided_packages = cache.get_providing_packages(pkgname)
if provided_packages:
# When this is a virtual package satisfied by only
# one installed package, return the status of the target
# package to avoid requesting re-install
if cache.is_virtual_package(pkgname) and len(provided_packages) == 1:
package = provided_packages[0]
installed, installed_version, version_installable, has_files = \
package_status(m, package.name, version_cmp, version, default_release, cache, state='install')
if installed:
return installed, installed_version, version_installable, has_files
# Otherwise return nothing so apt will sort out
# what package to satisfy this with
return False, False, True, False
m.fail_json(msg="No package matching '%s' is available" % pkgname)
except AttributeError:
# python-apt version too old to detect virtual packages
# mark as not installed and let apt-get install deal with it
return False, False, True, False
else:
return False, False, None, False
try:
has_files = len(pkg.installed_files) > 0
except UnicodeDecodeError:
has_files = True
except AttributeError:
has_files = False # older python-apt cannot be used to determine non-purged
try:
package_is_installed = ll_pkg.current_state == apt_pkg.CURSTATE_INSTALLED
except AttributeError: # python-apt 0.7.X has very weak low-level object
try:
# might not be necessary as python-apt post-0.7.X should have current_state property
package_is_installed = pkg.is_installed
except AttributeError:
# assume older version of python-apt is installed
package_is_installed = pkg.isInstalled
version_best = package_best_match(pkgname, version_cmp, version, default_release, cache._cache)
version_is_installed = False
version_installable = None
if package_is_installed:
try:
installed_version = pkg.installed.version
except AttributeError:
installed_version = pkg.installedVersion
if version_cmp == "=":
# check if the version is matched as well
version_is_installed = fnmatch.fnmatch(installed_version, version)
if version_best and installed_version != version_best and fnmatch.fnmatch(version_best, version):
version_installable = version_best
elif version_cmp == ">=":
version_is_installed = apt_pkg.version_compare(installed_version, version) >= 0
if version_best and installed_version != version_best and apt_pkg.version_compare(version_best, version) >= 0:
version_installable = version_best
else:
version_is_installed = True
if version_best and installed_version != version_best:
version_installable = version_best
else:
version_installable = version_best
return package_is_installed, version_is_installed, version_installable, has_files
def expand_dpkg_options(dpkg_options_compressed):
options_list = dpkg_options_compressed.split(',')
dpkg_options = ""
for dpkg_option in options_list:
dpkg_options = '%s -o "Dpkg::Options::=--%s"' \
% (dpkg_options, dpkg_option)
return dpkg_options.strip()
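# A minimal illustration (the input mirrors the DPKG_OPTIONS default):
#   expand_dpkg_options('force-confdef,force-confold')
#   -> '-o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold"'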
def expand_pkgspec_from_fnmatches(m, pkgspec, cache):
# Note: apt-get does implicit regex matching when an exact package name
# match is not found. Something like this:
# matches = [pkg.name for pkg in cache if re.match(pkgspec, pkg.name)]
# (Should also deal with the ':' for multiarch like the fnmatch code below)
#
# We have decided not to do similar implicit regex matching but might take
# a PR to add some sort of explicit regex matching:
# https://github.com/ansible/ansible-modules-core/issues/1258
new_pkgspec = []
if pkgspec:
for pkgspec_pattern in pkgspec:
if not isinstance(pkgspec_pattern, string_types):
m.fail_json(msg="Invalid type for package name, expected string but got %s" % type(pkgspec_pattern))
pkgname_pattern, version_cmp, version = package_split(pkgspec_pattern)
# note that none of these chars is allowed in a (debian) pkgname
if frozenset('*?[]!').intersection(pkgname_pattern):
# handle multiarch pkgnames, the idea is that "apt*" should
# only select native packages. But "apt*:i386" should still work
if ":" not in pkgname_pattern:
# Filter the multiarch packages from the cache only once
try:
pkg_name_cache = _non_multiarch # pylint: disable=used-before-assignment
except NameError:
pkg_name_cache = _non_multiarch = [pkg.name for pkg in cache if ':' not in pkg.name] # noqa: F841
else:
# Create a cache of pkg_names including multiarch only once
try:
pkg_name_cache = _all_pkg_names # pylint: disable=used-before-assignment
except NameError:
pkg_name_cache = _all_pkg_names = [pkg.name for pkg in cache] # noqa: F841
matches = fnmatch.filter(pkg_name_cache, pkgname_pattern)
if not matches:
m.fail_json(msg="No package(s) matching '%s' available" % to_text(pkgname_pattern))
else:
new_pkgspec.extend(matches)
else:
# No wildcards in name
new_pkgspec.append(pkgspec_pattern)
return new_pkgspec
def parse_diff(output):
diff = to_native(output).splitlines()
try:
# check for start marker from aptitude
diff_start = diff.index('Resolving dependencies...')
except ValueError:
try:
# check for start marker from apt-get
diff_start = diff.index('Reading state information...')
except ValueError:
# show everything
diff_start = -1
try:
# check for end marker line from both apt-get and aptitude
diff_end = next(i for i, item in enumerate(diff) if re.match('[0-9]+ (packages )?upgraded', item))
except StopIteration:
diff_end = len(diff)
diff_start += 1
diff_end += 1
return {'prepared': '\n'.join(diff[diff_start:diff_end])}
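# A sketch of parse_diff's contract (the sample lines are assumptions): given
# apt-get output such as
#   Reading state information...
#   The following packages will be upgraded: foo
#   1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
# it returns {'prepared': '...'} holding the lines after the start marker up
# to and including the "N upgraded" summary line.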
def mark_installed_manually(m, packages):
if not packages:
return
apt_mark_cmd_path = m.get_bin_path("apt-mark")
# https://github.com/ansible/ansible/issues/40531
if apt_mark_cmd_path is None:
m.warn("Could not find apt-mark binary, not marking package(s) as manually installed.")
return
cmd = "%s manual %s" % (apt_mark_cmd_path, ' '.join(packages))
rc, out, err = m.run_command(cmd)
if APT_MARK_INVALID_OP in err or APT_MARK_INVALID_OP_DEB6 in err:
cmd = "%s unmarkauto %s" % (apt_mark_cmd_path, ' '.join(packages))
rc, out, err = m.run_command(cmd)
if rc != 0:
m.fail_json(msg="'%s' failed: %s" % (cmd, err), stdout=out, stderr=err, rc=rc)
def install(m, pkgspec, cache, upgrade=False, default_release=None,
install_recommends=None, force=False,
dpkg_options=expand_dpkg_options(DPKG_OPTIONS),
build_dep=False, fixed=False, autoremove=False, fail_on_autoremove=False, only_upgrade=False,
allow_unauthenticated=False, allow_downgrade=False, allow_change_held_packages=False):
pkg_list = []
packages = ""
pkgspec = expand_pkgspec_from_fnmatches(m, pkgspec, cache)
package_names = []
for package in pkgspec:
if build_dep:
# Let apt decide what to install
pkg_list.append("'%s'" % package)
continue
name, version_cmp, version = package_split(package)
package_names.append(name)
installed, installed_version, version_installable, has_files = package_status(m, name, version_cmp, version, default_release, cache, state='install')
if not installed and only_upgrade:
# only_upgrade upgrades packages that are already installed
# since this package is not installed, skip it
continue
if not installed_version and not version_installable:
status = False
data = dict(msg="no available installation candidate for %s" % package)
return (status, data)
if version_installable and ((not installed and not only_upgrade) or upgrade or not installed_version):
if version_installable is not True:
pkg_list.append("'%s=%s'" % (name, version_installable))
elif version:
pkg_list.append("'%s=%s'" % (name, version))
else:
pkg_list.append("'%s'" % name)
elif installed_version and version_installable and version_cmp == "=":
# This happens when the package is installed, a newer version is
# available, and the version is a wildcard that matches both
#
# This is legacy behavior, and isn't documented (in fact it does
            # things the documentation says it shouldn't). It should not be relied
# upon.
pkg_list.append("'%s=%s'" % (name, version))
packages = ' '.join(pkg_list)
if packages:
if force:
force_yes = '--force-yes'
else:
force_yes = ''
if m.check_mode:
check_arg = '--simulate'
else:
check_arg = ''
if autoremove:
autoremove = '--auto-remove'
else:
autoremove = ''
if fail_on_autoremove:
fail_on_autoremove = '--no-remove'
else:
fail_on_autoremove = ''
if only_upgrade:
only_upgrade = '--only-upgrade'
else:
only_upgrade = ''
if fixed:
fixed = '--fix-broken'
else:
fixed = ''
if build_dep:
cmd = "%s -y %s %s %s %s %s %s build-dep %s" % (APT_GET_CMD, dpkg_options, only_upgrade, fixed, force_yes, fail_on_autoremove, check_arg, packages)
else:
cmd = "%s -y %s %s %s %s %s %s %s install %s" % \
(APT_GET_CMD, dpkg_options, only_upgrade, fixed, force_yes, autoremove, fail_on_autoremove, check_arg, packages)
if default_release:
cmd += " -t '%s'" % (default_release,)
if install_recommends is False:
cmd += " -o APT::Install-Recommends=no"
elif install_recommends is True:
cmd += " -o APT::Install-Recommends=yes"
# install_recommends is None uses the OS default
if allow_unauthenticated:
cmd += " --allow-unauthenticated"
if allow_downgrade:
cmd += " --allow-downgrades"
if allow_change_held_packages:
cmd += " --allow-change-held-packages"
with PolicyRcD(m):
rc, out, err = m.run_command(cmd)
if m._diff:
diff = parse_diff(out)
else:
diff = {}
status = True
changed = True
if build_dep:
changed = APT_GET_ZERO not in out
data = dict(changed=changed, stdout=out, stderr=err, diff=diff)
if rc:
status = False
data = dict(msg="'%s' failed: %s" % (cmd, err), stdout=out, stderr=err, rc=rc)
else:
status = True
data = dict(changed=False)
if not build_dep and not m.check_mode:
mark_installed_manually(m, package_names)
return (status, data)
def get_field_of_deb(m, deb_file, field="Version"):
cmd_dpkg = m.get_bin_path("dpkg", True)
cmd = cmd_dpkg + " --field %s %s" % (deb_file, field)
rc, stdout, stderr = m.run_command(cmd)
if rc != 0:
m.fail_json(msg="%s failed" % cmd, stdout=stdout, stderr=stderr)
return to_native(stdout).strip('\n')
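# A minimal illustration (path and values are assumptions; dpkg reads the
# field straight from the .deb control data):
#   get_field_of_deb(m, '/tmp/foo_1.0.0_all.deb', 'Package') -> 'foo'
#   get_field_of_deb(m, '/tmp/foo_1.0.0_all.deb')            -> '1.0.0'  (Version is the default field)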
def install_deb(
m, debs, cache, force, fail_on_autoremove, install_recommends,
allow_unauthenticated,
allow_downgrade,
allow_change_held_packages,
dpkg_options,
):
changed = False
deps_to_install = []
pkgs_to_install = []
for deb_file in debs.split(','):
try:
pkg = apt.debfile.DebPackage(deb_file, cache=apt.Cache())
pkg_name = get_field_of_deb(m, deb_file, "Package")
pkg_version = get_field_of_deb(m, deb_file, "Version")
if hasattr(apt_pkg, 'get_architectures') and len(apt_pkg.get_architectures()) > 1:
pkg_arch = get_field_of_deb(m, deb_file, "Architecture")
pkg_key = "%s:%s" % (pkg_name, pkg_arch)
else:
pkg_key = pkg_name
try:
installed_pkg = apt.Cache()[pkg_key]
installed_version = installed_pkg.installed.version
if package_version_compare(pkg_version, installed_version) == 0:
# Does not need to down-/upgrade, move on to next package
continue
except Exception:
# Must not be installed, continue with installation
pass
# Check if package is installable
if not pkg.check():
if force or ("later version" in pkg._failure_string and allow_downgrade):
pass
else:
m.fail_json(msg=pkg._failure_string)
# add any missing deps to the list of deps we need
# to install so they're all done in one shot
deps_to_install.extend(pkg.missing_deps)
except Exception as e:
m.fail_json(msg="Unable to install package: %s" % to_native(e))
# and add this deb to the list of packages to install
pkgs_to_install.append(deb_file)
# install the deps through apt
retvals = {}
if deps_to_install:
(success, retvals) = install(m=m, pkgspec=deps_to_install, cache=cache,
install_recommends=install_recommends,
fail_on_autoremove=fail_on_autoremove,
allow_unauthenticated=allow_unauthenticated,
allow_downgrade=allow_downgrade,
allow_change_held_packages=allow_change_held_packages,
dpkg_options=expand_dpkg_options(dpkg_options))
if not success:
m.fail_json(**retvals)
changed = retvals.get('changed', False)
if pkgs_to_install:
options = ' '.join(["--%s" % x for x in dpkg_options.split(",")])
if m.check_mode:
options += " --simulate"
if force:
options += " --force-all"
cmd = "dpkg %s -i %s" % (options, " ".join(pkgs_to_install))
with PolicyRcD(m):
rc, out, err = m.run_command(cmd)
if "stdout" in retvals:
stdout = retvals["stdout"] + out
else:
stdout = out
if "diff" in retvals:
diff = retvals["diff"]
if 'prepared' in diff:
diff['prepared'] += '\n\n' + out
else:
diff = parse_diff(out)
if "stderr" in retvals:
stderr = retvals["stderr"] + err
else:
stderr = err
if rc == 0:
m.exit_json(changed=True, stdout=stdout, stderr=stderr, diff=diff)
else:
m.fail_json(msg="%s failed" % cmd, stdout=stdout, stderr=stderr)
else:
m.exit_json(changed=changed, stdout=retvals.get('stdout', ''), stderr=retvals.get('stderr', ''), diff=retvals.get('diff', ''))
def remove(m, pkgspec, cache, purge=False, force=False,
dpkg_options=expand_dpkg_options(DPKG_OPTIONS), autoremove=False,
allow_change_held_packages=False):
pkg_list = []
pkgspec = expand_pkgspec_from_fnmatches(m, pkgspec, cache)
for package in pkgspec:
name, version_cmp, version = package_split(package)
installed, installed_version, upgradable, has_files = package_status(m, name, version_cmp, version, None, cache, state='remove')
if installed_version or (has_files and purge):
pkg_list.append("'%s'" % package)
packages = ' '.join(pkg_list)
if not packages:
m.exit_json(changed=False)
else:
if force:
force_yes = '--force-yes'
else:
force_yes = ''
if purge:
purge = '--purge'
else:
purge = ''
if autoremove:
autoremove = '--auto-remove'
else:
autoremove = ''
if m.check_mode:
check_arg = '--simulate'
else:
check_arg = ''
if allow_change_held_packages:
allow_change_held_packages = '--allow-change-held-packages'
else:
allow_change_held_packages = ''
cmd = "%s -q -y %s %s %s %s %s %s remove %s" % (
APT_GET_CMD,
dpkg_options,
purge,
force_yes,
autoremove,
check_arg,
allow_change_held_packages,
packages
)
with PolicyRcD(m):
rc, out, err = m.run_command(cmd)
if m._diff:
diff = parse_diff(out)
else:
diff = {}
if rc:
m.fail_json(msg="'apt-get remove %s' failed: %s" % (packages, err), stdout=out, stderr=err, rc=rc)
m.exit_json(changed=True, stdout=out, stderr=err, diff=diff)
def cleanup(m, purge=False, force=False, operation=None,
dpkg_options=expand_dpkg_options(DPKG_OPTIONS)):
if operation not in frozenset(['autoremove', 'autoclean']):
raise AssertionError('Expected "autoremove" or "autoclean" cleanup operation, got %s' % operation)
if force:
force_yes = '--force-yes'
else:
force_yes = ''
if purge:
purge = '--purge'
else:
purge = ''
if m.check_mode:
check_arg = '--simulate'
else:
check_arg = ''
cmd = "%s -y %s %s %s %s %s" % (APT_GET_CMD, dpkg_options, purge, force_yes, operation, check_arg)
with PolicyRcD(m):
rc, out, err = m.run_command(cmd)
if m._diff:
diff = parse_diff(out)
else:
diff = {}
if rc:
m.fail_json(msg="'apt-get %s' failed: %s" % (operation, err), stdout=out, stderr=err, rc=rc)
changed = CLEAN_OP_CHANGED_STR[operation] in out
m.exit_json(changed=changed, stdout=out, stderr=err, diff=diff)
def aptclean(m):
clean_rc, clean_out, clean_err = m.run_command(['apt-get', 'clean'])
clean_diff = parse_diff(clean_out) if m._diff else {}
if clean_rc:
m.fail_json(msg="apt-get clean failed", stdout=clean_out, rc=clean_rc)
if clean_err:
m.fail_json(msg="apt-get clean failed: %s" % clean_err, stdout=clean_out, rc=clean_rc)
return (clean_out, clean_err, clean_diff)
def upgrade(m, mode="yes", force=False, default_release=None,
use_apt_get=False,
dpkg_options=expand_dpkg_options(DPKG_OPTIONS), autoremove=False, fail_on_autoremove=False,
allow_unauthenticated=False,
allow_downgrade=False,
):
if autoremove:
autoremove = '--auto-remove'
else:
autoremove = ''
if m.check_mode:
check_arg = '--simulate'
else:
check_arg = ''
apt_cmd = None
prompt_regex = None
if mode == "dist" or (mode == "full" and use_apt_get):
# apt-get dist-upgrade
apt_cmd = APT_GET_CMD
upgrade_command = "dist-upgrade %s" % (autoremove)
elif mode == "full" and not use_apt_get:
# aptitude full-upgrade
apt_cmd = APTITUDE_CMD
upgrade_command = "full-upgrade"
else:
if use_apt_get:
apt_cmd = APT_GET_CMD
upgrade_command = "upgrade --with-new-pkgs %s" % (autoremove)
else:
# aptitude safe-upgrade # mode=yes # default
apt_cmd = APTITUDE_CMD
upgrade_command = "safe-upgrade"
prompt_regex = r"(^Do you want to ignore this warning and proceed anyway\?|^\*\*\*.*\[default=.*\])"
if force:
if apt_cmd == APT_GET_CMD:
force_yes = '--force-yes'
else:
force_yes = '--assume-yes --allow-untrusted'
else:
force_yes = ''
    # aptitude does not recognize apt-get's --no-remove option, see
    # https://github.com/ansible/ansible/issues/77868
    if fail_on_autoremove and apt_cmd != APTITUDE_CMD:
        fail_on_autoremove = '--no-remove'
    else:
        if fail_on_autoremove:
            m.warn("fail_on_autoremove is not supported by aptitude and will be ignored")
        fail_on_autoremove = ''
allow_unauthenticated = '--allow-unauthenticated' if allow_unauthenticated else ''
allow_downgrade = '--allow-downgrades' if allow_downgrade else ''
if apt_cmd is None:
if use_apt_get:
apt_cmd = APT_GET_CMD
else:
m.fail_json(msg="Unable to find APTITUDE in path. Please make sure "
"to have APTITUDE in path or use 'force_apt_get=True'")
apt_cmd_path = m.get_bin_path(apt_cmd, required=True)
cmd = '%s -y %s %s %s %s %s %s %s' % (
apt_cmd_path,
dpkg_options,
force_yes,
fail_on_autoremove,
allow_unauthenticated,
allow_downgrade,
check_arg,
upgrade_command,
)
if default_release:
cmd += " -t '%s'" % (default_release,)
with PolicyRcD(m):
rc, out, err = m.run_command(cmd, prompt_regex=prompt_regex)
if m._diff:
diff = parse_diff(out)
else:
diff = {}
if rc:
m.fail_json(msg="'%s %s' failed: %s" % (apt_cmd, upgrade_command, err), stdout=out, rc=rc)
if (apt_cmd == APT_GET_CMD and APT_GET_ZERO in out) or (apt_cmd == APTITUDE_CMD and APTITUDE_ZERO in out):
m.exit_json(changed=False, msg=out, stdout=out, stderr=err)
m.exit_json(changed=True, msg=out, stdout=out, stderr=err, diff=diff)
def get_cache_mtime():
"""Return mtime of a valid apt cache file.
Stat the apt cache file and if no cache file is found return 0
:returns: ``int``
"""
cache_time = 0
if os.path.exists(APT_UPDATE_SUCCESS_STAMP_PATH):
cache_time = os.stat(APT_UPDATE_SUCCESS_STAMP_PATH).st_mtime
elif os.path.exists(APT_LISTS_PATH):
cache_time = os.stat(APT_LISTS_PATH).st_mtime
return cache_time
def get_updated_cache_time():
"""Return the mtime time stamp and the updated cache time.
Always retrieve the mtime of the apt cache or set the `cache_mtime`
variable to 0
:returns: ``tuple``
"""
cache_mtime = get_cache_mtime()
mtimestamp = datetime.datetime.fromtimestamp(cache_mtime)
updated_cache_time = int(time.mktime(mtimestamp.timetuple()))
return mtimestamp, updated_cache_time
# https://github.com/ansible/ansible-modules-core/issues/2951
def get_cache(module):
'''Attempt to get the cache object and update till it works'''
cache = None
try:
cache = apt.Cache()
except SystemError as e:
if '/var/lib/apt/lists/' in to_native(e).lower():
# update cache until files are fixed or retries exceeded
retries = 0
while retries < 2:
(rc, so, se) = module.run_command(['apt-get', 'update', '-q'])
retries += 1
if rc == 0:
break
if rc != 0:
module.fail_json(msg='Updating the cache to correct corrupt package lists failed:\n%s\n%s' % (to_native(e), so + se), rc=rc)
# try again
cache = apt.Cache()
else:
module.fail_json(msg=to_native(e))
return cache
def main():
module = AnsibleModule(
argument_spec=dict(
state=dict(type='str', default='present', choices=['absent', 'build-dep', 'fixed', 'latest', 'present']),
update_cache=dict(type='bool', aliases=['update-cache']),
update_cache_retries=dict(type='int', default=5),
update_cache_retry_max_delay=dict(type='int', default=12),
cache_valid_time=dict(type='int', default=0),
purge=dict(type='bool', default=False),
package=dict(type='list', elements='str', aliases=['pkg', 'name']),
deb=dict(type='path'),
default_release=dict(type='str', aliases=['default-release']),
install_recommends=dict(type='bool', aliases=['install-recommends']),
force=dict(type='bool', default=False),
upgrade=dict(type='str', choices=['dist', 'full', 'no', 'safe', 'yes'], default='no'),
dpkg_options=dict(type='str', default=DPKG_OPTIONS),
autoremove=dict(type='bool', default=False),
autoclean=dict(type='bool', default=False),
fail_on_autoremove=dict(type='bool', default=False),
policy_rc_d=dict(type='int', default=None),
only_upgrade=dict(type='bool', default=False),
force_apt_get=dict(type='bool', default=False),
clean=dict(type='bool', default=False),
allow_unauthenticated=dict(type='bool', default=False, aliases=['allow-unauthenticated']),
allow_downgrade=dict(type='bool', default=False, aliases=['allow-downgrade', 'allow_downgrades', 'allow-downgrades']),
allow_change_held_packages=dict(type='bool', default=False),
lock_timeout=dict(type='int', default=60),
),
mutually_exclusive=[['deb', 'package', 'upgrade']],
required_one_of=[['autoremove', 'deb', 'package', 'update_cache', 'upgrade']],
supports_check_mode=True,
)
# We screenscrape apt-get and aptitude output for information so we need
# to make sure we use the best parsable locale when running commands
# also set apt specific vars for desired behaviour
locale = get_best_parsable_locale(module)
locale_module.setlocale(locale_module.LC_ALL, locale)
# APT related constants
APT_ENV_VARS = dict(
DEBIAN_FRONTEND='noninteractive',
DEBIAN_PRIORITY='critical',
LANG=locale,
LC_ALL=locale,
LC_MESSAGES=locale,
LC_CTYPE=locale,
)
module.run_command_environ_update = APT_ENV_VARS
if not HAS_PYTHON_APT:
# This interpreter can't see the apt Python library- we'll do the following to try and fix that:
# 1) look in common locations for system-owned interpreters that can see it; if we find one, respawn under it
# 2) finding none, try to install a matching python-apt package for the current interpreter version;
# we limit to the current interpreter version to try and avoid installing a whole other Python just
# for apt support
# 3) if we installed a support package, try to respawn under what we think is the right interpreter (could be
# the current interpreter again, but we'll let it respawn anyway for simplicity)
# 4) if still not working, return an error and give up (some corner cases not covered, but this shouldn't be
# made any more complex than it already is to try and cover more, eg, custom interpreters taking over
# system locations)
apt_pkg_name = 'python3-apt' if PY3 else 'python-apt'
if has_respawned():
# this shouldn't be possible; short-circuit early if it happens...
module.fail_json(msg="{0} must be installed and visible from {1}.".format(apt_pkg_name, sys.executable))
interpreters = ['/usr/bin/python3', '/usr/bin/python2', '/usr/bin/python']
interpreter = probe_interpreters_for_module(interpreters, 'apt')
if interpreter:
# found the Python bindings; respawn this module under the interpreter where we found them
respawn_module(interpreter)
# this is the end of the line for this process, it will exit here once the respawned module has completed
# don't make changes if we're in check_mode
if module.check_mode:
module.fail_json(msg="%s must be installed to use check mode. "
"If run normally this module can auto-install it." % apt_pkg_name)
            # We skip the cache update when auto-installing the dependency if the
            # user explicitly declared it with update_cache=no.
if module.params.get('update_cache') is False:
module.warn("Auto-installing missing dependency without updating cache: %s" % apt_pkg_name)
else:
module.warn("Updating cache and auto-installing missing dependency: %s" % apt_pkg_name)
module.run_command(['apt-get', 'update'], check_rc=True)
# try to install the apt Python binding
module.run_command(['apt-get', 'install', '--no-install-recommends', apt_pkg_name, '-y', '-q'], check_rc=True)
# try again to find the bindings in common places
interpreter = probe_interpreters_for_module(interpreters, 'apt')
if interpreter:
# found the Python bindings; respawn this module under the interpreter where we found them
# NB: respawn is somewhat wasteful if it's this interpreter, but simplifies the code
respawn_module(interpreter)
# this is the end of the line for this process, it will exit here once the respawned module has completed
else:
# we've done all we can do; just tell the user it's busted and get out
module.fail_json(msg="{0} must be installed and visible from {1}.".format(apt_pkg_name, sys.executable))
global APTITUDE_CMD
APTITUDE_CMD = module.get_bin_path("aptitude", False)
global APT_GET_CMD
APT_GET_CMD = module.get_bin_path("apt-get")
p = module.params
if p['clean'] is True:
aptclean_stdout, aptclean_stderr, aptclean_diff = aptclean(module)
        # If there is nothing else to do, exit. apt-get clean is always
        # reported as a change.
if not p['package'] and not p['upgrade'] and not p['deb']:
module.exit_json(
changed=True,
msg=aptclean_stdout,
stdout=aptclean_stdout,
stderr=aptclean_stderr,
diff=aptclean_diff
)
if p['upgrade'] == 'no':
p['upgrade'] = None
use_apt_get = p['force_apt_get']
if not use_apt_get and not APTITUDE_CMD:
use_apt_get = True
updated_cache = False
updated_cache_time = 0
install_recommends = p['install_recommends']
allow_unauthenticated = p['allow_unauthenticated']
allow_downgrade = p['allow_downgrade']
allow_change_held_packages = p['allow_change_held_packages']
dpkg_options = expand_dpkg_options(p['dpkg_options'])
autoremove = p['autoremove']
fail_on_autoremove = p['fail_on_autoremove']
autoclean = p['autoclean']
    # deadline after which we stop retrying on lock errors
deadline = time.time() + p['lock_timeout']
# keep running on lock issues unless timeout or resolution is hit.
while True:
        # Get the cache object; get_cache() retries internally if the package lists are corrupt
cache = get_cache(module)
try:
if p['default_release']:
try:
apt_pkg.config['APT::Default-Release'] = p['default_release']
except AttributeError:
apt_pkg.Config['APT::Default-Release'] = p['default_release']
# reopen cache w/ modified config
cache.open(progress=None)
mtimestamp, updated_cache_time = get_updated_cache_time()
# Cache valid time is default 0, which will update the cache if
# needed and `update_cache` was set to true
updated_cache = False
if p['update_cache'] or p['cache_valid_time']:
now = datetime.datetime.now()
tdelta = datetime.timedelta(seconds=p['cache_valid_time'])
if not mtimestamp + tdelta >= now:
# Retry to update the cache with exponential backoff
err = ''
update_cache_retries = module.params.get('update_cache_retries')
update_cache_retry_max_delay = module.params.get('update_cache_retry_max_delay')
randomize = random.randint(0, 1000) / 1000.0
for retry in range(update_cache_retries):
try:
if not module.check_mode:
cache.update()
break
except apt.cache.FetchFailedException as e:
err = to_native(e)
# Use exponential backoff plus a little bit of randomness
delay = 2 ** retry + randomize
if delay > update_cache_retry_max_delay:
delay = update_cache_retry_max_delay + randomize
time.sleep(delay)
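                        # With the defaults (update_cache_retries=5,
                        # update_cache_retry_max_delay=12) the sleeps work out to
                        # roughly 1, 2, 4, 8 and 12 seconds, each plus up to 1s of jitter.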
else:
module.fail_json(msg='Failed to update apt cache: %s' % (err if err else 'unknown reason'))
cache.open(progress=None)
mtimestamp, post_cache_update_time = get_updated_cache_time()
if module.check_mode or updated_cache_time != post_cache_update_time:
updated_cache = True
updated_cache_time = post_cache_update_time
            # If there is nothing else to do, exit. This will set changed
            # based on whether the cache was updated.
if not p['package'] and not p['upgrade'] and not p['deb']:
module.exit_json(
changed=updated_cache,
cache_updated=updated_cache,
cache_update_time=updated_cache_time
)
force_yes = p['force']
if p['upgrade']:
upgrade(
module,
p['upgrade'],
force_yes,
p['default_release'],
use_apt_get,
dpkg_options,
autoremove,
fail_on_autoremove,
allow_unauthenticated,
allow_downgrade
)
if p['deb']:
if p['state'] != 'present':
module.fail_json(msg="deb only supports state=present")
if '://' in p['deb']:
p['deb'] = fetch_file(module, p['deb'])
install_deb(module, p['deb'], cache,
install_recommends=install_recommends,
allow_unauthenticated=allow_unauthenticated,
allow_change_held_packages=allow_change_held_packages,
allow_downgrade=allow_downgrade,
force=force_yes, fail_on_autoremove=fail_on_autoremove, dpkg_options=p['dpkg_options'])
unfiltered_packages = p['package'] or ()
packages = [package.strip() for package in unfiltered_packages if package != '*']
all_installed = '*' in unfiltered_packages
latest = p['state'] == 'latest'
if latest and all_installed:
if packages:
module.fail_json(msg='unable to install additional packages when upgrading all installed packages')
upgrade(
module,
'yes',
force_yes,
p['default_release'],
use_apt_get,
dpkg_options,
autoremove,
fail_on_autoremove,
allow_unauthenticated,
allow_downgrade
)
if packages:
for package in packages:
if package.count('=') > 1:
module.fail_json(msg="invalid package spec: %s" % package)
if not packages:
if autoclean:
cleanup(module, p['purge'], force=force_yes, operation='autoclean', dpkg_options=dpkg_options)
if autoremove:
cleanup(module, p['purge'], force=force_yes, operation='autoremove', dpkg_options=dpkg_options)
if p['state'] in ('latest', 'present', 'build-dep', 'fixed'):
state_upgrade = False
state_builddep = False
state_fixed = False
if p['state'] == 'latest':
state_upgrade = True
if p['state'] == 'build-dep':
state_builddep = True
if p['state'] == 'fixed':
state_fixed = True
success, retvals = install(
module,
packages,
cache,
upgrade=state_upgrade,
default_release=p['default_release'],
install_recommends=install_recommends,
force=force_yes,
dpkg_options=dpkg_options,
build_dep=state_builddep,
fixed=state_fixed,
autoremove=autoremove,
fail_on_autoremove=fail_on_autoremove,
only_upgrade=p['only_upgrade'],
allow_unauthenticated=allow_unauthenticated,
allow_downgrade=allow_downgrade,
allow_change_held_packages=allow_change_held_packages,
)
# Store if the cache has been updated
retvals['cache_updated'] = updated_cache
# Store when the update time was last
retvals['cache_update_time'] = updated_cache_time
if success:
module.exit_json(**retvals)
else:
module.fail_json(**retvals)
elif p['state'] == 'absent':
remove(
module,
packages,
cache,
p['purge'],
force=force_yes,
dpkg_options=dpkg_options,
autoremove=autoremove,
allow_change_held_packages=allow_change_held_packages
)
except apt.cache.LockFailedException as lockFailedException:
if time.time() < deadline:
continue
module.fail_json(msg="Failed to lock apt for exclusive operation: %s" % lockFailedException)
except apt.cache.FetchFailedException as fetchFailedException:
module.fail_json(msg="Could not fetch updated apt files: %s" % fetchFailedException)
# got here w/o exception and/or exit???
module.fail_json(msg='Unexpected code path taken, we really should have exited before, this is a bug')
if __name__ == "__main__":
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,868 |
Apt module "fail_on_autoremove=yes" sends invalid switch to aptitude
|
### Summary
The apt module runs `aptitude` on the target host. When `fail_on_autoremove=yes`, the `--no-remove` switch is added to the command line. This switch is valid for `apt`, but not for `aptitude`, so the command fails.
### Issue Type
Bug Report
### Component Name
apt
### Ansible Version
```console
$ ansible --version
[DEPRECATION WARNING]: Ansible will require Python 3.8 or newer on the controller starting with Ansible 2.12. Current version:
3.7.3 (default, Jan 22 2021, 20:04:44) [GCC 8.3.0]. This feature will be removed from ansible-core in version 2.12. Deprecation
warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
ansible [core 2.11.9]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/dist-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.7.3 (default, Jan 22 2021, 20:04:44) [GCC 8.3.0]
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed -t all
[DEPRECATION WARNING]: Ansible will require Python 3.8 or newer on the controller starting with Ansible 2.12. Current version:
3.7.3 (default, Jan 22 2021, 20:04:44) [GCC 8.3.0]. This feature will be removed from ansible-core in version 2.12. Deprecation
warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
usage: ansible-config [-h] [--version] [-v] {list,dump,view} ...
ansible-config: error: unrecognized arguments: -t all
usage: ansible-config [-h] [--version] [-v] {list,dump,view} ...
View ansible configuration.
positional arguments:
{list,dump,view}
list Print all config options
dump Dump configuration
view View configuration file
optional arguments:
--version show program's version number, config file location,
configured module search path, module location, executable
location and exit
-h, --help show this help message and exit
-v, --verbose verbose mode (-vvv for more, -vvvv to enable connection
debugging)
```
### OS / Environment
Debian 10, aptitude 0.8.11
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```
ansible 'all:!internal_windows:!ucs' -b -m apt -a "update_cache=yes upgrade=yes fail_on_autoremove=yes"
```
### Expected Results
Expected target servers' apt packages to be upgraded.
### Actual Results
```console
tallis | FAILED! => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"msg": "'/usr/bin/aptitude safe-upgrade' failed: /usr/bin/aptitude: unrecognized option '--no-remove'\n",
"rc": 1,
"stdout": "aptitude 0.8.11\nUsage: aptitude [-S fname] [-u|-i]\n aptitude [options] <action> ...\n\nActions (if none is specified, aptitude will enter interactive mode):\n\n install Install/upgrade packages.\n remove Remove packages.\n purge Remove packages and their configuration files.\n hold Place packages on hold.\n unhold Cancel a hold command for a package.\n markauto Mark packages as having been automatically installed.\n unmarkauto Mark packages as having been manually installed.\n forbid-version Forbid aptitude from upgrading to a specific package version.\n update
Download lists of new/upgradable packages.\n safe-upgrade Perform a safe upgrade.\n full-upgrade Perform an upgrade, possibly installing and removing packages.\n build-dep Install the build-dependencies of packages.\n forget-new Forget what packages are \"new\".\n search Search for a package by name and/or expression.\n show Display detailed info about a package.\n showsrc Display detailed info about a source package (apt wrapper).\n versions Displays the versions of specified packages.\n clean Erase downloaded package files.\n autoclean Erase old downloaded package files.\n changelog View a package's changelog.\n download Download the .deb file for a package (apt wrapper).\n source Download source package (apt wrapper).\n reinstall Reinstall a currently installed package.\n why Explain why a particular package should be installed.\n why-not Explain why a particular package cannot be installed.\n\n add-user-tag Add user tag to packages/patterns.\n remove-user-tag Remove user tag from packages/patterns.\n\nOptions:\n -h This help text.\n --no-gui Do not use the GTK GUI even if available.\n -s Simulate actions, but do not actually perform them.\n -d Only download packages, do not install or remove anything.\n -P Always prompt for confirmation of actions.\n -y Assume that the answer to simple yes/no questions is 'yes'.\n -F format Specify a format for displaying search results; see the manual.\n -O order Specify how search results should be sorted; see the manual.\n -w width Specify the display width for formatting search results.\n -f Aggressively try to fix broken packages.\n -V Show which versions of packages are to be installed.\n -D Show the dependencies of automatically changed packages.\n -Z Show the change in installed size of each package.\n -v Display extra information. (may be supplied multiple times).\n -t [release] Set the release from which packages should be installed.\n -q In command-line mode, suppress the incremental progress\n indicators.\n -o key=val Directly set the configuration option named 'key'.\n --with(out)-recommends Specify whether or not to treat recommends as\n strong dependencies.\n -S fname Read the aptitude extended status info from fname.\n -u Download new package lists on startup.\n (terminal interface only)\n -i Perform an install run on startup.\n (terminal interface only)\n\nSee the manual page for a complete list and description of all the options.\n\nThis aptitude does not have Super Cow Powers.\n",
"stdout_lines": [
"aptitude 0.8.11",
"Usage: aptitude [-S fname] [-u|-i]",
" aptitude [options] <action> ...",
"",
"Actions (if none is specified, aptitude will enter interactive mode):",
"",
" install Install/upgrade packages.",
" remove Remove packages.",
" purge Remove packages and their configuration files.",
" hold Place packages on hold.",
" unhold Cancel a hold command for a package.",
" markauto Mark packages as having been automatically installed.",
" unmarkauto Mark packages as having been manually installed.",
" forbid-version Forbid aptitude from upgrading to a specific package version.",
" update Download lists of new/upgradable packages.",
" safe-upgrade Perform a safe upgrade.",
" full-upgrade Perform an upgrade, possibly installing and removing packages.",
" build-dep Install the build-dependencies of packages.",
" forget-new Forget what packages are \"new\".",
" search Search for a package by name and/or expression.",
" show Display detailed info about a package.",
" showsrc Display detailed info about a source package (apt wrapper).",
" versions Displays the versions of specified packages.",
" clean Erase downloaded package files.",
" autoclean Erase old downloaded package files.",
" changelog View a package's changelog.",
" download Download the .deb file for a package (apt wrapper).",
" source Download source package (apt wrapper).",
" reinstall Reinstall a currently installed package.",
" why Explain why a particular package should be installed.",
" why-not Explain why a particular package cannot be installed.",
"",
" add-user-tag Add user tag to packages/patterns.",
" remove-user-tag Remove user tag from packages/patterns.",
"",
"Options:",
" -h This help text.",
" --no-gui Do not use the GTK GUI even if available.",
" -s Simulate actions, but do not actually perform them.",
" -d Only download packages, do not install or remove anything.",
" -P Always prompt for confirmation of actions.",
" -y Assume that the answer to simple yes/no questions is 'yes'.",
" -F format Specify a format for displaying search results; see the manual.",
" -O order Specify how search results should be sorted; see the manual.",
" -w width Specify the display width for formatting search results.",
" -f Aggressively try to fix broken packages.",
" -V Show which versions of packages are to be installed.",
" -D Show the dependencies of automatically changed packages.",
" -Z Show the change in installed size of each package.",
" -v Display extra information. (may be supplied multiple times).",
" -t [release] Set the release from which packages should be installed.",
" -q In command-line mode, suppress the incremental progress",
" indicators.",
" -o key=val Directly set the configuration option named 'key'.",
" --with(out)-recommends Specify whether or not to treat recommends as",
" strong dependencies.",
" -S fname Read the aptitude extended status info from fname.",
" -u Download new package lists on startup.",
" (terminal interface only)",
" -i Perform an install run on startup.",
" (terminal interface only)",
"",
"See the manual page for a complete list and description of all the options.",
"",
"This aptitude does not have Super Cow Powers."
]
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77868
|
https://github.com/ansible/ansible/pull/81445
|
5deb4ee99118b2b1990d45bd06c7a23a147861f6
|
2e6d849bdb80364b5229f4b9190935e84ee9bfe8
| 2022-05-20T17:17:22Z |
python
| 2023-08-23T15:42:03Z |
test/integration/targets/apt/tasks/repo.yml
|
- block:
- name: Install foo package version 1.0.0
apt:
name: foo=1.0.0
allow_unauthenticated: yes
register: apt_result
- name: Check install with dpkg
shell: dpkg-query -l foo
register: dpkg_result
- name: Check if install was successful
assert:
that:
- "apt_result is success"
- "dpkg_result is success"
- "'1.0.0' in dpkg_result.stdout"
- name: Update to foo version 1.0.1
apt:
name: foo
state: latest
allow_unauthenticated: yes
register: apt_result
- name: Check install with dpkg
shell: dpkg-query -l foo
register: dpkg_result
- name: Check if install was successful
assert:
that:
- "apt_result is success"
- "dpkg_result is success"
- "'1.0.1' in dpkg_result.stdout"
always:
- name: Clean up
apt:
name: foo
state: absent
allow_unauthenticated: yes
- name: Try to install non-existent version
apt:
name: foo=99
state: present
ignore_errors: true
register: apt_result
- name: Check if install failed
assert:
that:
- apt_result is failed
# https://github.com/ansible/ansible/issues/30638
- block:
- name: Don't install foo=1.0.1 since foo is not installed and only_upgrade is set
apt:
name: foo=1.0.1
state: present
only_upgrade: yes
allow_unauthenticated: yes
ignore_errors: yes
register: apt_result
- name: Check that foo was not upgraded
assert:
that:
- "apt_result is not changed"
- "apt_result is success"
- apt:
name: foo=1.0.0
allow_unauthenticated: yes
- name: Upgrade foo to 1.0.1 but don't upgrade foobar since it is not installed
apt:
name: foobar=1.0.1,foo=1.0.1
state: present
only_upgrade: yes
allow_unauthenticated: yes
register: apt_result
- name: Check install with dpkg
shell: "dpkg-query -l {{ item }}"
register: dpkg_result
ignore_errors: yes
loop:
- foobar
- foo
- name: Check if install was successful
assert:
that:
- "apt_result is success"
- "dpkg_result.results[0] is failure"
- "'1.0.1' not in dpkg_result.results[0].stdout"
- "dpkg_result.results[1] is success"
- "'1.0.1' in dpkg_result.results[1].stdout"
always:
- name: Clean up
apt:
name: foo
state: absent
allow_unauthenticated: yes
- block:
- name: Install foo=1.0.0
apt:
name: foo=1.0.0
- name: Run version test matrix
apt:
name: foo{{ item.0 }}
default_release: '{{ item.1 }}'
state: '{{ item.2 | ternary("latest","present") }}'
check_mode: true
register: apt_result
loop:
# [filter, release, state_latest, expected]
- ["", null, false, null]
- ["", null, true, "1.0.1"]
- ["=1.0.0", null, false, null]
- ["=1.0.0", null, true, null]
- ["=1.0.1", null, false, "1.0.1"]
#- ["=1.0.*", null, false, null] # legacy behavior. should not upgrade without state=latest
- ["=1.0.*", null, true, "1.0.1"]
- [">=1.0.0", null, false, null]
- [">=1.0.0", null, true, "1.0.1"]
- [">=1.0.1", null, false, "1.0.1"]
- ["", "testing", false, null]
- ["", "testing", true, "2.0.1"]
- ["=2.0.0", null, false, "2.0.0"]
- [">=2.0.0", "testing", false, "2.0.1"]
- name: Validate version test matrix
assert:
that:
- (item.item.3 is not none) == (item.stdout is defined)
- item.item.3 is none or "Inst foo [1.0.0] (" + item.item.3 + " localhost [all])" in item.stdout_lines
loop: '{{ apt_result.results }}'
- name: Pin foo=1.0.0
copy:
content: |-
Package: foo
Pin: version 1.0.0
Pin-Priority: 1000
dest: /etc/apt/preferences.d/foo
- name: Run pinning version test matrix
apt:
name: foo{{ item.0 }}
default_release: '{{ item.1 }}'
state: '{{ item.2 | ternary("latest","present") }}'
check_mode: true
ignore_errors: true
register: apt_result
loop:
# [filter, release, state_latest, expected] # expected=null for no change. expected=False to assert an error
- ["", null, false, null]
- ["", null, true, null]
- ["=1.0.0", null, false, null]
- ["=1.0.0", null, true, null]
- ["=1.0.1", null, false, "1.0.1"]
#- ["=1.0.*", null, false, null] # legacy behavior. should not upgrade without state=latest
- ["=1.0.*", null, true, "1.0.1"]
- [">=1.0.0", null, false, null]
- [">=1.0.0", null, true, null]
- [">=1.0.1", null, false, False]
- ["", "testing", false, null]
- ["", "testing", true, null]
- ["=2.0.0", null, false, "2.0.0"]
- [">=2.0.0", "testing", false, False]
- name: Validate pinning version test matrix
assert:
that:
- (item.item.3 != False) or (item.item.3 == False and item is failed)
- (item.item.3 is string) == (item.stdout is defined)
- item.item.3 is not string or "Inst foo [1.0.0] (" + item.item.3 + " localhost [all])" in item.stdout_lines
loop: '{{ apt_result.results }}'
always:
- name: Uninstall foo
apt:
name: foo
state: absent
- name: Unpin foo
file:
path: /etc/apt/preferences.d/foo
state: absent
# https://github.com/ansible/ansible/issues/35900
- block:
- name: Disable ubuntu repos so system packages are not upgraded and do not change testing env
command: mv /etc/apt/sources.list /etc/apt/sources.list.backup
- name: Install foobar, installs foo as a dependency
apt:
name: foobar=1.0.0
allow_unauthenticated: yes
- name: mark foobar as auto for next test
shell: apt-mark auto foobar
- name: Install foobar (marked as manual) (check mode)
apt:
name: foobar=1.0.1
allow_unauthenticated: yes
check_mode: yes
register: manual_foobar_install_check_mode
- name: check foobar was not marked as manually installed by check mode
shell: apt-mark showmanual | grep foobar
ignore_errors: yes
register: showmanual
- assert:
that:
- manual_foobar_install_check_mode.changed
- "'foobar' not in showmanual.stdout"
- name: Install foobar (marked as manual)
apt:
name: foobar=1.0.1
allow_unauthenticated: yes
register: manual_foobar_install
- name: check foobar was marked as manually installed
shell: apt-mark showmanual | grep foobar
ignore_errors: yes
register: showmanual
- assert:
that:
- manual_foobar_install.changed
- "'foobar' in showmanual.stdout"
- name: Upgrade foobar to a version which does not depend on foo, autoremove should remove foo
apt:
upgrade: dist
autoremove: yes
allow_unauthenticated: yes
- name: Check foo with dpkg
shell: dpkg-query -l foo
register: dpkg_result
ignore_errors: yes
- name: Check that foo was removed by autoremove
assert:
that:
- "dpkg_result is failed"
always:
- name: Clean up
apt:
pkg: foo,foobar
state: absent
autoclean: yes
- name: Restore ubuntu repos
command: mv /etc/apt/sources.list.backup /etc/apt/sources.list
# https://github.com/ansible/ansible/issues/26298
- block:
- name: Disable ubuntu repos so system packages are not upgraded and do not change testing env
command: mv /etc/apt/sources.list /etc/apt/sources.list.backup
- name: Install foobar, installs foo as a dependency
apt:
name: foobar=1.0.0
allow_unauthenticated: yes
- name: Upgrade foobar to a version which does not depend on foo
apt:
upgrade: dist
force: yes # workaround for --allow-unauthenticated used along with upgrade
- name: autoremove should remove foo
apt:
autoremove: yes
register: autoremove_result
- name: Check that autoremove correctly reports changed=True
assert:
that:
- "autoremove_result is changed"
- name: Check foo with dpkg
shell: dpkg-query -l foo
register: dpkg_result
ignore_errors: yes
- name: Check that foo was removed by autoremove
assert:
that:
- "dpkg_result is failed"
- name: Nothing to autoremove
apt:
autoremove: yes
register: autoremove_result
- name: Check that autoremove correctly reports changed=False
assert:
that:
- "autoremove_result is not changed"
- name: Create a fake .deb file for autoclean to remove
file:
name: /var/cache/apt/archives/python3-q_2.4-1_all.deb
state: touch
- name: autoclean fake .deb file
apt:
autoclean: yes
register: autoclean_result
- name: Check if the .deb file exists
stat:
path: /var/cache/apt/archives/python3-q_2.4-1_all.deb
register: stat_result
- name: Check that autoclean correctly reports changed=True and file was removed
assert:
that:
- "autoclean_result is changed"
- "not stat_result.stat.exists"
- name: Nothing to autoclean
apt:
autoclean: yes
register: autoclean_result
- name: Check that autoclean correctly reports changed=False
assert:
that:
- "autoclean_result is not changed"
always:
- name: Clean up
apt:
pkg: foo,foobar
state: absent
autoclean: yes
- name: Restore ubuntu repos
command: mv /etc/apt/sources.list.backup /etc/apt/sources.list
- name: Downgrades
import_tasks: "downgrade.yml"
- name: Upgrades
block:
- import_tasks: "upgrade.yml"
vars:
aptitude_present: "{{ True | bool }}"
upgrade_type: "dist"
force_apt_get: "{{ False | bool }}"
- name: Check if aptitude is installed
command: dpkg-query --show --showformat='${db:Status-Abbrev}' aptitude
register: aptitude_status
- name: Remove aptitude, if installed, to test fall-back to apt-get
apt:
pkg: aptitude
state: absent
when:
- aptitude_status.stdout.find('ii') != -1
- include_tasks: "upgrade.yml"
vars:
aptitude_present: "{{ False | bool }}"
upgrade_type: "{{ item.upgrade_type }}"
force_apt_get: "{{ item.force_apt_get }}"
with_items:
- { upgrade_type: safe, force_apt_get: False }
- { upgrade_type: full, force_apt_get: False }
- { upgrade_type: safe, force_apt_get: True }
- { upgrade_type: full, force_apt_get: True }
- name: (Re-)Install aptitude, run same tests again
apt:
pkg: aptitude
state: present
- include_tasks: "upgrade.yml"
vars:
aptitude_present: "{{ True | bool }}"
upgrade_type: "{{ item.upgrade_type }}"
force_apt_get: "{{ item.force_apt_get }}"
with_items:
- { upgrade_type: safe, force_apt_get: False }
- { upgrade_type: full, force_apt_get: False }
- { upgrade_type: safe, force_apt_get: True }
- { upgrade_type: full, force_apt_get: True }
- name: Remove aptitude if not originally present
apt:
pkg: aptitude
state: absent
when:
- aptitude_status.stdout.find('ii') == -1
- block:
- name: Install the foo package with diff=yes
apt:
name: foo
allow_unauthenticated: yes
diff: yes
register: apt_result
- name: Check the content of diff.prepared
assert:
that:
- apt_result is success
- "'The following NEW packages will be installed:\n foo' in apt_result.diff.prepared"
always:
- name: Clean up
apt:
name: foo
state: absent
allow_unauthenticated: yes
- block:
- name: Install foo package version 1.0.0 with force=yes, implies allow_unauthenticated=yes
apt:
name: foo=1.0.0
force: yes
register: apt_result
- name: Check install with dpkg
shell: dpkg-query -l foo
register: dpkg_result
- name: Check if install was successful
assert:
that:
- "apt_result is success"
- "dpkg_result is success"
- "'1.0.0' in dpkg_result.stdout"
always:
- name: Clean up
apt:
name: foo
state: absent
allow_unauthenticated: yes
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,868 |
Apt module "fail_on_autoremove=yes" sends invalid switch to aptitude
|
### Summary
The apt module runs `aptitude` on the target host. When `fail_on_autoremove=yes`, the `--no-remove` switch is added to the command line. This switch is valid for `apt`, but not for `aptitude`, so the command fails.
### Issue Type
Bug Report
### Component Name
apt
### Ansible Version
```console
$ ansible --version
[DEPRECATION WARNING]: Ansible will require Python 3.8 or newer on the controller starting with Ansible 2.12. Current version:
3.7.3 (default, Jan 22 2021, 20:04:44) [GCC 8.3.0]. This feature will be removed from ansible-core in version 2.12. Deprecation
warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
ansible [core 2.11.9]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/dist-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.7.3 (default, Jan 22 2021, 20:04:44) [GCC 8.3.0]
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed -t all
[DEPRECATION WARNING]: Ansible will require Python 3.8 or newer on the controller starting with Ansible 2.12. Current version:
3.7.3 (default, Jan 22 2021, 20:04:44) [GCC 8.3.0]. This feature will be removed from ansible-core in version 2.12. Deprecation
warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
usage: ansible-config [-h] [--version] [-v] {list,dump,view} ...
ansible-config: error: unrecognized arguments: -t all
usage: ansible-config [-h] [--version] [-v] {list,dump,view} ...
View ansible configuration.
positional arguments:
{list,dump,view}
list Print all config options
dump Dump configuration
view View configuration file
optional arguments:
--version show program's version number, config file location,
configured module search path, module location, executable
location and exit
-h, --help show this help message and exit
-v, --verbose verbose mode (-vvv for more, -vvvv to enable connection
debugging)
```
### OS / Environment
Debian 10, aptitude 0.8.11
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```
ansible 'all:!internal_windows:!ucs' -b -m apt -a "update_cache=yes upgrade=yes fail_on_autoremove=yes"
```
### Expected Results
Expected target servers' apt packages to be upgraded.
### Actual Results
```console
tallis | FAILED! => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"msg": "'/usr/bin/aptitude safe-upgrade' failed: /usr/bin/aptitude: unrecognized option '--no-remove'\n",
"rc": 1,
"stdout": "aptitude 0.8.11\nUsage: aptitude [-S fname] [-u|-i]\n aptitude [options] <action> ...\n\nActions (if none is specified, aptitude will enter interactive mode):\n\n install Install/upgrade packages.\n remove Remove packages.\n purge Remove packages and their configuration files.\n hold Place packages on hold.\n unhold Cancel a hold command for a package.\n markauto Mark packages as having been automatically installed.\n unmarkauto Mark packages as having been manually installed.\n forbid-version Forbid aptitude from upgrading to a specific package version.\n update
Download lists of new/upgradable packages.\n safe-upgrade Perform a safe upgrade.\n full-upgrade Perform an upgrade, possibly installing and removing packages.\n build-dep Install the build-dependencies of packages.\n forget-new Forget what packages are \"new\".\n search Search for a package by name and/or expression.\n show Display detailed info about a package.\n showsrc Display detailed info about a source package (apt wrapper).\n versions Displays the versions of specified packages.\n clean Erase downloaded package files.\n autoclean Erase old downloaded package files.\n changelog View a package's changelog.\n download Download the .deb file for a package (apt wrapper).\n source Download source package (apt wrapper).\n reinstall Reinstall a currently installed package.\n why Explain why a particular package should be installed.\n why-not Explain why a particular package cannot be installed.\n\n add-user-tag Add user tag to packages/patterns.\n remove-user-tag Remove user tag from packages/patterns.\n\nOptions:\n -h This help text.\n --no-gui Do not use the GTK GUI even if available.\n -s Simulate actions, but do not actually perform them.\n -d Only download packages, do not install or remove anything.\n -P Always prompt for confirmation of actions.\n -y Assume that the answer to simple yes/no questions is 'yes'.\n -F format Specify a format for displaying search results; see the manual.\n -O order Specify how search results should be sorted; see the manual.\n -w width Specify the display width for formatting search results.\n -f Aggressively try to fix broken packages.\n -V Show which versions of packages are to be installed.\n -D Show the dependencies of automatically changed packages.\n -Z Show the change in installed size of each package.\n -v Display extra information. (may be supplied multiple times).\n -t [release] Set the release from which packages should be installed.\n -q In command-line mode, suppress the incremental progress\n indicators.\n -o key=val Directly set the configuration option named 'key'.\n --with(out)-recommends Specify whether or not to treat recommends as\n strong dependencies.\n -S fname Read the aptitude extended status info from fname.\n -u Download new package lists on startup.\n (terminal interface only)\n -i Perform an install run on startup.\n (terminal interface only)\n\nSee the manual page for a complete list and description of all the options.\n\nThis aptitude does not have Super Cow Powers.\n",
"stdout_lines": [
"aptitude 0.8.11",
"Usage: aptitude [-S fname] [-u|-i]",
" aptitude [options] <action> ...",
"",
"Actions (if none is specified, aptitude will enter interactive mode):",
"",
" install Install/upgrade packages.",
" remove Remove packages.",
" purge Remove packages and their configuration files.",
" hold Place packages on hold.",
" unhold Cancel a hold command for a package.",
" markauto Mark packages as having been automatically installed.",
" unmarkauto Mark packages as having been manually installed.",
" forbid-version Forbid aptitude from upgrading to a specific package version.",
" update Download lists of new/upgradable packages.",
" safe-upgrade Perform a safe upgrade.",
" full-upgrade Perform an upgrade, possibly installing and removing packages.",
" build-dep Install the build-dependencies of packages.",
" forget-new Forget what packages are \"new\".",
" search Search for a package by name and/or expression.",
" show Display detailed info about a package.",
" showsrc Display detailed info about a source package (apt wrapper).",
" versions Displays the versions of specified packages.",
" clean Erase downloaded package files.",
" autoclean Erase old downloaded package files.",
" changelog View a package's changelog.",
" download Download the .deb file for a package (apt wrapper).",
" source Download source package (apt wrapper).",
" reinstall Reinstall a currently installed package.",
" why Explain why a particular package should be installed.",
" why-not Explain why a particular package cannot be installed.",
"",
" add-user-tag Add user tag to packages/patterns.",
" remove-user-tag Remove user tag from packages/patterns.",
"",
"Options:",
" -h This help text.",
" --no-gui Do not use the GTK GUI even if available.",
" -s Simulate actions, but do not actually perform them.",
" -d Only download packages, do not install or remove anything.",
" -P Always prompt for confirmation of actions.",
" -y Assume that the answer to simple yes/no questions is 'yes'.",
" -F format Specify a format for displaying search results; see the manual.",
" -O order Specify how search results should be sorted; see the manual.",
" -w width Specify the display width for formatting search results.",
" -f Aggressively try to fix broken packages.",
" -V Show which versions of packages are to be installed.",
" -D Show the dependencies of automatically changed packages.",
" -Z Show the change in installed size of each package.",
" -v Display extra information. (may be supplied multiple times).",
" -t [release] Set the release from which packages should be installed.",
" -q In command-line mode, suppress the incremental progress",
" indicators.",
" -o key=val Directly set the configuration option named 'key'.",
" --with(out)-recommends Specify whether or not to treat recommends as",
" strong dependencies.",
" -S fname Read the aptitude extended status info from fname.",
" -u Download new package lists on startup.",
" (terminal interface only)",
" -i Perform an install run on startup.",
" (terminal interface only)",
"",
"See the manual page for a complete list and description of all the options.",
"",
"This aptitude does not have Super Cow Powers."
]
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77868
|
https://github.com/ansible/ansible/pull/81445
|
5deb4ee99118b2b1990d45bd06c7a23a147861f6
|
2e6d849bdb80364b5229f4b9190935e84ee9bfe8
| 2022-05-20T17:17:22Z |
python
| 2023-08-23T15:42:03Z |
test/integration/targets/apt/tasks/upgrade_scenarios.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 71,916 |
user module: ValueError: invalid literal for int() with base 10: '' when attempting to set 'expires' on a user already in /etc/passwd
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Ansible user module fails on casting the `expires` value to an `int` when an entry already exists in `/etc/passwd` for a user with the error message `ValueError: invalid literal for int() with base 10: ''`.
https://github.com/ansible/ansible/blob/31ddca4c0db2584b0a68880bdea1d97bd8b22032/lib/ansible/modules/user.py#L800
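The cast fails because the account-expiration field in `/etc/shadow` is empty (or missing entirely) for an entry that was written straight into `/etc/passwd`, so `int('')` raises. A defensive sketch (hypothetical helper, not the exact patch from the linked PR) would normalize the empty field before converting:

```python
# Hypothetical sketch: an empty shadow expiry field means "never expires";
# map it to -1 before converting so int('') can never be reached.
def parse_expires(raw_expires):
    if raw_expires in (None, ''):
        return -1
    return int(raw_expires)
```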
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
user module
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.6
config file = /etc/ansible/ansible.cfg
configured module search path = ['/export/home/rrotaru/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /tech/home/rrotaru/venv/lib64/python3.6/site-packages/ansible
executable location = /tech/home/rrotaru/venv/bin/ansible
python version = 3.6.8 (default, Jun 11 2019, 15:15:01) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
```
ansible 2.10.1
config file = /etc/ansible/ansible.cfg
configured module search path = ['/export/home/rrotaru/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /tech/home/rrotaru/venv/lib64/python3.6/site-packages/ansible
executable location = /tech/home/rrotaru/venv/bin/ansible
python version = 3.6.8 (default, Jun 11 2019, 15:15:01) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
N/A
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Red Hat Enterprise Linux Server release 7.5 (Maipo)
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Run the playbook below.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
- hosts: localhost
become: yes
tasks:
- lineinfile:
path: /etc/passwd
line: "dummy::123:123::/export/home/dummy:/bin/ksh"
regexp: "^dummy.*"
state: present
- user:
name: dummy
state: present
uid: 123
expires: -1
password: "dummy#password | password_hash('sha512') }}"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
User module should update the user expiration to non-expiring.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
User module fails with python ValueError.
<!--- Paste verbatim command output between quotes -->
```paste below
(venv) $ ansible-playbook expirestest.yml -K -v
Using /etc/ansible/ansible.cfg as config file
BECOME password:
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit
localhost does not match 'all'
PLAY [localhost] *******************************************************************************
TASK [Gathering Facts] *************************************************************************
ok: [localhost]
TASK [lineinfile] ******************************************************************************
changed: [localhost] => {"backup": "", "changed": true, "msg": "line added"}
TASK [user] ************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ValueError: invalid literal for int() with base 10: ''
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/export/home/rrotaru/.ansible/tmp/ansible-tmp-1600960729.601071-12482-194155520219212/AnsiballZ_user.py\", line 102, in <module>\n _ansiballz_main()\n File \"/export/home/rrotaru/.ansible/tmp/ansible-tmp-1600960729.601071-12482-194155520219212/AnsiballZ_user.py\", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/export/home/rrotaru/.ansible/tmp/ansible-tmp-1600960729.601071-12482-194155520219212/AnsiballZ_user.py\", line 40, in invoke_module\n runpy.run_module(mod_name='ansible.modules.user', init_globals=None, run_name='__main__', alter_sys=True)\n File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n mod_name, mod_spec, pkg_name, script_name)\n File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_user_payload_v5n53kd3/ansible_user_payload.zip/ansible/modules/user.py\", line 3026, in <module>\n File \"/tmp/ansible_user_payload_v5n53kd3/ansible_user_payload.zip/ansible/modules/user.py\", line 2965, in main\n File \"/tmp/ansible_user_payload_v5n53kd3/ansible_user_payload.zip/ansible/modules/user.py\", line 1111, in modify_user\n File \"/tmp/ansible_user_payload_v5n53kd3/ansible_user_payload.zip/ansible/modules/user.py\", line 799, in modify_user_usermod\nValueError: invalid literal for int() with base 10: ''\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
PLAY RECAP *************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/71916
|
https://github.com/ansible/ansible/pull/75194
|
2e6d849bdb80364b5229f4b9190935e84ee9bfe8
|
116948cd1468b8a3e34af8e3671f4089e1f7584c
| 2020-09-24T15:22:58Z |
python
| 2023-08-23T18:18:57Z |
changelogs/fragments/71916-user-expires-int.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 71,916 |
user module: ValueError: invalid literal for int() with base 10: '' when attempting to set 'expires' on a user already in /etc/passwd
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Ansible user module fails on casting the `expires` value to an `int` when an entry already exists in `/etc/passwd` for a user with the error message `ValueError: invalid literal for int() with base 10: ''`.
https://github.com/ansible/ansible/blob/31ddca4c0db2584b0a68880bdea1d97bd8b22032/lib/ansible/modules/user.py#L800
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
user module
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.6
config file = /etc/ansible/ansible.cfg
configured module search path = ['/export/home/rrotaru/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /tech/home/rrotaru/venv/lib64/python3.6/site-packages/ansible
executable location = /tech/home/rrotaru/venv/bin/ansible
python version = 3.6.8 (default, Jun 11 2019, 15:15:01) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
```
ansible 2.10.1
config file = /etc/ansible/ansible.cfg
configured module search path = ['/export/home/rrotaru/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /tech/home/rrotaru/venv/lib64/python3.6/site-packages/ansible
executable location = /tech/home/rrotaru/venv/bin/ansible
python version = 3.6.8 (default, Jun 11 2019, 15:15:01) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
N/A
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Red Hat Enterprise Linux Server release 7.5 (Maipo)
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Run the playbook below.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
- hosts: localhost
become: yes
tasks:
- lineinfile:
path: /etc/passwd
line: "dummy::123:123::/export/home/dummy:/bin/ksh"
regexp: "^dummy.*"
state: present
- user:
name: dummy
state: present
uid: 123
expires: -1
password: "dummy#password | password_hash('sha512') }}"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
User module should update the user expiration to non-expiring.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
User module fails with python ValueError.
<!--- Paste verbatim command output between quotes -->
```paste below
(venv) $ ansible-playbook expirestest.yml -K -v
Using /etc/ansible/ansible.cfg as config file
BECOME password:
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit
localhost does not match 'all'
PLAY [localhost] *******************************************************************************
TASK [Gathering Facts] *************************************************************************
ok: [localhost]
TASK [lineinfile] ******************************************************************************
changed: [localhost] => {"backup": "", "changed": true, "msg": "line added"}
TASK [user] ************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ValueError: invalid literal for int() with base 10: ''
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/export/home/rrotaru/.ansible/tmp/ansible-tmp-1600960729.601071-12482-194155520219212/AnsiballZ_user.py\", line 102, in <module>\n _ansiballz_main()\n File \"/export/home/rrotaru/.ansible/tmp/ansible-tmp-1600960729.601071-12482-194155520219212/AnsiballZ_user.py\", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/export/home/rrotaru/.ansible/tmp/ansible-tmp-1600960729.601071-12482-194155520219212/AnsiballZ_user.py\", line 40, in invoke_module\n runpy.run_module(mod_name='ansible.modules.user', init_globals=None, run_name='__main__', alter_sys=True)\n File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n mod_name, mod_spec, pkg_name, script_name)\n File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_user_payload_v5n53kd3/ansible_user_payload.zip/ansible/modules/user.py\", line 3026, in <module>\n File \"/tmp/ansible_user_payload_v5n53kd3/ansible_user_payload.zip/ansible/modules/user.py\", line 2965, in main\n File \"/tmp/ansible_user_payload_v5n53kd3/ansible_user_payload.zip/ansible/modules/user.py\", line 1111, in modify_user\n File \"/tmp/ansible_user_payload_v5n53kd3/ansible_user_payload.zip/ansible/modules/user.py\", line 799, in modify_user_usermod\nValueError: invalid literal for int() with base 10: ''\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
PLAY RECAP *************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/71916
|
https://github.com/ansible/ansible/pull/75194
|
2e6d849bdb80364b5229f4b9190935e84ee9bfe8
|
116948cd1468b8a3e34af8e3671f4089e1f7584c
| 2020-09-24T15:22:58Z |
python
| 2023-08-23T18:18:57Z |
lib/ansible/modules/user.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Stephen Fromm <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
module: user
version_added: "0.2"
short_description: Manage user accounts
description:
- Manage user accounts and user attributes.
- For Windows targets, use the M(ansible.windows.win_user) module instead.
options:
name:
description:
- Name of the user to create, remove or modify.
type: str
required: true
aliases: [ user ]
uid:
description:
- Optionally sets the I(UID) of the user.
type: int
comment:
description:
- Optionally sets the description (aka I(GECOS)) of user account.
- On macOS, this defaults to the O(name) option.
type: str
hidden:
description:
- macOS only, optionally hide the user from the login window and system preferences.
- The default will be V(true) if the O(system) option is used.
type: bool
version_added: "2.6"
non_unique:
description:
      - Optionally, when used with the -u option, this option allows changing the user ID to a non-unique value.
type: bool
default: no
version_added: "1.1"
seuser:
description:
- Optionally sets the seuser type (user_u) on selinux enabled systems.
type: str
version_added: "2.1"
group:
description:
- Optionally sets the user's primary group (takes a group name).
      - On macOS, this defaults to V('staff').
type: str
groups:
description:
- A list of supplementary groups which the user is also a member of.
- By default, the user is removed from all other groups. Configure O(append) to modify this.
- When set to an empty string V(''),
the user is removed from all groups except the primary group.
- Before Ansible 2.3, the only input format allowed was a comma separated string.
type: list
elements: str
append:
description:
- If V(true), add the user to the groups specified in O(groups).
- If V(false), user will only be added to the groups specified in O(groups),
removing them from all other groups.
type: bool
default: no
shell:
description:
- Optionally set the user's shell.
- On macOS, before Ansible 2.5, the default shell for non-system users was V(/usr/bin/false).
Since Ansible 2.5, the default shell for non-system users on macOS is V(/bin/bash).
- See notes for details on how other operating systems determine the default shell by
the underlying tool.
type: str
home:
description:
- Optionally set the user's home directory.
type: path
skeleton:
description:
- Optionally set a home skeleton directory.
- Requires O(create_home) option!
type: str
version_added: "2.0"
password:
description:
- If provided, set the user's password to the provided encrypted hash (Linux) or plain text password (macOS).
- B(Linux/Unix/POSIX:) Enter the hashed password as the value.
- See L(FAQ entry,https://docs.ansible.com/ansible/latest/reference_appendices/faq.html#how-do-i-generate-encrypted-passwords-for-the-user-module)
for details on various ways to generate the hash of a password.
- To create an account with a locked/disabled password on Linux systems, set this to V('!') or V('*').
- To create an account with a locked/disabled password on OpenBSD, set this to V('*************').
- B(OS X/macOS:) Enter the cleartext password as the value. Be sure to take relevant security precautions.
- On macOS, the password specified in the C(password) option will always be set, regardless of whether the user account already exists or not.
      - When the password is passed as an argument, the C(user) module will always return C(changed=true) for macOS systems,
        since macOS no longer provides access to the hashed passwords directly.
type: str
state:
description:
- Whether the account should exist or not, taking action if the state is different from what is stated.
- See this L(FAQ entry,https://docs.ansible.com/ansible/latest/reference_appendices/faq.html#running-on-macos-as-a-target)
for additional requirements when removing users on macOS systems.
type: str
choices: [ absent, present ]
default: present
create_home:
description:
- Unless set to V(false), a home directory will be made for the user
when the account is created or if the home directory does not exist.
- Changed from O(createhome) to O(create_home) in Ansible 2.5.
type: bool
default: yes
aliases: [ createhome ]
move_home:
description:
- "If set to V(true) when used with O(home), attempt to move the user's old home
directory to the specified directory if it isn't there already and the old home exists."
type: bool
default: no
system:
description:
- When creating an account O(state=present), setting this to V(true) makes the user a system account.
- This setting cannot be changed on existing users.
type: bool
default: no
force:
description:
- This only affects O(state=absent), it forces removal of the user and associated directories on supported platforms.
- The behavior is the same as C(userdel --force), check the man page for C(userdel) on your system for details and support.
- When used with O(generate_ssh_key=yes) this forces an existing key to be overwritten.
type: bool
default: no
remove:
description:
- This only affects O(state=absent), it attempts to remove directories associated with the user.
- The behavior is the same as C(userdel --remove), check the man page for details and support.
type: bool
default: no
login_class:
description:
- Optionally sets the user's login class, a feature of most BSD OSs.
type: str
generate_ssh_key:
description:
- Whether to generate a SSH key for the user in question.
- This will B(not) overwrite an existing SSH key unless used with O(force=yes).
type: bool
default: no
version_added: "0.9"
ssh_key_bits:
description:
- Optionally specify number of bits in SSH key to create.
- The default value depends on ssh-keygen.
type: int
version_added: "0.9"
ssh_key_type:
description:
- Optionally specify the type of SSH key to generate.
- Available SSH key types will depend on implementation
present on target host.
type: str
default: rsa
version_added: "0.9"
ssh_key_file:
description:
- Optionally specify the SSH key filename.
- If this is a relative filename then it will be relative to the user's home directory.
- This parameter defaults to V(.ssh/id_rsa).
type: path
version_added: "0.9"
ssh_key_comment:
description:
- Optionally define the comment for the SSH key.
type: str
default: ansible-generated on $HOSTNAME
version_added: "0.9"
ssh_key_passphrase:
description:
- Set a passphrase for the SSH key.
- If no passphrase is provided, the SSH key will default to having no passphrase.
type: str
version_added: "0.9"
update_password:
description:
- V(always) will update passwords if they differ.
- V(on_create) will only set the password for newly created users.
type: str
choices: [ always, on_create ]
default: always
version_added: "1.3"
expires:
description:
- An expiry time for the user in epoch, it will be ignored on platforms that do not support this.
- Currently supported on GNU/Linux, FreeBSD, and DragonFlyBSD.
- Since Ansible 2.6 you can remove the expiry time by specifying a negative value.
Currently supported on GNU/Linux and FreeBSD.
type: float
version_added: "1.9"
password_lock:
description:
- Lock the password (C(usermod -L), C(usermod -U), C(pw lock)).
- Implementation differs by platform. This option does not always mean the user cannot login using other methods.
- This option does not disable the user, only lock the password.
- This must be set to V(False) in order to unlock a currently locked password. The absence of this parameter will not unlock a password.
- Currently supported on Linux, FreeBSD, DragonFlyBSD, NetBSD, OpenBSD.
type: bool
version_added: "2.6"
local:
description:
- Forces the use of "local" command alternatives on platforms that implement it.
- This is useful in environments that use centralized authentication when you want to manipulate the local users
(in other words, it uses C(luseradd) instead of C(useradd)).
- This will check C(/etc/passwd) for an existing account before invoking commands. If the local account database
exists somewhere other than C(/etc/passwd), this setting will not work properly.
- This requires that the above commands as well as C(/etc/passwd) must exist on the target host, otherwise it will be a fatal error.
type: bool
default: no
version_added: "2.4"
profile:
description:
- Sets the profile of the user.
- Does nothing when used with other platforms.
- Can set multiple profiles using comma separation.
- To delete all the profiles, use O(profile='').
- Currently supported on Illumos/Solaris.
type: str
version_added: "2.8"
authorization:
description:
- Sets the authorization of the user.
- Can set multiple authorizations using comma separation.
- To delete all authorizations, use O(authorization='').
- Currently supported on Illumos/Solaris. Does nothing when used with other platforms.
type: str
version_added: "2.8"
role:
description:
- Sets the role of the user.
- Does nothing when used with other platforms.
- Can set multiple roles using comma separation.
- To delete all roles, use O(role='').
- Currently supported on Illumos/Solaris.
type: str
version_added: "2.8"
password_expire_max:
description:
- Maximum number of days between password change.
- Supported on Linux only.
type: int
version_added: "2.11"
password_expire_min:
description:
- Minimum number of days between password change.
- Supported on Linux only.
type: int
version_added: "2.11"
password_expire_warn:
description:
- Number of days of warning before password expires.
- Supported on Linux only.
type: int
version_added: "2.16"
umask:
description:
- Sets the umask of the user.
- Does nothing when used with other platforms.
- Currently supported on Linux.
- Requires O(local) is omitted or V(False).
type: str
version_added: "2.12"
extends_documentation_fragment: action_common_attributes
attributes:
check_mode:
support: full
diff_mode:
support: none
platform:
platforms: posix
notes:
- There are specific requirements per platform on user management utilities. However
they generally come pre-installed with the system and Ansible will require they
are present at runtime. If they are not, a descriptive error message will be shown.
- On SunOS platforms, the shadow file is backed up automatically since this module edits it directly.
On other platforms, the shadow file is backed up by the underlying tools used by this module.
- On macOS, this module uses C(dscl) to create, modify, and delete accounts. C(dseditgroup) is used to
modify group membership. Accounts are hidden from the login window by modifying
C(/Library/Preferences/com.apple.loginwindow.plist).
- On FreeBSD, this module uses C(pw useradd) and C(chpass) to create, C(pw usermod) and C(chpass) to modify,
C(pw userdel) remove, C(pw lock) to lock, and C(pw unlock) to unlock accounts.
- On all other platforms, this module uses C(useradd) to create, C(usermod) to modify, and
C(userdel) to remove accounts.
seealso:
- module: ansible.posix.authorized_key
- module: ansible.builtin.group
- module: ansible.windows.win_user
author:
- Stephen Fromm (@sfromm)
'''
EXAMPLES = r'''
- name: Add the user 'johnd' with a specific uid and a primary group of 'admin'
ansible.builtin.user:
name: johnd
comment: John Doe
uid: 1040
group: admin
- name: Add the user 'james' with a bash shell, appending the group 'admins' and 'developers' to the user's groups
ansible.builtin.user:
name: james
shell: /bin/bash
groups: admins,developers
append: yes
- name: Remove the user 'johnd'
ansible.builtin.user:
name: johnd
state: absent
remove: yes
- name: Create a 2048-bit SSH key for user jsmith in ~jsmith/.ssh/id_rsa
ansible.builtin.user:
name: jsmith
generate_ssh_key: yes
ssh_key_bits: 2048
ssh_key_file: .ssh/id_rsa
- name: Added a consultant whose account you want to expire
ansible.builtin.user:
name: james18
shell: /bin/zsh
groups: developers
expires: 1422403387
- name: Starting at Ansible 2.6, modify user, remove expiry time
ansible.builtin.user:
name: james18
expires: -1
- name: Set maximum expiration date for password
ansible.builtin.user:
name: ram19
password_expire_max: 10
- name: Set minimum expiration date for password
ansible.builtin.user:
name: pushkar15
password_expire_min: 5
- name: Set number of warning days for password expiration
ansible.builtin.user:
name: jane157
password_expire_warn: 30
'''
RETURN = r'''
append:
description: Whether or not to append the user to groups.
returned: When O(state) is V(present) and the user exists
type: bool
sample: True
comment:
description: Comment section from passwd file, usually the user name.
returned: When user exists
type: str
sample: Agent Smith
create_home:
description: Whether or not to create the home directory.
returned: When user does not exist and not check mode
type: bool
sample: True
force:
description: Whether or not a user account was forcibly deleted.
returned: When O(state) is V(absent) and user exists
type: bool
sample: False
group:
description: Primary user group ID
returned: When user exists
type: int
sample: 1001
groups:
description: List of groups of which the user is a member.
returned: When O(groups) is not empty and O(state) is V(present)
type: str
sample: 'chrony,apache'
home:
description: "Path to user's home directory."
returned: When O(state) is V(present)
type: str
sample: '/home/asmith'
move_home:
description: Whether or not to move an existing home directory.
returned: When O(state) is V(present) and user exists
type: bool
sample: False
name:
description: User account name.
returned: always
type: str
sample: asmith
password:
description: Masked value of the password.
returned: When O(state) is V(present) and O(password) is not empty
type: str
sample: 'NOT_LOGGING_PASSWORD'
remove:
description: Whether or not to remove the user account.
returned: When O(state) is V(absent) and user exists
type: bool
sample: True
shell:
description: User login shell.
returned: When O(state) is V(present)
type: str
sample: '/bin/bash'
ssh_fingerprint:
description: Fingerprint of generated SSH key.
returned: When O(generate_ssh_key) is V(True)
type: str
sample: '2048 SHA256:aYNHYcyVm87Igh0IMEDMbvW0QDlRQfE0aJugp684ko8 ansible-generated on host (RSA)'
ssh_key_file:
description: Path to generated SSH private key file.
returned: When O(generate_ssh_key) is V(True)
type: str
sample: /home/asmith/.ssh/id_rsa
ssh_public_key:
description: Generated SSH public key file.
returned: When O(generate_ssh_key) is V(True)
type: str
sample: >
'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC95opt4SPEC06tOYsJQJIuN23BbLMGmYo8ysVZQc4h2DZE9ugbjWWGS1/pweUGjVstgzMkBEeBCByaEf/RJKNecKRPeGd2Bw9DCj/bn5Z6rGfNENKBmo
618mUJBvdlEgea96QGjOwSB7/gmonduC7gsWDMNcOdSE3wJMTim4lddiBx4RgC9yXsJ6Tkz9BHD73MXPpT5ETnse+A3fw3IGVSjaueVnlUyUmOBf7fzmZbhlFVXf2Zi2rFTXqvbdGHKkzpw1U8eB8xFPP7y
d5u1u0e6Acju/8aZ/l17IDFiLke5IzlqIMRTEbDwLNeO84YQKWTm9fODHzhYe0yvxqLiK07 ansible-generated on host'
stderr:
description: Standard error from running commands.
returned: When stderr is returned by a command that is run
type: str
sample: Group wheels does not exist
stdout:
description: Standard output from running commands.
returned: When standard output is returned by the command that is run
type: str
sample:
system:
description: Whether or not the account is a system account.
returned: When O(system) is passed to the module and the account does not exist
type: bool
sample: True
uid:
description: User ID of the user account.
returned: When O(uid) is passed to the module
type: int
sample: 1044
'''
import ctypes.util
import grp
import calendar
import os
import re
import pty
import pwd
import select
import shutil
import socket
import subprocess
import time
import math
from ansible.module_utils import distro
from ansible.module_utils.common.text.converters import to_bytes, to_native, to_text
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.common.sys_info import get_platform_subclass
import ansible.module_utils.compat.typing as t
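# NOTE: the standard-library spwd module was deprecated (and later removed),
# so the shadow database is read through libc's getspnam() via ctypes below.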
class StructSpwdType(ctypes.Structure):
_fields_ = [
('sp_namp', ctypes.c_char_p),
('sp_pwdp', ctypes.c_char_p),
('sp_lstchg', ctypes.c_long),
('sp_min', ctypes.c_long),
('sp_max', ctypes.c_long),
('sp_warn', ctypes.c_long),
('sp_inact', ctypes.c_long),
('sp_expire', ctypes.c_long),
('sp_flag', ctypes.c_ulong),
]
try:
_LIBC = ctypes.cdll.LoadLibrary(
t.cast(
str,
ctypes.util.find_library('c')
)
)
_LIBC.getspnam.argtypes = (ctypes.c_char_p,)
_LIBC.getspnam.restype = ctypes.POINTER(StructSpwdType)
HAVE_SPWD = True
except AttributeError:
HAVE_SPWD = False
_HASH_RE = re.compile(r'[^a-zA-Z0-9./=]')
def getspnam(b_name):
return _LIBC.getspnam(b_name).contents
class User(object):
"""
This is a generic User manipulation class that is subclassed
based on platform.
A subclass may wish to override the following action methods:-
- create_user()
- remove_user()
- modify_user()
- ssh_key_gen()
- ssh_key_fingerprint()
- user_exists()
All subclasses MUST define platform and distribution (which may be None).
"""
platform = 'Generic'
distribution = None # type: str | None
PASSWORDFILE = '/etc/passwd'
SHADOWFILE = '/etc/shadow' # type: str | None
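    # Index of the account-expiration field in an /etc/shadow entry
    # (days since the epoch, or empty for "never expires").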
SHADOWFILE_EXPIRE_INDEX = 7
LOGIN_DEFS = '/etc/login.defs'
DATE_FORMAT = '%Y-%m-%d'
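    # Instantiating User() actually returns the platform-specific subclass
    # (e.g. FreeBsdUser, DarwinUser) chosen by get_platform_subclass().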
def __new__(cls, *args, **kwargs):
new_cls = get_platform_subclass(User)
return super(cls, new_cls).__new__(new_cls)
def __init__(self, module):
self.module = module
self.state = module.params['state']
self.name = module.params['name']
self.uid = module.params['uid']
self.hidden = module.params['hidden']
self.non_unique = module.params['non_unique']
self.seuser = module.params['seuser']
self.group = module.params['group']
self.comment = module.params['comment']
self.shell = module.params['shell']
self.password = module.params['password']
self.force = module.params['force']
self.remove = module.params['remove']
self.create_home = module.params['create_home']
self.move_home = module.params['move_home']
self.skeleton = module.params['skeleton']
self.system = module.params['system']
self.login_class = module.params['login_class']
self.append = module.params['append']
self.sshkeygen = module.params['generate_ssh_key']
self.ssh_bits = module.params['ssh_key_bits']
self.ssh_type = module.params['ssh_key_type']
self.ssh_comment = module.params['ssh_key_comment']
self.ssh_passphrase = module.params['ssh_key_passphrase']
self.update_password = module.params['update_password']
self.home = module.params['home']
self.expires = None
self.password_lock = module.params['password_lock']
self.groups = None
self.local = module.params['local']
self.profile = module.params['profile']
self.authorization = module.params['authorization']
self.role = module.params['role']
self.password_expire_max = module.params['password_expire_max']
self.password_expire_min = module.params['password_expire_min']
self.password_expire_warn = module.params['password_expire_warn']
self.umask = module.params['umask']
if self.umask is not None and self.local:
module.fail_json(msg="'umask' can not be used with 'local'")
if module.params['groups'] is not None:
self.groups = ','.join(module.params['groups'])
if module.params['expires'] is not None:
try:
self.expires = time.gmtime(module.params['expires'])
except Exception as e:
module.fail_json(msg="Invalid value for 'expires' %s: %s" % (self.expires, to_native(e)))
if module.params['ssh_key_file'] is not None:
self.ssh_file = module.params['ssh_key_file']
else:
self.ssh_file = os.path.join('.ssh', 'id_%s' % self.ssh_type)
if self.groups is None and self.append:
# Change the argument_spec in 2.14 and remove this warning
# required_by={'append': ['groups']}
module.warn("'append' is set, but no 'groups' are specified. Use 'groups' for appending new groups."
"This will change to an error in Ansible 2.14.")
def check_password_encrypted(self):
# Darwin needs cleartext password, so skip validation
if self.module.params['password'] and self.platform != 'Darwin':
maybe_invalid = False
# Allow setting certain passwords in order to disable the account
if self.module.params['password'] in set(['*', '!', '*************']):
maybe_invalid = False
else:
# : for delimiter, * for disable user, ! for lock user
# these characters are invalid in the password
if any(char in self.module.params['password'] for char in ':*!'):
maybe_invalid = True
if '$' not in self.module.params['password']:
maybe_invalid = True
else:
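                    # crypt(3) hashes look like $<id>$<salt>$<hash>; the length
                    # checks below match md5 ($1$), sha256 ($5$) and sha512 ($6$).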
fields = self.module.params['password'].split("$")
if len(fields) >= 3:
# contains character outside the crypto constraint
if bool(_HASH_RE.search(fields[-1])):
maybe_invalid = True
# md5
if fields[1] == '1' and len(fields[-1]) != 22:
maybe_invalid = True
# sha256
if fields[1] == '5' and len(fields[-1]) != 43:
maybe_invalid = True
# sha512
if fields[1] == '6' and len(fields[-1]) != 86:
maybe_invalid = True
else:
maybe_invalid = True
if maybe_invalid:
self.module.warn("The input password appears not to have been hashed. "
"The 'password' argument must be encrypted for this module to work properly.")
def execute_command(self, cmd, use_unsafe_shell=False, data=None, obey_checkmode=True):
if self.module.check_mode and obey_checkmode:
self.module.debug('In check mode, would have run: "%s"' % cmd)
return (0, '', '')
else:
# cast all args to strings ansible-modules-core/issues/4397
cmd = [str(x) for x in cmd]
return self.module.run_command(cmd, use_unsafe_shell=use_unsafe_shell, data=data)
def backup_shadow(self):
if not self.module.check_mode and self.SHADOWFILE:
return self.module.backup_local(self.SHADOWFILE)
def remove_user_userdel(self):
if self.local:
command_name = 'luserdel'
else:
command_name = 'userdel'
cmd = [self.module.get_bin_path(command_name, True)]
if self.force and not self.local:
cmd.append('-f')
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def create_user_useradd(self):
if self.local:
command_name = 'luseradd'
lgroupmod_cmd = self.module.get_bin_path('lgroupmod', True)
lchage_cmd = self.module.get_bin_path('lchage', True)
else:
command_name = 'useradd'
cmd = [self.module.get_bin_path(command_name, True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.seuser is not None:
cmd.append('-Z')
cmd.append(self.seuser)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
elif self.group_exists(self.name):
# use the -N option (no user group) if a group already
# exists with the same name as the user to prevent
# errors from useradd trying to create a group when
# USERGROUPS_ENAB is set in /etc/login.defs.
if self.local:
# luseradd uses -n instead of -N
cmd.append('-n')
else:
if os.path.exists('/etc/redhat-release'):
dist = distro.version()
major_release = int(dist.split('.')[0])
if major_release <= 5:
cmd.append('-n')
else:
cmd.append('-N')
elif os.path.exists('/etc/SuSE-release'):
# -N did not exist in useradd before SLE 11 and did not
# automatically create a group
dist = distro.version()
major_release = int(dist.split('.')[0])
if major_release >= 12:
cmd.append('-N')
else:
cmd.append('-N')
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
if not self.local:
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
# If the specified path to the user home contains parent directories that
# do not exist and create_home is True first create the parent directory
# since useradd cannot create it.
if self.create_home:
parent = os.path.dirname(self.home)
if not os.path.isdir(parent):
self.create_homedir(self.home)
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.expires is not None and not self.local:
cmd.append('-e')
if self.expires < time.gmtime(0):
cmd.append('')
else:
cmd.append(time.strftime(self.DATE_FORMAT, self.expires))
if self.password is not None:
cmd.append('-p')
if self.password_lock:
cmd.append('!%s' % self.password)
else:
cmd.append(self.password)
if self.create_home:
if not self.local:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
else:
cmd.append('-M')
if self.system:
cmd.append('-r')
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if not self.local or rc != 0:
return (rc, out, err)
if self.expires is not None:
if self.expires < time.gmtime(0):
lexpires = -1
else:
# Convert seconds since Epoch to days since Epoch
lexpires = int(math.floor(self.module.params['expires'])) // 86400
(rc, _out, _err) = self.execute_command([lchage_cmd, '-E', to_native(lexpires), self.name])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
if self.groups is None or len(self.groups) == 0:
return (rc, out, err)
for add_group in groups:
(rc, _out, _err) = self.execute_command([lgroupmod_cmd, '-M', self.name, add_group])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
return (rc, out, err)
def _check_usermod_append(self):
# check if this version of usermod can append groups
if self.local:
command_name = 'lusermod'
else:
command_name = 'usermod'
usermod_path = self.module.get_bin_path(command_name, True)
# for some reason, usermod --help cannot be used by non root
# on RH/Fedora, due to lack of execute bit for others
if not os.access(usermod_path, os.X_OK):
return False
cmd = [usermod_path, '--help']
(rc, data1, data2) = self.execute_command(cmd, obey_checkmode=False)
helpout = data1 + data2
# check if --append exists
lines = to_native(helpout).split('\n')
for line in lines:
if line.strip().startswith('-a, --append'):
return True
return False
def modify_user_usermod(self):
if self.local:
command_name = 'lusermod'
lgroupmod_cmd = self.module.get_bin_path('lgroupmod', True)
lgroupmod_add = set()
lgroupmod_del = set()
lchage_cmd = self.module.get_bin_path('lchage', True)
lexpires = None
else:
command_name = 'usermod'
cmd = [self.module.get_bin_path(command_name, True)]
info = self.user_info()
has_append = self._check_usermod_append()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(ginfo[2])
if self.groups is not None:
# get a list of all groups for the user, including the primary
current_groups = self.user_group_membership(exclude_primary=False)
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set(remove_existing=False, names_only=True)
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
if has_append:
cmd.append('-a')
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
if self.local:
if self.append:
lgroupmod_add = set(groups).difference(current_groups)
lgroupmod_del = set()
else:
lgroupmod_add = set(groups).difference(current_groups)
lgroupmod_del = set(current_groups).difference(groups)
else:
if self.append and not has_append:
cmd.append('-A')
cmd.append(','.join(group_diff))
else:
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
cmd.append('-d')
cmd.append(self.home)
if self.move_home:
cmd.append('-m')
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.expires is not None:
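            # The shadow expiry field can be empty for accounts with no expiry;
            # parse_shadow_file() normalizes that to -1 so this int() is safe
            # (see issue #71916).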
current_expires = int(self.user_password()[1])
if self.expires < time.gmtime(0):
if current_expires >= 0:
if self.local:
lexpires = -1
else:
cmd.append('-e')
cmd.append('')
else:
# Convert days since Epoch to seconds since Epoch as struct_time
current_expire_date = time.gmtime(current_expires * 86400)
                # Update if the current expiry is negative or differs from the requested one (comparing year, month, and day only)
if current_expires < 0 or current_expire_date[:3] != self.expires[:3]:
if self.local:
# Convert seconds since Epoch to days since Epoch
lexpires = int(math.floor(self.module.params['expires'])) // 86400
else:
cmd.append('-e')
cmd.append(time.strftime(self.DATE_FORMAT, self.expires))
# Lock if no password or unlocked, unlock only if locked
if self.password_lock and not info[1].startswith('!'):
cmd.append('-L')
elif self.password_lock is False and info[1].startswith('!'):
# usermod will refuse to unlock a user with no password, module shows 'changed' regardless
cmd.append('-U')
if self.update_password == 'always' and self.password is not None and info[1].lstrip('!') != self.password.lstrip('!'):
# Remove options that are mutually exclusive with -p
cmd = [c for c in cmd if c not in ['-U', '-L']]
cmd.append('-p')
if self.password_lock:
# Lock the account and set the hash in a single command
cmd.append('!%s' % self.password)
else:
cmd.append(self.password)
(rc, out, err) = (None, '', '')
# skip if no usermod changes to be made
if len(cmd) > 1:
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if not self.local or not (rc is None or rc == 0):
return (rc, out, err)
if lexpires is not None:
(rc, _out, _err) = self.execute_command([lchage_cmd, '-E', to_native(lexpires), self.name])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
if len(lgroupmod_add) == 0 and len(lgroupmod_del) == 0:
return (rc, out, err)
for add_group in lgroupmod_add:
(rc, _out, _err) = self.execute_command([lgroupmod_cmd, '-M', self.name, add_group])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
for del_group in lgroupmod_del:
(rc, _out, _err) = self.execute_command([lgroupmod_cmd, '-m', self.name, del_group])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
return (rc, out, err)
def group_exists(self, group):
try:
# Try group as a gid first
grp.getgrgid(int(group))
return True
except (ValueError, KeyError):
try:
grp.getgrnam(group)
return True
except KeyError:
return False
def group_info(self, group):
if not self.group_exists(group):
return False
try:
# Try group as a gid first
return list(grp.getgrgid(int(group)))
except (ValueError, KeyError):
return list(grp.getgrnam(group))
def get_groups_set(self, remove_existing=True, names_only=False):
if self.groups is None:
return None
info = self.user_info()
groups = set(x.strip() for x in self.groups.split(',') if x)
group_names = set()
for g in groups.copy():
if not self.group_exists(g):
self.module.fail_json(msg="Group %s does not exist" % (g))
group_info = self.group_info(g)
if info and remove_existing and group_info[2] == info[3]:
groups.remove(g)
elif names_only:
group_names.add(group_info[0])
if names_only:
return group_names
return groups
def user_group_membership(self, exclude_primary=True):
''' Return a list of groups the user belongs to '''
groups = []
info = self.get_pwd_info()
for group in grp.getgrall():
if self.name in group.gr_mem:
# Exclude the user's primary group by default
if not exclude_primary:
groups.append(group[0])
else:
if info[3] != group.gr_gid:
groups.append(group[0])
return groups
def user_exists(self):
# The pwd module does not distinguish between local and directory accounts.
        # Its output cannot be used to determine whether or not an account exists locally.
# It returns True if the account exists locally or in the directory, so instead
# look in the local PASSWORD file for an existing account.
if self.local:
if not os.path.exists(self.PASSWORDFILE):
self.module.fail_json(msg="'local: true' specified but unable to find local account file {0} to parse.".format(self.PASSWORDFILE))
exists = False
name_test = '{0}:'.format(self.name)
with open(self.PASSWORDFILE, 'rb') as f:
reversed_lines = f.readlines()[::-1]
for line in reversed_lines:
if line.startswith(to_bytes(name_test)):
exists = True
break
if not exists:
self.module.warn(
"'local: true' specified and user '{name}' was not found in {file}. "
"The local user account may already exist if the local account database exists "
"somewhere other than {file}.".format(file=self.PASSWORDFILE, name=self.name))
return exists
else:
try:
if pwd.getpwnam(self.name):
return True
except KeyError:
return False
def get_pwd_info(self):
if not self.user_exists():
return False
return list(pwd.getpwnam(self.name))
def user_info(self):
if not self.user_exists():
return False
info = self.get_pwd_info()
if len(info[1]) == 1 or len(info[1]) == 0:
info[1] = self.user_password()[0]
return info
def set_password_expire(self):
min_needs_change = self.password_expire_min is not None
max_needs_change = self.password_expire_max is not None
warn_needs_change = self.password_expire_warn is not None
if HAVE_SPWD:
try:
shadow_info = getspnam(to_bytes(self.name))
except ValueError:
return None, '', ''
min_needs_change &= self.password_expire_min != shadow_info.sp_min
max_needs_change &= self.password_expire_max != shadow_info.sp_max
warn_needs_change &= self.password_expire_warn != shadow_info.sp_warn
if not (min_needs_change or max_needs_change or warn_needs_change):
return (None, '', '') # target state already reached
command_name = 'chage'
cmd = [self.module.get_bin_path(command_name, True)]
if min_needs_change:
cmd.extend(["-m", self.password_expire_min])
if max_needs_change:
cmd.extend(["-M", self.password_expire_max])
if warn_needs_change:
cmd.extend(["-W", self.password_expire_warn])
cmd.append(self.name)
return self.execute_command(cmd)
def user_password(self):
passwd = ''
expires = ''
if HAVE_SPWD:
try:
shadow_info = getspnam(to_bytes(self.name))
passwd = to_native(shadow_info.sp_pwdp)
expires = shadow_info.sp_expire
return passwd, expires
except ValueError:
return passwd, expires
if not self.user_exists():
return passwd, expires
elif self.SHADOWFILE:
passwd, expires = self.parse_shadow_file()
return passwd, expires
def parse_shadow_file(self):
passwd = ''
expires = ''
if os.path.exists(self.SHADOWFILE) and os.access(self.SHADOWFILE, os.R_OK):
with open(self.SHADOWFILE, 'r') as f:
for line in f:
if line.startswith('%s:' % self.name):
passwd = line.split(':')[1]
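                        # An empty expiry field means the account never expires;
                        # normalize it to -1 so callers can int() it safely.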
expires = line.split(':')[self.SHADOWFILE_EXPIRE_INDEX] or -1
return passwd, expires
def get_ssh_key_path(self):
info = self.user_info()
if os.path.isabs(self.ssh_file):
ssh_key_file = self.ssh_file
else:
if not os.path.exists(info[5]) and not self.module.check_mode:
raise Exception('User %s home directory does not exist' % self.name)
ssh_key_file = os.path.join(info[5], self.ssh_file)
return ssh_key_file
def ssh_key_gen(self):
info = self.user_info()
overwrite = None
try:
ssh_key_file = self.get_ssh_key_path()
except Exception as e:
return (1, '', to_native(e))
ssh_dir = os.path.dirname(ssh_key_file)
if not os.path.exists(ssh_dir):
if self.module.check_mode:
return (0, '', '')
try:
os.mkdir(ssh_dir, int('0700', 8))
os.chown(ssh_dir, info[2], info[3])
except OSError as e:
return (1, '', 'Failed to create %s: %s' % (ssh_dir, to_native(e)))
if os.path.exists(ssh_key_file):
if self.force:
# ssh-keygen doesn't support overwriting the key interactively, so send 'y' to confirm
overwrite = 'y'
else:
return (None, 'Key already exists, use "force: yes" to overwrite', '')
cmd = [self.module.get_bin_path('ssh-keygen', True)]
cmd.append('-t')
cmd.append(self.ssh_type)
if self.ssh_bits > 0:
cmd.append('-b')
cmd.append(self.ssh_bits)
cmd.append('-C')
cmd.append(self.ssh_comment)
cmd.append('-f')
cmd.append(ssh_key_file)
if self.ssh_passphrase is not None:
if self.module.check_mode:
self.module.debug('In check mode, would have run: "%s"' % cmd)
return (0, '', '')
master_in_fd, slave_in_fd = pty.openpty()
master_out_fd, slave_out_fd = pty.openpty()
master_err_fd, slave_err_fd = pty.openpty()
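# ssh-keygen reads the passphrase from its controlling terminal rather
# than stdin, so plain pipes would hang at the prompt; pseudo-terminals
# let us detect the two passphrase prompts and answer them below.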
env = os.environ.copy()
env['LC_ALL'] = get_best_parsable_locale(self.module)
try:
p = subprocess.Popen([to_bytes(c) for c in cmd],
stdin=slave_in_fd,
stdout=slave_out_fd,
stderr=slave_err_fd,
preexec_fn=os.setsid,
env=env)
out_buffer = b''
err_buffer = b''
while p.poll() is None:
r_list = select.select([master_out_fd, master_err_fd], [], [], 1)[0]
first_prompt = b'Enter passphrase (empty for no passphrase):'
second_prompt = b'Enter same passphrase again'
prompt = first_prompt
for fd in r_list:
if fd == master_out_fd:
chunk = os.read(master_out_fd, 10240)
out_buffer += chunk
if prompt in out_buffer:
os.write(master_in_fd, to_bytes(self.ssh_passphrase, errors='strict') + b'\r')
prompt = second_prompt
else:
chunk = os.read(master_err_fd, 10240)
err_buffer += chunk
if prompt in err_buffer:
os.write(master_in_fd, to_bytes(self.ssh_passphrase, errors='strict') + b'\r')
prompt = second_prompt
if b'Overwrite (y/n)?' in out_buffer or b'Overwrite (y/n)?' in err_buffer:
# The key was created between us checking for existence and now
return (None, 'Key already exists', '')
rc = p.returncode
out = to_native(out_buffer)
err = to_native(err_buffer)
except OSError as e:
return (1, '', to_native(e))
else:
cmd.append('-N')
cmd.append('')
(rc, out, err) = self.execute_command(cmd, data=overwrite)
if rc == 0 and not self.module.check_mode:
# If the keys were successfully created, we should be able
# to tweak ownership.
os.chown(ssh_key_file, info[2], info[3])
os.chown('%s.pub' % ssh_key_file, info[2], info[3])
return (rc, out, err)
def ssh_key_fingerprint(self):
ssh_key_file = self.get_ssh_key_path()
if not os.path.exists(ssh_key_file):
return (1, 'SSH Key file %s does not exist' % ssh_key_file, '')
cmd = [self.module.get_bin_path('ssh-keygen', True)]
cmd.append('-l')
cmd.append('-f')
cmd.append(ssh_key_file)
return self.execute_command(cmd, obey_checkmode=False)
def get_ssh_public_key(self):
ssh_public_key_file = '%s.pub' % self.get_ssh_key_path()
try:
with open(ssh_public_key_file, 'r') as f:
ssh_public_key = f.read().strip()
except IOError:
return None
return ssh_public_key
def create_user(self):
# by default we use the create_user_useradd method
return self.create_user_useradd()
def remove_user(self):
# by default we use the remove_user_userdel method
return self.remove_user_userdel()
def modify_user(self):
# by default we use the modify_user_usermod method
return self.modify_user_usermod()
def create_homedir(self, path):
if not os.path.exists(path):
if self.skeleton is not None:
skeleton = self.skeleton
else:
skeleton = '/etc/skel'
if os.path.exists(skeleton) and skeleton != os.devnull:
try:
shutil.copytree(skeleton, path, symlinks=True)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
else:
try:
os.makedirs(path)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
# get umask from /etc/login.defs and set correct home mode
if os.path.exists(self.LOGIN_DEFS):
with open(self.LOGIN_DEFS, 'r') as f:
for line in f:
m = re.match(r'^UMASK\s+(\d+)$', line)
if m:
umask = int(m.group(1), 8)
mode = 0o777 & ~umask
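# e.g. a login.defs line of "UMASK 022" gives umask = 0o022,
# so mode = 0o777 & ~0o022 = 0o755 for the new home directory.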
try:
os.chmod(path, mode)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
def chown_homedir(self, uid, gid, path):
try:
os.chown(path, uid, gid)
for root, dirs, files in os.walk(path):
for d in dirs:
os.chown(os.path.join(root, d), uid, gid)
for f in files:
os.chown(os.path.join(root, f), uid, gid)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
# ===========================================
class FreeBsdUser(User):
"""
This is a FreeBSD User manipulation class - it uses the pw command
to manipulate the user database, followed by the chpass command
to change the password.
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'FreeBSD'
distribution = None
SHADOWFILE = '/etc/master.passwd'
SHADOWFILE_EXPIRE_INDEX = 6
DATE_FORMAT = '%d-%b-%Y'
def _handle_lock(self):
info = self.user_info()
if self.password_lock and not info[1].startswith('*LOCKED*'):
cmd = [
self.module.get_bin_path('pw', True),
'lock',
self.name
]
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
return self.execute_command(cmd)
elif self.password_lock is False and info[1].startswith('*LOCKED*'):
cmd = [
self.module.get_bin_path('pw', True),
'unlock',
self.name
]
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
return self.execute_command(cmd)
return (None, '', '')
def remove_user(self):
cmd = [
self.module.get_bin_path('pw', True),
'userdel',
'-n',
self.name
]
if self.remove:
cmd.append('-r')
return self.execute_command(cmd)
def create_user(self):
cmd = [
self.module.get_bin_path('pw', True),
'useradd',
'-n',
self.name,
]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.expires is not None:
cmd.append('-e')
if self.expires < time.gmtime(0):
cmd.append('0')
else:
cmd.append(str(calendar.timegm(self.expires)))
# system cannot be handled currently - should we error if it's requested?
# create the user
(rc, out, err) = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# we have to set the password in a second command
if self.password is not None:
cmd = [
self.module.get_bin_path('chpass', True),
'-p',
self.password,
self.name
]
_rc, _out, _err = self.execute_command(cmd)
if rc is None:
rc = _rc
out += _out
err += _err
# we have to lock/unlock the password in a distinct command
_rc, _out, _err = self._handle_lock()
if rc is None:
rc = _rc
out += _out
err += _err
return (rc, out, err)
def modify_user(self):
cmd = [
self.module.get_bin_path('pw', True),
'usermod',
'-n',
self.name
]
cmd_len = len(cmd)
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
if (info[5] != self.home and self.move_home) or (not os.path.exists(self.home) and self.create_home):
cmd.append('-m')
if info[5] != self.home:
cmd.append('-d')
cmd.append(self.home)
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
# find current login class
user_login_class = None
if os.path.exists(self.SHADOWFILE) and os.access(self.SHADOWFILE, os.R_OK):
with open(self.SHADOWFILE, 'r') as f:
for line in f:
if line.startswith('%s:' % self.name):
user_login_class = line.split(':')[4]
# act only if login_class change
if self.login_class != user_login_class:
cmd.append('-L')
cmd.append(self.login_class)
if self.groups is not None:
current_groups = self.user_group_membership()
groups = self.get_groups_set(names_only=True)
group_diff = set(current_groups).symmetric_difference(groups)
groups_need_mod = False
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
new_groups = groups
if self.append:
new_groups = groups | set(current_groups)
cmd.append(','.join(new_groups))
if self.expires is not None:
current_expires = int(self.user_password()[1])
# If expiration is negative or zero and the current expiration is greater than zero, disable expiration.
# In OpenBSD, setting expiration to zero disables expiration. It does not expire the account.
if self.expires <= time.gmtime(0):
if current_expires > 0:
cmd.append('-e')
cmd.append('0')
else:
# Convert days since Epoch to seconds since Epoch as struct_time
current_expire_date = time.gmtime(current_expires)
# Current expires is negative or we compare year, month, and day only
if current_expires <= 0 or current_expire_date[:3] != self.expires[:3]:
cmd.append('-e')
cmd.append(str(calendar.timegm(self.expires)))
(rc, out, err) = (None, '', '')
# modify the user if cmd will do anything
if cmd_len != len(cmd):
(rc, _out, _err) = self.execute_command(cmd)
out += _out
err += _err
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# we have to set the password in a second command
if self.update_password == 'always' and self.password is not None and info[1].lstrip('*LOCKED*') != self.password.lstrip('*LOCKED*'):
cmd = [
self.module.get_bin_path('chpass', True),
'-p',
self.password,
self.name
]
_rc, _out, _err = self.execute_command(cmd)
if rc is None:
rc = _rc
out += _out
err += _err
# we have to lock/unlock the password in a distinct command
_rc, _out, _err = self._handle_lock()
if rc is None:
rc = _rc
out += _out
err += _err
return (rc, out, err)
class DragonFlyBsdUser(FreeBsdUser):
"""
This is a DragonFlyBSD User manipulation class - it inherits the
FreeBsdUser class behaviors, such as using the pw command to
manipulate the user database, followed by the chpass command
to change the password.
"""
platform = 'DragonFly'
class OpenBSDUser(User):
"""
This is an OpenBSD User manipulation class.
Main differences are that OpenBSD:-
- has no concept of a "system" account.
- has no force-delete option for users
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'OpenBSD'
distribution = None
SHADOWFILE = '/etc/master.passwd'
def create_user(self):
cmd = [self.module.get_bin_path('useradd', True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.password is not None and self.password != '*':
cmd.append('-p')
cmd.append(self.password)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
cmd.append(self.name)
return self.execute_command(cmd)
def remove_user_userdel(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def modify_user(self):
cmd = [self.module.get_bin_path('usermod', True)]
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups_option = '-S'
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set(names_only=True)
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_option = '-G'
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append(groups_option)
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
# find current login class
user_login_class = None
userinfo_cmd = [self.module.get_bin_path('userinfo', True), self.name]
(rc, out, err) = self.execute_command(userinfo_cmd, obey_checkmode=False)
for line in out.splitlines():
tokens = line.split()
if tokens[0] == 'class' and len(tokens) == 2:
user_login_class = tokens[1]
# act only if login_class change
if self.login_class != user_login_class:
cmd.append('-L')
cmd.append(self.login_class)
if self.password_lock and not info[1].startswith('*'):
cmd.append('-Z')
elif self.password_lock is False and info[1].startswith('*'):
cmd.append('-U')
if self.update_password == 'always' and self.password is not None \
and self.password != '*' and info[1] != self.password:
cmd.append('-p')
cmd.append(self.password)
# skip if no changes to be made
if len(cmd) == 1:
return (None, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
class NetBSDUser(User):
"""
This is a NetBSD User manipulation class.
Main differences are that NetBSD:-
- has no concept of a "system" account.
- has no force-delete option for users
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'NetBSD'
distribution = None
SHADOWFILE = '/etc/master.passwd'
def create_user(self):
cmd = [self.module.get_bin_path('useradd', True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
if len(groups) > 16:
self.module.fail_json(msg="Too many groups (%d) NetBSD allows for 16 max." % len(groups))
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.password is not None:
cmd.append('-p')
cmd.append(self.password)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
cmd.append(self.name)
return self.execute_command(cmd)
def remove_user_userdel(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def modify_user(self):
cmd = [self.module.get_bin_path('usermod', True)]
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set(names_only=True)
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups = set(current_groups).union(groups)
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
if len(groups) > 16:
self.module.fail_json(msg="Too many groups (%d) NetBSD allows for 16 max." % len(groups))
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd.append('-p')
cmd.append(self.password)
if self.password_lock and not info[1].startswith('*LOCKED*'):
cmd.append('-C yes')
elif self.password_lock is False and info[1].startswith('*LOCKED*'):
cmd.append('-C no')
# skip if no changes to be made
if len(cmd) == 1:
return (None, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
class SunOS(User):
"""
This is a SunOS User manipulation class - the main difference between
this class and the generic user class is that Solaris-type distros
don't support the concept of a "system" account and we need to
edit the /etc/shadow file manually to set a password. (Ugh)
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
- user_info()
"""
platform = 'SunOS'
distribution = None
SHADOWFILE = '/etc/shadow'
USER_ATTR = '/etc/user_attr'
def get_password_defaults(self):
# Read password aging defaults
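# /etc/default/passwd typically holds lines like (illustrative):
#   MINWEEKS=2
#   MAXWEEKS=13
#   WARNWEEKS=1
# The values are in weeks; they are multiplied by 7 to get days when
# the shadow file is rewritten.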
try:
minweeks = ''
maxweeks = ''
warnweeks = ''
with open("/etc/default/passwd", 'r') as f:
for line in f:
line = line.strip()
if (line.startswith('#') or line == ''):
continue
m = re.match(r'^([^#]*)#(.*)$', line)
if m: # The line contains a hash / comment
line = m.group(1)
key, value = line.split('=')
if key == "MINWEEKS":
minweeks = value.rstrip('\n')
elif key == "MAXWEEKS":
maxweeks = value.rstrip('\n')
elif key == "WARNWEEKS":
warnweeks = value.rstrip('\n')
except Exception as err:
self.module.fail_json(msg="failed to read /etc/default/passwd: %s" % to_native(err))
return (minweeks, maxweeks, warnweeks)
def remove_user(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def create_user(self):
cmd = [self.module.get_bin_path('useradd', True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
if self.profile is not None:
cmd.append('-P')
cmd.append(self.profile)
if self.authorization is not None:
cmd.append('-A')
cmd.append(self.authorization)
if self.role is not None:
cmd.append('-R')
cmd.append(self.role)
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
if not self.module.check_mode:
# we have to set the password by editing the /etc/shadow file
if self.password is not None:
self.backup_shadow()
minweeks, maxweeks, warnweeks = self.get_password_defaults()
try:
lines = []
with open(self.SHADOWFILE, 'rb') as f:
for line in f:
line = to_native(line, errors='surrogate_or_strict')
fields = line.strip().split(':')
if not fields[0] == self.name:
lines.append(line)
continue
fields[1] = self.password
fields[2] = str(int(time.time() // 86400))
if minweeks:
try:
fields[3] = str(int(minweeks) * 7)
except ValueError:
# mirror solaris, which allows for any value in this field, and ignores anything that is not an int.
pass
if maxweeks:
try:
fields[4] = str(int(maxweeks) * 7)
except ValueError:
# mirror solaris, which allows for any value in this field, and ignores anything that is not an int.
pass
if warnweeks:
try:
fields[5] = str(int(warnweeks) * 7)
except ValueError:
# mirror solaris, which allows for any value in this field, and ignores anything that is not an int.
pass
line = ':'.join(fields)
lines.append('%s\n' % line)
with open(self.SHADOWFILE, 'w+') as f:
f.writelines(lines)
except Exception as err:
self.module.fail_json(msg="failed to update users password: %s" % to_native(err))
return (rc, out, err)
def modify_user_usermod(self):
cmd = [self.module.get_bin_path('usermod', True)]
cmd_len = len(cmd)
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups = self.get_groups_set(names_only=True)
group_diff = set(current_groups).symmetric_difference(groups)
groups_need_mod = False
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
new_groups = groups
if self.append:
new_groups.update(current_groups)
cmd.append(','.join(new_groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.profile is not None and info[7] != self.profile:
cmd.append('-P')
cmd.append(self.profile)
if self.authorization is not None and info[8] != self.authorization:
cmd.append('-A')
cmd.append(self.authorization)
if self.role is not None and info[9] != self.role:
cmd.append('-R')
cmd.append(self.role)
# modify the user if cmd will do anything
if cmd_len != len(cmd):
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
else:
(rc, out, err) = (None, '', '')
# we have to set the password by editing the /etc/shadow file
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
self.backup_shadow()
(rc, out, err) = (0, '', '')
if not self.module.check_mode:
minweeks, maxweeks, warnweeks = self.get_password_defaults()
try:
lines = []
with open(self.SHADOWFILE, 'rb') as f:
for line in f:
line = to_native(line, errors='surrogate_or_strict')
fields = line.strip().split(':')
if not fields[0] == self.name:
lines.append(line)
continue
fields[1] = self.password
fields[2] = str(int(time.time() // 86400))
if minweeks:
fields[3] = str(int(minweeks) * 7)
if maxweeks:
fields[4] = str(int(maxweeks) * 7)
if warnweeks:
fields[5] = str(int(warnweeks) * 7)
line = ':'.join(fields)
lines.append('%s\n' % line)
with open(self.SHADOWFILE, 'w+') as f:
f.writelines(lines)
rc = 0
except Exception as err:
self.module.fail_json(msg="failed to update users password: %s" % to_native(err))
return (rc, out, err)
def user_info(self):
info = super(SunOS, self).user_info()
if info:
info += self._user_attr_info()
return info
def _user_attr_info(self):
info = [''] * 3
with open(self.USER_ATTR, 'r') as file_handler:
for line in file_handler:
lines = line.strip().split('::::')
if lines[0] == self.name:
tmp = dict(x.split('=') for x in lines[1].split(';'))
info[0] = tmp.get('profiles', '')
info[1] = tmp.get('auths', '')
info[2] = tmp.get('roles', '')
return info
class DarwinUser(User):
"""
This is a Darwin macOS User manipulation class.
Main differences are that Darwin:-
- Handles accounts in a database managed by dscl(1)
- Has no useradd/groupadd
- Does not create home directories
- User password must be cleartext
- UID must be given
- System users must be under 500
This overrides the following methods from the generic class:-
- user_exists()
- create_user()
- remove_user()
- modify_user()
"""
platform = 'Darwin'
distribution = None
SHADOWFILE = None
dscl_directory = '.'
fields = [
('comment', 'RealName'),
('home', 'NFSHomeDirectory'),
('shell', 'UserShell'),
('uid', 'UniqueID'),
('group', 'PrimaryGroupID'),
('hidden', 'IsHidden'),
]
def __init__(self, module):
super(DarwinUser, self).__init__(module)
# make the user hidden if the option is set, or defer to the system option
if self.hidden is None:
if self.system:
self.hidden = 1
elif self.hidden:
self.hidden = 1
else:
self.hidden = 0
# add hidden to processing if set
if self.hidden is not None:
self.fields.append(('hidden', 'IsHidden'))
def _get_dscl(self):
return [self.module.get_bin_path('dscl', True), self.dscl_directory]
def _list_user_groups(self):
cmd = self._get_dscl()
cmd += ['-search', '/Groups', 'GroupMembership', self.name]
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
groups = []
for line in out.splitlines():
if line.startswith(' ') or line.startswith(')'):
continue
groups.append(line.split()[0])
return groups
def _get_user_property(self, property):
'''Return user PROPERTY as given by dscl(1) read, or None if not found.'''
cmd = self._get_dscl()
cmd += ['-read', '/Users/%s' % self.name, property]
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
if rc != 0:
return None
# from dscl(1)
# if property contains embedded spaces, the list will instead be
# displayed one entry per line, starting on the line after the key.
lines = out.splitlines()
# sys.stderr.write('*** |%s| %s -> %s\n' % (property, out, lines))
if len(lines) == 1:
return lines[0].split(': ')[1]
if len(lines) > 2:
return '\n'.join([lines[1].strip()] + lines[2:])
if len(lines) == 2:
return lines[1].strip()
return None
def _get_next_uid(self, system=None):
'''
Return the next available uid. If system=True, the uid
should be below 500, if possible.
'''
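# `dscl . -list /Users UniqueID` prints one "name uid" pair per line,
# e.g. (illustrative):
#   _www        70
#   someuser    501
# so the last whitespace-separated token on each line is the uid.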
cmd = self._get_dscl()
cmd += ['-list', '/Users', 'UniqueID']
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
if rc != 0:
self.module.fail_json(
msg="Unable to get the next available uid",
rc=rc,
out=out,
err=err
)
max_uid = 0
max_system_uid = 0
for line in out.splitlines():
current_uid = int(line.split(' ')[-1])
if max_uid < current_uid:
max_uid = current_uid
if max_system_uid < current_uid and current_uid < 500:
max_system_uid = current_uid
if system and (0 < max_system_uid < 499):
return max_system_uid + 1
return max_uid + 1
def _change_user_password(self):
'''Change the password for SELF.NAME to SELF.PASSWORD.
Please note that the password must be cleartext.
'''
# some documentation on how passwords are stored on OSX:
# http://blog.lostpassword.com/2012/07/cracking-mac-os-x-lion-accounts-passwords/
# http://null-byte.wonderhowto.com/how-to/hack-mac-os-x-lion-passwords-0130036/
# http://pastebin.com/RYqxi7Ca
# on OSX 10.8+ hash is SALTED-SHA512-PBKDF2
# https://pythonhosted.org/passlib/lib/passlib.hash.pbkdf2_digest.html
# https://gist.github.com/nueh/8252572
cmd = self._get_dscl()
if self.password:
cmd += ['-passwd', '/Users/%s' % self.name, self.password]
else:
cmd += ['-create', '/Users/%s' % self.name, 'Password', '*']
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Error when changing password', err=err, out=out, rc=rc)
return (rc, out, err)
def _make_group_numerical(self):
'''Convert SELF.GROUP to its numerical GID, stored as a string suitable for dscl.'''
if self.group is None:
self.group = 'nogroup'
try:
self.group = grp.getgrnam(self.group).gr_gid
except KeyError:
self.module.fail_json(msg='Group "%s" not found. Try to create it first using "group" module.' % self.group)
# We need to pass a string to dscl
self.group = str(self.group)
def __modify_group(self, group, action):
'''Add SELF.NAME to or remove it from GROUP, depending on ACTION.
ACTION can be 'add'; any other value is treated as 'remove'. '''
if action == 'add':
option = '-a'
else:
option = '-d'
cmd = ['dseditgroup', '-o', 'edit', option, self.name, '-t', 'user', group]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot %s user "%s" to group "%s".'
% (action, self.name, group), err=err, out=out, rc=rc)
return (rc, out, err)
def _modify_group(self):
'''Synchronize SELF.NAME's group membership with SELF.GROUPS,
adding and removing groups as needed (honoring SELF.APPEND).'''
rc = 0
out = ''
err = ''
changed = False
current = set(self._list_user_groups())
if self.groups is not None:
target = self.get_groups_set(names_only=True)
else:
target = set([])
if self.append is False:
for remove in current - target:
(_rc, _out, _err) = self.__modify_group(remove, 'delete')
rc += _rc
out += _out
err += _err
changed = True
for add in target - current:
(_rc, _out, _err) = self.__modify_group(add, 'add')
rc += _rc
out += _out
err += _err
changed = True
return (rc, out, err, changed)
def _update_system_user(self):
'''Hide or show the user on the login window according to SELF.SYSTEM.
Returns 0 if a change has been made, None otherwise.'''
plist_file = '/Library/Preferences/com.apple.loginwindow.plist'
# http://support.apple.com/kb/HT5017?viewlocale=en_US
cmd = ['defaults', 'read', plist_file, 'HiddenUsersList']
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
# returned value is
# (
# "_userA",
# "_UserB",
# userc
# )
hidden_users = []
for x in out.splitlines()[1:-1]:
try:
x = x.split('"')[1]
except IndexError:
x = x.strip()
hidden_users.append(x)
if self.system:
if self.name not in hidden_users:
cmd = ['defaults', 'write', plist_file, 'HiddenUsersList', '-array-add', self.name]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot add user "%s" to the hidden user list.' % self.name, err=err, out=out, rc=rc)
return 0
else:
if self.name in hidden_users:
del (hidden_users[hidden_users.index(self.name)])
cmd = ['defaults', 'write', plist_file, 'HiddenUsersList', '-array'] + hidden_users
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot remove user "%s" from the hidden user list.' % self.name, err=err, out=out, rc=rc)
return 0
def user_exists(self):
'''Check if SELF.NAME is a known user on the system.'''
cmd = self._get_dscl()
cmd += ['-read', '/Users/%s' % self.name, 'UniqueID']
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
return rc == 0
def remove_user(self):
'''Delete SELF.NAME. If SELF.FORCE is true, remove its home directory.'''
info = self.user_info()
cmd = self._get_dscl()
cmd += ['-delete', '/Users/%s' % self.name]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot delete user "%s".' % self.name, err=err, out=out, rc=rc)
if self.force:
if os.path.exists(info[5]):
shutil.rmtree(info[5])
out += "Removed %s" % info[5]
return (rc, out, err)
def create_user(self, command_name='dscl'):
cmd = self._get_dscl()
cmd += ['-create', '/Users/%s' % self.name]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot create user "%s".' % self.name, err=err, out=out, rc=rc)
# Make the GECOS (i.e. display name) default to the username
if self.comment is None:
self.comment = self.name
# Make user group default to 'staff'
if self.group is None:
self.group = 'staff'
self._make_group_numerical()
if self.uid is None:
self.uid = str(self._get_next_uid(self.system))
# Homedir is not created by default
if self.create_home:
if self.home is None:
self.home = '/Users/%s' % self.name
if not self.module.check_mode:
if not os.path.exists(self.home):
os.makedirs(self.home)
self.chown_homedir(int(self.uid), int(self.group), self.home)
# dscl sets shell to /usr/bin/false when UserShell is not specified
# so set the shell to /bin/bash when the user is not a system user
if not self.system and self.shell is None:
self.shell = '/bin/bash'
for field in self.fields:
if field[0] in self.__dict__ and self.__dict__[field[0]]:
cmd = self._get_dscl()
cmd += ['-create', '/Users/%s' % self.name, field[1], self.__dict__[field[0]]]
(rc, _out, _err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot add property "%s" to user "%s".' % (field[0], self.name), err=err, out=out, rc=rc)
out += _out
err += _err
if rc != 0:
return (rc, _out, _err)
(rc, _out, _err) = self._change_user_password()
out += _out
err += _err
self._update_system_user()
# here we don't care about change status since it is a creation,
# thus changed is always true.
if self.groups:
(rc, _out, _err, changed) = self._modify_group()
out += _out
err += _err
return (rc, out, err)
def modify_user(self):
changed = None
out = ''
err = ''
if self.group:
self._make_group_numerical()
for field in self.fields:
if field[0] in self.__dict__ and self.__dict__[field[0]]:
current = self._get_user_property(field[1])
if current is None or current != to_text(self.__dict__[field[0]]):
cmd = self._get_dscl()
cmd += ['-create', '/Users/%s' % self.name, field[1], self.__dict__[field[0]]]
(rc, _out, _err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(
msg='Cannot update property "%s" for user "%s".'
% (field[0], self.name), err=err, out=out, rc=rc)
changed = rc
out += _out
err += _err
if self.update_password == 'always' and self.password is not None:
(rc, _out, _err) = self._change_user_password()
out += _out
err += _err
changed = rc
if self.groups:
(rc, _out, _err, _changed) = self._modify_group()
out += _out
err += _err
if _changed is True:
changed = rc
rc = self._update_system_user()
if rc == 0:
changed = rc
return (changed, out, err)
class AIX(User):
"""
This is an AIX User manipulation class.
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
- parse_shadow_file()
"""
platform = 'AIX'
distribution = None
SHADOWFILE = '/etc/security/passwd'
def remove_user(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def create_user_useradd(self, command_name='useradd'):
cmd = [self.module.get_bin_path(command_name, True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
# set password with chpasswd
if self.password is not None:
cmd = []
cmd.append(self.module.get_bin_path('chpasswd', True))
cmd.append('-e')
cmd.append('-c')
self.execute_command(cmd, data="%s:%s" % (self.name, self.password))
return (rc, out, err)
def modify_user_usermod(self):
cmd = [self.module.get_bin_path('usermod', True)]
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set(names_only=True)
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
# skip if no changes to be made
if len(cmd) == 1:
(rc, out, err) = (None, '', '')
else:
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
# set password with chpasswd
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd = []
cmd.append(self.module.get_bin_path('chpasswd', True))
cmd.append('-e')
cmd.append('-c')
(rc2, out2, err2) = self.execute_command(cmd, data="%s:%s" % (self.name, self.password))
else:
(rc2, out2, err2) = (None, '', '')
if rc is not None:
return (rc, out + out2, err + err2)
else:
return (rc2, out + out2, err + err2)
def parse_shadow_file(self):
"""Example AIX shadowfile data:
nobody:
password = *
operator1:
password = {ssha512}06$xxxxxxxxxxxx....
lastupdate = 1549558094
test1:
password = *
lastupdate = 1553695126
"""
b_name = to_bytes(self.name)
b_passwd = b''
b_expires = b''
if os.path.exists(self.SHADOWFILE) and os.access(self.SHADOWFILE, os.R_OK):
with open(self.SHADOWFILE, 'rb') as bf:
b_lines = bf.readlines()
b_passwd_line = b''
b_expires_line = b''
try:
for index, b_line in enumerate(b_lines):
# Get password and lastupdate lines which come after the username
if b_line.startswith(b'%s:' % b_name):
b_passwd_line = b_lines[index + 1]
b_expires_line = b_lines[index + 2]
break
# Sanity check the lines because sometimes both are not present
if b' = ' in b_passwd_line:
b_passwd = b_passwd_line.split(b' = ', 1)[-1].strip()
if b' = ' in b_expires_line:
b_expires = b_expires_line.split(b' = ', 1)[-1].strip()
except IndexError:
self.module.fail_json(msg='Failed to parse shadow file %s' % self.SHADOWFILE)
passwd = to_native(b_passwd)
expires = to_native(b_expires) or -1
return passwd, expires
class HPUX(User):
"""
This is an HP-UX User manipulation class.
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'HP-UX'
distribution = None
SHADOWFILE = '/etc/shadow'
def create_user(self):
cmd = ['/usr/sam/lbin/useradd.sam']
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.password is not None:
cmd.append('-p')
cmd.append(self.password)
if self.create_home:
cmd.append('-m')
else:
cmd.append('-M')
if self.system:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def remove_user(self):
cmd = ['/usr/sam/lbin/userdel.sam']
if self.force:
cmd.append('-F')
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def modify_user(self):
cmd = ['/usr/sam/lbin/usermod.sam']
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set(remove_existing=False, names_only=True)
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
new_groups = groups
if self.append:
new_groups = groups | set(current_groups)
cmd.append(','.join(new_groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
cmd.append('-d')
cmd.append(self.home)
if self.move_home:
cmd.append('-m')
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd.append('-F')
cmd.append('-p')
cmd.append(self.password)
# skip if no changes to be made
if len(cmd) == 1:
return (None, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
class BusyBox(User):
"""
This is the BusyBox class for use on systems that have adduser, deluser,
and delgroup commands. It overrides the following methods:
- create_user()
- remove_user()
- modify_user()
"""
def create_user(self):
cmd = [self.module.get_bin_path('adduser', True)]
cmd.append('-D')
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg='Group {0} does not exist'.format(self.group))
cmd.append('-G')
cmd.append(self.group)
if self.comment is not None:
cmd.append('-g')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-h')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if not self.create_home:
cmd.append('-H')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
if self.system:
cmd.append('-S')
cmd.append(self.name)
rc, out, err = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
if self.password is not None:
cmd = [self.module.get_bin_path('chpasswd', True)]
cmd.append('--encrypted')
data = '{name}:{password}'.format(name=self.name, password=self.password)
rc, out, err = self.execute_command(cmd, data=data)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# Add to additional groups
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
add_cmd_bin = self.module.get_bin_path('adduser', True)
for group in groups:
cmd = [add_cmd_bin, self.name, group]
rc, out, err = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
return rc, out, err
def remove_user(self):
cmd = [
self.module.get_bin_path('deluser', True),
self.name
]
if self.remove:
cmd.append('--remove-home')
return self.execute_command(cmd)
def modify_user(self):
current_groups = self.user_group_membership()
groups = []
rc = None
out = ''
err = ''
info = self.user_info()
add_cmd_bin = self.module.get_bin_path('adduser', True)
remove_cmd_bin = self.module.get_bin_path('delgroup', True)
# Manage group membership
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
for g in groups:
if g in group_diff:
add_cmd = [add_cmd_bin, self.name, g]
rc, out, err = self.execute_command(add_cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
for g in group_diff:
if g not in groups and not self.append:
remove_cmd = [remove_cmd_bin, self.name, g]
rc, out, err = self.execute_command(remove_cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# Manage password
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd = [self.module.get_bin_path('chpasswd', True)]
cmd.append('--encrypted')
data = '{name}:{password}'.format(name=self.name, password=self.password)
rc, out, err = self.execute_command(cmd, data=data)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
return rc, out, err
class Alpine(BusyBox):
"""
This is the Alpine User manipulation class. It inherits the BusyBox class
behaviors such as using adduser and deluser commands.
"""
platform = 'Linux'
distribution = 'Alpine'
def main():
ssh_defaults = dict(
bits=0,
type='rsa',
passphrase=None,
comment='ansible-generated on %s' % socket.gethostname()
)
module = AnsibleModule(
argument_spec=dict(
state=dict(type='str', default='present', choices=['absent', 'present']),
name=dict(type='str', required=True, aliases=['user']),
uid=dict(type='int'),
non_unique=dict(type='bool', default=False),
group=dict(type='str'),
groups=dict(type='list', elements='str'),
comment=dict(type='str'),
home=dict(type='path'),
shell=dict(type='str'),
password=dict(type='str', no_log=True),
login_class=dict(type='str'),
password_expire_max=dict(type='int', no_log=False),
password_expire_min=dict(type='int', no_log=False),
password_expire_warn=dict(type='int', no_log=False),
# following options are specific to macOS
hidden=dict(type='bool'),
# following options are specific to selinux
seuser=dict(type='str'),
# following options are specific to userdel
force=dict(type='bool', default=False),
remove=dict(type='bool', default=False),
# following options are specific to useradd
create_home=dict(type='bool', default=True, aliases=['createhome']),
skeleton=dict(type='str'),
system=dict(type='bool', default=False),
# following options are specific to usermod
move_home=dict(type='bool', default=False),
append=dict(type='bool', default=False),
# following are specific to ssh key generation
generate_ssh_key=dict(type='bool'),
ssh_key_bits=dict(type='int', default=ssh_defaults['bits']),
ssh_key_type=dict(type='str', default=ssh_defaults['type']),
ssh_key_file=dict(type='path'),
ssh_key_comment=dict(type='str', default=ssh_defaults['comment']),
ssh_key_passphrase=dict(type='str', no_log=True),
update_password=dict(type='str', default='always', choices=['always', 'on_create'], no_log=False),
expires=dict(type='float'),
password_lock=dict(type='bool', no_log=False),
local=dict(type='bool'),
profile=dict(type='str'),
authorization=dict(type='str'),
role=dict(type='str'),
umask=dict(type='str'),
),
supports_check_mode=True,
)
user = User(module)
user.check_password_encrypted()
module.debug('User instantiated - platform %s' % user.platform)
if user.distribution:
module.debug('User instantiated - distribution %s' % user.distribution)
rc = None
out = ''
err = ''
result = {}
result['name'] = user.name
result['state'] = user.state
if user.state == 'absent':
if user.user_exists():
if module.check_mode:
module.exit_json(changed=True)
(rc, out, err) = user.remove_user()
if rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
result['force'] = user.force
result['remove'] = user.remove
elif user.state == 'present':
if not user.user_exists():
if module.check_mode:
module.exit_json(changed=True)
# Check to see if the provided home path contains parent directories
# that do not exist.
path_needs_parents = False
if user.home and user.create_home:
parent = os.path.dirname(user.home)
if not os.path.isdir(parent):
path_needs_parents = True
(rc, out, err) = user.create_user()
# If the home path had parent directories that needed to be created,
# make sure file permissions are correct in the created home directory.
if path_needs_parents:
info = user.user_info()
if info is not False:
user.chown_homedir(info[2], info[3], user.home)
if module.check_mode:
result['system'] = user.name
else:
result['system'] = user.system
result['create_home'] = user.create_home
else:
# modify user (note: this function is check mode aware)
(rc, out, err) = user.modify_user()
result['append'] = user.append
result['move_home'] = user.move_home
if rc is not None and rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
if user.password is not None:
result['password'] = 'NOT_LOGGING_PASSWORD'
if rc is None:
result['changed'] = False
else:
result['changed'] = True
if out:
result['stdout'] = out
if err:
result['stderr'] = err
if user.user_exists() and user.state == 'present':
info = user.user_info()
if info is False:
result['msg'] = "failed to look up user name: %s" % user.name
result['failed'] = True
result['uid'] = info[2]
result['group'] = info[3]
result['comment'] = info[4]
result['home'] = info[5]
result['shell'] = info[6]
if user.groups is not None:
result['groups'] = user.groups
# handle missing homedirs
info = user.user_info()
if user.home is None:
user.home = info[5]
if not os.path.exists(user.home) and user.create_home:
if not module.check_mode:
user.create_homedir(user.home)
user.chown_homedir(info[2], info[3], user.home)
result['changed'] = True
# deal with ssh key
if user.sshkeygen:
# generate ssh key (note: this function is check mode aware)
(rc, out, err) = user.ssh_key_gen()
if rc is not None and rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
if rc == 0:
result['changed'] = True
(rc, out, err) = user.ssh_key_fingerprint()
if rc == 0:
result['ssh_fingerprint'] = out.strip()
else:
result['ssh_fingerprint'] = err.strip()
result['ssh_key_file'] = user.get_ssh_key_path()
result['ssh_public_key'] = user.get_ssh_public_key()
(rc, out, err) = user.set_password_expire()
if rc is None:
pass # target state reached, nothing to do
else:
if rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
else:
result['changed'] = True
module.exit_json(**result)
# import module snippets
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 71,916 |
user module: ValueError: invalid literal for int() with base 10: '' when attempting to set 'expires' on a user already in /etc/passwd
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
The Ansible user module fails while casting the `expires` value to an `int` for a user that already has an entry in `/etc/passwd`, raising `ValueError: invalid literal for int() with base 10: ''`.
https://github.com/ansible/ansible/blob/31ddca4c0db2584b0a68880bdea1d97bd8b22032/lib/ansible/modules/user.py#L800
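For context, the cast fails because the `expires` field read back from the shadow file can be an empty string when the account was added to `/etc/passwd` by hand. A minimal guard around the cast (an illustrative sketch only, not necessarily the change merged in the linked fix) could look like:
```python
# Hypothetical guard around the failing cast in modify_user_usermod();
# an empty shadow field means no expiry has ever been recorded.
raw_expires = self.user_password()[1]  # may be '' for hand-added users
try:
    current_expires = int(raw_expires)
except ValueError:
    current_expires = -1  # treat a missing or empty field as non-expiring
```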
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
user module
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.6
config file = /etc/ansible/ansible.cfg
configured module search path = ['/export/home/rrotaru/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /tech/home/rrotaru/venv/lib64/python3.6/site-packages/ansible
executable location = /tech/home/rrotaru/venv/bin/ansible
python version = 3.6.8 (default, Jun 11 2019, 15:15:01) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
```
ansible 2.10.1
config file = /etc/ansible/ansible.cfg
configured module search path = ['/export/home/rrotaru/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /tech/home/rrotaru/venv/lib64/python3.6/site-packages/ansible
executable location = /tech/home/rrotaru/venv/bin/ansible
python version = 3.6.8 (default, Jun 11 2019, 15:15:01) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
N/A
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Red Hat Enterprise Linux Server release 7.5 (Maipo)
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Run the playbook below.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
- hosts: localhost
become: yes
tasks:
- lineinfile:
path: /etc/passwd
line: "dummy::123:123::/export/home/dummy:/bin/ksh"
regexp: "^dummy.*"
state: present
- user:
name: dummy
state: present
uid: 123
expires: -1
password: "dummy#password | password_hash('sha512') }}"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
User module should update the user expiration to non-expiring.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
User module fails with python ValueError.
<!--- Paste verbatim command output between quotes -->
```paste below
(venv) $ ansible-playbook expirestest.yml -K -v
Using /etc/ansible/ansible.cfg as config file
BECOME password:
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit
localhost does not match 'all'
PLAY [localhost] *******************************************************************************
TASK [Gathering Facts] *************************************************************************
ok: [localhost]
TASK [lineinfile] ******************************************************************************
changed: [localhost] => {"backup": "", "changed": true, "msg": "line added"}
TASK [user] ************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ValueError: invalid literal for int() with base 10: ''
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/export/home/rrotaru/.ansible/tmp/ansible-tmp-1600960729.601071-12482-194155520219212/AnsiballZ_user.py\", line 102, in <module>\n _ansiballz_main()\n File \"/export/home/rrotaru/.ansible/tmp/ansible-tmp-1600960729.601071-12482-194155520219212/AnsiballZ_user.py\", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/export/home/rrotaru/.ansible/tmp/ansible-tmp-1600960729.601071-12482-194155520219212/AnsiballZ_user.py\", line 40, in invoke_module\n runpy.run_module(mod_name='ansible.modules.user', init_globals=None, run_name='__main__', alter_sys=True)\n File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n mod_name, mod_spec, pkg_name, script_name)\n File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_user_payload_v5n53kd3/ansible_user_payload.zip/ansible/modules/user.py\", line 3026, in <module>\n File \"/tmp/ansible_user_payload_v5n53kd3/ansible_user_payload.zip/ansible/modules/user.py\", line 2965, in main\n File \"/tmp/ansible_user_payload_v5n53kd3/ansible_user_payload.zip/ansible/modules/user.py\", line 1111, in modify_user\n File \"/tmp/ansible_user_payload_v5n53kd3/ansible_user_payload.zip/ansible/modules/user.py\", line 799, in modify_user_usermod\nValueError: invalid literal for int() with base 10: ''\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
PLAY RECAP *************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
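The final frame in the traceback (`modify_user_usermod`, user.py line 799) is an `int()` cast on the account's current expiry field, which is empty for the `/etc/passwd`-only account created by the lineinfile task. The failure reproduces outside Ansible (a minimal illustration, not module code):

```python
# The expiry read back for the hand-written passwd entry is an empty string.
int('')  # ValueError: invalid literal for int() with base 10: ''
```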
|
https://github.com/ansible/ansible/issues/71916
|
https://github.com/ansible/ansible/pull/75194
|
2e6d849bdb80364b5229f4b9190935e84ee9bfe8
|
116948cd1468b8a3e34af8e3671f4089e1f7584c
| 2020-09-24T15:22:58Z |
python
| 2023-08-23T18:18:57Z |
test/integration/targets/user/tasks/main.yml
|
# Test code for the user module.
# (c) 2017, James Tanner <[email protected]>
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
- name: skip broken distros
meta: end_host
when: ansible_distribution == 'Alpine'
- import_tasks: test_create_user.yml
- import_tasks: test_create_system_user.yml
- import_tasks: test_create_user_uid.yml
- import_tasks: test_create_user_password.yml
- import_tasks: test_create_user_home.yml
- import_tasks: test_remove_user.yml
- import_tasks: test_no_home_fallback.yml
- import_tasks: test_expires.yml
- import_tasks: test_expires_new_account.yml
- import_tasks: test_expires_new_account_epoch_negative.yml
- import_tasks: test_expires_min_max.yml
- import_tasks: test_expires_warn.yml
- import_tasks: test_shadow_backup.yml
- import_tasks: test_ssh_key_passphrase.yml
- import_tasks: test_password_lock.yml
- import_tasks: test_password_lock_new_user.yml
- import_tasks: test_local.yml
when: not (ansible_distribution == 'openSUSE Leap' and ansible_distribution_version is version('15.4', '>='))
- import_tasks: test_umask.yml
when: ansible_facts.system == 'Linux'
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 71,916 |
user module: ValueError: invalid literal for int() with base 10: '' when attempting to set 'expires' on a user already in /etc/passwd
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Ansible's user module fails while casting the `expires` value to an `int` when an entry for the user already exists in `/etc/passwd`, raising `ValueError: invalid literal for int() with base 10: ''`.
https://github.com/ansible/ansible/blob/31ddca4c0db2584b0a68880bdea1d97bd8b22032/lib/ansible/modules/user.py#L800
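A guarded parse of the expiry field avoids the crash. A minimal sketch, assuming an empty field should be treated as "never expires" (this is not the module's actual code):

```python
def parse_expires(field):
    """Return the account expiry in days since the epoch, or -1 for 'never'."""
    # An account written straight into /etc/passwd has no shadow data, so the
    # expiry field comes back empty; a bare int() call raises ValueError here.
    if field is None or str(field).strip() == '':
        return -1  # matches the playbook's expires: -1 ("no expiry")
    return int(field)

assert parse_expires('') == -1
assert parse_expires('18628') == 18628
```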
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
user module
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.6
config file = /etc/ansible/ansible.cfg
configured module search path = ['/export/home/rrotaru/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /tech/home/rrotaru/venv/lib64/python3.6/site-packages/ansible
executable location = /tech/home/rrotaru/venv/bin/ansible
python version = 3.6.8 (default, Jun 11 2019, 15:15:01) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
```
ansible 2.10.1
config file = /etc/ansible/ansible.cfg
configured module search path = ['/export/home/rrotaru/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /tech/home/rrotaru/venv/lib64/python3.6/site-packages/ansible
executable location = /tech/home/rrotaru/venv/bin/ansible
python version = 3.6.8 (default, Jun 11 2019, 15:15:01) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
N/A
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Red Hat Enterprise Linux Server release 7.5 (Maipo)
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Run the playbook below.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
- hosts: localhost
become: yes
tasks:
- lineinfile:
path: /etc/passwd
line: "dummy::123:123::/export/home/dummy:/bin/ksh"
regexp: "^dummy.*"
state: present
- user:
name: dummy
state: present
uid: 123
expires: -1
password: "dummy#password | password_hash('sha512') }}"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
User module should update the user expiration to non-expiring.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
User module fails with python ValueError.
<!--- Paste verbatim command output between quotes -->
```paste below
(venv) $ ansible-playbook expirestest.yml -K -v
Using /etc/ansible/ansible.cfg as config file
BECOME password:
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit
localhost does not match 'all'
PLAY [localhost] *******************************************************************************
TASK [Gathering Facts] *************************************************************************
ok: [localhost]
TASK [lineinfile] ******************************************************************************
changed: [localhost] => {"backup": "", "changed": true, "msg": "line added"}
TASK [user] ************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ValueError: invalid literal for int() with base 10: ''
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/export/home/rrotaru/.ansible/tmp/ansible-tmp-1600960729.601071-12482-194155520219212/AnsiballZ_user.py\", line 102, in <module>\n _ansiballz_main()\n File \"/export/home/rrotaru/.ansible/tmp/ansible-tmp-1600960729.601071-12482-194155520219212/AnsiballZ_user.py\", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/export/home/rrotaru/.ansible/tmp/ansible-tmp-1600960729.601071-12482-194155520219212/AnsiballZ_user.py\", line 40, in invoke_module\n runpy.run_module(mod_name='ansible.modules.user', init_globals=None, run_name='__main__', alter_sys=True)\n File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n mod_name, mod_spec, pkg_name, script_name)\n File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_user_payload_v5n53kd3/ansible_user_payload.zip/ansible/modules/user.py\", line 3026, in <module>\n File \"/tmp/ansible_user_payload_v5n53kd3/ansible_user_payload.zip/ansible/modules/user.py\", line 2965, in main\n File \"/tmp/ansible_user_payload_v5n53kd3/ansible_user_payload.zip/ansible/modules/user.py\", line 1111, in modify_user\n File \"/tmp/ansible_user_payload_v5n53kd3/ansible_user_payload.zip/ansible/modules/user.py\", line 799, in modify_user_usermod\nValueError: invalid literal for int() with base 10: ''\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
PLAY RECAP *************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/71916
|
https://github.com/ansible/ansible/pull/75194
|
2e6d849bdb80364b5229f4b9190935e84ee9bfe8
|
116948cd1468b8a3e34af8e3671f4089e1f7584c
| 2020-09-24T15:22:58Z |
python
| 2023-08-23T18:18:57Z |
test/integration/targets/user/tasks/test_expires_no_shadow.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,349 |
ansible.builtin.script creates field behavior and documentation
|
### Summary
* `creates` field in ansible documentation should specify `type: str`
* `creates` field should be type-checked. For example, passing a `bool` triggers a vague stack trace
### Issue Type
Bug Report
### Component Name
ansible.builtin.script
### Ansible Version
```console
ansible [core 2.14.5]
config file = None
configured module search path = ['/home/john/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /data/john/projects/cf/env/lib/python3.10/site-packages/ansible
ansible collection location = /home/john/.ansible/collections:/usr/share/ansible/collections
executable location = /data/john/projects/cf/env/bin/ansible
python version = 3.10.11 (main, Apr 20 2023, 19:02:41) [GCC 11.2.0] (/data/john/projects/cf/env/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
CONFIG_FILE() = None
```
### OS / Environment
Ubuntu
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
---
- name: Example Playbook
hosts: localhost
gather_facts: false
tasks:
- name: Run Script
ansible.builtin.script:
cmd: pwd
chdir: "/usr/bin"
creates: true
register: script_output
- name: Display script output
debug:
var: script_output.stdout
```
### Expected Results
Expected the module to report that a `str` value should be provided instead of a `bool`.
### Actual Results
```console
ansible-playbook [core 2.14.5]
config file = None
configured module search path = ['/home/john/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /data/john/projects/cf/env/lib/python3.10/site-packages/ansible
ansible collection location = /home/john/.ansible/collections:/usr/share/ansible/collections
executable location = /data/john/projects/cf/env/bin/ansible-playbook
python version = 3.10.11 (main, Apr 20 2023, 19:02:41) [GCC 11.2.0] (/data/john/projects/cf/env/bin/python)
jinja version = 3.1.2
libyaml = True
No config file found; using defaults
setting up inventory plugins
host_list declined parsing /data/john/projects/cf/data/module_yaml/20230720-184644/lv3/ansible.builtin.script/hosts.ini as it did not pass its verify_file() method
script declined parsing /data/john/projects/cf/data/module_yaml/20230720-184644/lv3/ansible.builtin.script/hosts.ini as it did not pass its verify_file() method
auto declined parsing /data/john/projects/cf/data/module_yaml/20230720-184644/lv3/ansible.builtin.script/hosts.ini as it did not pass its verify_file() method
yaml declined parsing /data/john/projects/cf/data/module_yaml/20230720-184644/lv3/ansible.builtin.script/hosts.ini as it did not pass its verify_file() method
Parsed /data/john/projects/cf/data/module_yaml/20230720-184644/lv3/ansible.builtin.script/hosts.ini inventory source with ini plugin
Loading callback plugin default of type stdout, v2.0 from /data/john/projects/cf/env/lib/python3.10/site-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: test.yaml ************************************************************
Positional arguments: test.yaml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('/data/john/projects/cf/data/module_yaml/20230720-184644/lv3/ansible.builtin.script/hosts.ini',)
forks: 5
1 plays in test.yaml
PLAY [Example Playbook] ********************************************************
TASK [Run Script] **************************************************************
task path: /data/john/projects/cf/data/module_yaml/20230720-184644/lv3/ansible.builtin.script/test.yaml:7
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: john
<127.0.0.1> EXEC /bin/sh -c 'echo ~john && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/john/.ansible/tmp `"&& mkdir "` echo /home/john/.ansible/tmp/ansible-tmp-1690392695.3185742-1664374-141461953585361 `" && echo ansible-tmp-1690392695.3185742-1664374-141461953585361="` echo /home/john/.ansible/tmp/ansible-tmp-1690392695.3185742-1664374-141461953585361 `" ) && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/john/.ansible/tmp/ansible-tmp-1690392695.3185742-1664374-141461953585361/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
File "/data/john/projects/cf/env/lib/python3.10/site-packages/ansible/executor/task_executor.py", line 158, in run
res = self._execute()
File "/data/john/projects/cf/env/lib/python3.10/site-packages/ansible/executor/task_executor.py", line 633, in _execute
result = self._handler.run(task_vars=vars_copy)
File "/data/john/projects/cf/env/lib/python3.10/site-packages/ansible/plugins/action/script.py", line 52, in run
if self._remote_file_exists(creates):
File "/data/john/projects/cf/env/lib/python3.10/site-packages/ansible/plugins/action/__init__.py", line 204, in _remote_file_exists
cmd = self._connection._shell.exists(path)
File "/data/john/projects/cf/env/lib/python3.10/site-packages/ansible/plugins/shell/__init__.py", line 138, in exists
cmd = ['test', '-e', shlex.quote(path)]
File "/home/john/miniconda3/envs/3.10/lib/python3.10/shlex.py", line 329, in quote
if _find_unsafe(s) is None:
TypeError: expected string or bytes-like object
fatal: [localhost]: FAILED! => {
"msg": "Unexpected failure during module execution: expected string or bytes-like object",
"stdout": ""
}
PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
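The traceback bottoms out in `shlex.quote()`, which accepts only strings, so the reported failure reproduces in two lines, independent of Ansible:

```python
import shlex

shlex.quote('/tmp/afile.txt')  # fine: returns a safely quoted path
shlex.quote(True)              # TypeError: expected string or bytes-like object
```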
|
https://github.com/ansible/ansible/issues/81349
|
https://github.com/ansible/ansible/pull/81469
|
37cb44ec37355524ca6a9ec6296e19e3ee74ac98
|
da63f32d59fe882bc77532e734af7348b65cb6cb
| 2023-07-26T17:32:53Z |
python
| 2023-08-25T17:27:26Z |
lib/ansible/modules/command.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Michael DeHaan <[email protected]>, and others
# Copyright: (c) 2016, Toshio Kuratomi <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: command
short_description: Execute commands on targets
version_added: historical
description:
- The M(ansible.builtin.command) module takes the command name followed by a list of space-delimited arguments.
- The given command will be executed on all selected nodes.
- The command(s) will not be
processed through the shell, so variables like C($HOSTNAME) and operations
like C("*"), C("<"), C(">"), C("|"), C(";") and C("&") will not work.
Use the M(ansible.builtin.shell) module if you need these features.
- To create C(command) tasks that are easier to read than the ones using space-delimited
arguments, pass parameters using the C(args) L(task keyword,https://docs.ansible.com/ansible/latest/reference_appendices/playbooks_keywords.html#task)
or use O(cmd) parameter.
- Either a free form command or O(cmd) parameter is required, see the examples.
- For Windows targets, use the M(ansible.windows.win_command) module instead.
extends_documentation_fragment:
- action_common_attributes
- action_common_attributes.raw
attributes:
check_mode:
details: while the command itself is arbitrary and cannot be subject to the check mode semantics it adds O(creates)/O(removes) options as a workaround
support: partial
diff_mode:
support: none
platform:
support: full
platforms: posix
raw:
support: full
options:
expand_argument_vars:
description:
- Expands the arguments that are variables, for example C($HOME) will be expanded before being passed to the
command to run.
- Set to V(false) to disable expansion and treat the value as a literal argument.
type: bool
default: true
version_added: "2.16"
free_form:
description:
- The command module takes a free form string as a command to run.
- There is no actual parameter named 'free form'.
cmd:
type: str
description:
- The command to run.
argv:
type: list
elements: str
description:
- Passes the command as a list rather than a string.
- Use O(argv) to avoid quoting values that would otherwise be interpreted incorrectly (for example "user name").
- Only the string (free form) or the list (argv) form can be provided, not both. One or the other must be provided.
version_added: "2.6"
creates:
type: path
description:
- A filename or (since 2.0) glob pattern. If a matching file already exists, this step B(will not) be run.
- This is checked before O(removes) is checked.
removes:
type: path
description:
- A filename or (since 2.0) glob pattern. If a matching file exists, this step B(will) be run.
- This is checked after O(creates) is checked.
version_added: "0.8"
chdir:
type: path
description:
- Change into this directory before running the command.
version_added: "0.6"
stdin:
description:
- Set the stdin of the command directly to the specified value.
type: str
version_added: "2.4"
stdin_add_newline:
type: bool
default: yes
description:
- If set to V(true), append a newline to stdin data.
version_added: "2.8"
strip_empty_ends:
description:
- Strip empty lines from the end of stdout/stderr in result.
version_added: "2.8"
type: bool
default: yes
notes:
- If you want to run a command through the shell (say you are using C(<), C(>), C(|), and so on),
you actually want the M(ansible.builtin.shell) module instead.
Parsing shell metacharacters can lead to unexpected commands being executed if quoting is not done correctly so it is more secure to
use the M(ansible.builtin.command) module when possible.
- O(creates), O(removes), and O(chdir) can be specified after the command.
For instance, if you only want to run a command if a certain file does not exist, use this.
- Check mode is supported when passing O(creates) or O(removes). If running in check mode and either of these are specified, the module will
check for the existence of the file and report the correct changed status. If these are not supplied, the task will be skipped.
- The O(ignore:executable) parameter is removed since version 2.4. If you have a need for this parameter, use the M(ansible.builtin.shell) module instead.
- For Windows targets, use the M(ansible.windows.win_command) module instead.
- For rebooting systems, use the M(ansible.builtin.reboot) or M(ansible.windows.win_reboot) module.
- If the command returns non UTF-8 data, it must be encoded to avoid issues. This may necessitate using M(ansible.builtin.shell) so the output
can be piped through C(base64).
seealso:
- module: ansible.builtin.raw
- module: ansible.builtin.script
- module: ansible.builtin.shell
- module: ansible.windows.win_command
author:
- Ansible Core Team
- Michael DeHaan
'''
EXAMPLES = r'''
- name: Return motd to registered var
ansible.builtin.command: cat /etc/motd
register: mymotd
# free-form (string) arguments, all arguments on one line
- name: Run command if /path/to/database does not exist (without 'args')
ansible.builtin.command: /usr/bin/make_database.sh db_user db_name creates=/path/to/database
# free-form (string) arguments, some arguments on separate lines with the 'args' keyword
# 'args' is a task keyword, passed at the same level as the module
- name: Run command if /path/to/database does not exist (with 'args' keyword)
ansible.builtin.command: /usr/bin/make_database.sh db_user db_name
args:
creates: /path/to/database
# 'cmd' is module parameter
- name: Run command if /path/to/database does not exist (with 'cmd' parameter)
ansible.builtin.command:
cmd: /usr/bin/make_database.sh db_user db_name
creates: /path/to/database
- name: Change the working directory to somedir/ and run the command as db_owner if /path/to/database does not exist
ansible.builtin.command: /usr/bin/make_database.sh db_user db_name
become: yes
become_user: db_owner
args:
chdir: somedir/
creates: /path/to/database
# argv (list) arguments, each argument on a separate line, 'args' keyword not necessary
# 'argv' is a parameter, indented one level from the module
- name: Use 'argv' to send a command as a list - leave 'command' empty
ansible.builtin.command:
argv:
- /usr/bin/make_database.sh
- Username with whitespace
- dbname with whitespace
creates: /path/to/database
- name: Run command using argv with mixed argument formats
ansible.builtin.command:
argv:
- /path/to/binary
- -v
- --debug
- --longopt
- value for longopt
- --other-longopt=value for other longopt
- positional
- name: Safely use templated variable to run command. Always use the quote filter to avoid injection issues
ansible.builtin.command: cat {{ myfile|quote }}
register: myoutput
'''
RETURN = r'''
msg:
description: changed
returned: always
type: bool
sample: True
start:
description: The command execution start time.
returned: always
type: str
sample: '2017-09-29 22:03:48.083128'
end:
description: The command execution end time.
returned: always
type: str
sample: '2017-09-29 22:03:48.084657'
delta:
description: The command execution delta time.
returned: always
type: str
sample: '0:00:00.001529'
stdout:
description: The command standard output.
returned: always
type: str
sample: 'Clustering node rabbit@slave1 with rabbit@master …'
stderr:
description: The command standard error.
returned: always
type: str
sample: 'ls cannot access foo: No such file or directory'
cmd:
description: The command executed by the task.
returned: always
type: list
sample:
- echo
- hello
rc:
description: The command return code (0 means success).
returned: always
type: int
sample: 0
stdout_lines:
description: The command standard output split in lines.
returned: always
type: list
sample: [u'Clustering node rabbit@slave1 with rabbit@master …']
stderr_lines:
description: The command standard error split in lines.
returned: always
type: list
sample: [u'ls cannot access foo: No such file or directory', u'ls …']
'''
import datetime
import glob
import os
import shlex
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.text.converters import to_native, to_bytes, to_text
from ansible.module_utils.common.collections import is_iterable
def main():
# the command module is the one ansible module that does not take key=value args
# hence don't copy this one if you are looking to build others!
# NOTE: ensure splitter.py is kept in sync for exceptions
module = AnsibleModule(
argument_spec=dict(
_raw_params=dict(),
_uses_shell=dict(type='bool', default=False),
argv=dict(type='list', elements='str'),
chdir=dict(type='path'),
executable=dict(),
expand_argument_vars=dict(type='bool', default=True),
creates=dict(type='path'),
removes=dict(type='path'),
# The default for this really comes from the action plugin
stdin=dict(required=False),
stdin_add_newline=dict(type='bool', default=True),
strip_empty_ends=dict(type='bool', default=True),
),
supports_check_mode=True,
)
shell = module.params['_uses_shell']
chdir = module.params['chdir']
executable = module.params['executable']
args = module.params['_raw_params']
argv = module.params['argv']
creates = module.params['creates']
removes = module.params['removes']
stdin = module.params['stdin']
stdin_add_newline = module.params['stdin_add_newline']
strip = module.params['strip_empty_ends']
expand_argument_vars = module.params['expand_argument_vars']
# we promised these in 'always' (_lines get auto-added by the action plugin)
r = {'changed': False, 'stdout': '', 'stderr': '', 'rc': None, 'cmd': None, 'start': None, 'end': None, 'delta': None, 'msg': ''}
if not shell and executable:
module.warn("As of Ansible 2.4, the parameter 'executable' is no longer supported with the 'command' module. Not using '%s'." % executable)
executable = None
if (not args or args.strip() == '') and not argv:
r['rc'] = 256
r['msg'] = "no command given"
module.fail_json(**r)
if args and argv:
r['rc'] = 256
r['msg'] = "only command or argv can be given, not both"
module.fail_json(**r)
if not shell and args:
args = shlex.split(args)
args = args or argv
# All args must be strings
if is_iterable(args, include_strings=False):
args = [to_native(arg, errors='surrogate_or_strict', nonstring='simplerepr') for arg in args]
r['cmd'] = args
if chdir:
chdir = to_bytes(chdir, errors='surrogate_or_strict')
try:
os.chdir(chdir)
except (IOError, OSError) as e:
r['msg'] = 'Unable to change directory before execution: %s' % to_text(e)
module.fail_json(**r)
# check_mode partial support, since it only really works in checking creates/removes
if module.check_mode:
shoulda = "Would"
else:
shoulda = "Did"
# special skips for idempotence if file exists (assumes command creates)
if creates:
if glob.glob(creates):
r['msg'] = "%s not run command since '%s' exists" % (shoulda, creates)
r['stdout'] = "skipped, since %s exists" % creates # TODO: deprecate
r['rc'] = 0
# special skips for idempotence if file does not exist (assumes command removes)
if not r['msg'] and removes:
if not glob.glob(removes):
r['msg'] = "%s not run command since '%s' does not exist" % (shoulda, removes)
r['stdout'] = "skipped, since %s does not exist" % removes # TODO: deprecate
r['rc'] = 0
if r['msg']:
module.exit_json(**r)
r['changed'] = True
# actually executes command (or not ...)
if not module.check_mode:
r['start'] = datetime.datetime.now()
r['rc'], r['stdout'], r['stderr'] = module.run_command(args, executable=executable, use_unsafe_shell=shell, encoding=None,
data=stdin, binary_data=(not stdin_add_newline),
expand_user_and_vars=expand_argument_vars)
r['end'] = datetime.datetime.now()
else:
# this is partial check_mode support, since we end up skipping if we get here
r['rc'] = 0
r['msg'] = "Command would have run if not in check mode"
if creates is None and removes is None:
r['skipped'] = True
# skipped=True and changed=True are mutually exclusive
r['changed'] = False
# convert to text for jsonization and usability
if r['start'] is not None and r['end'] is not None:
# these are datetime objects, but need them as strings to pass back
r['delta'] = to_text(r['end'] - r['start'])
r['end'] = to_text(r['end'])
r['start'] = to_text(r['start'])
if strip:
r['stdout'] = to_text(r['stdout']).rstrip("\r\n")
r['stderr'] = to_text(r['stderr']).rstrip("\r\n")
if r['rc'] != 0:
r['msg'] = 'non-zero return code'
module.fail_json(**r)
module.exit_json(**r)
if __name__ == '__main__':
main()
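The `creates`/`removes` idempotence gate in `main()` above reduces to a small glob check, with `creates` evaluated before `removes`. A standalone sketch of the same logic (illustrative only, not part of the module):

```python
import glob

def skip_reason(creates=None, removes=None):
    """Mirror the gate above: return a skip message, or None to run the command."""
    if creates and glob.glob(creates):
        return "not run: '%s' exists" % creates
    if removes and not glob.glob(removes):
        return "not run: '%s' does not exist" % removes
    return None
```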
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,349 |
ansible.builtin.script creates field behavior and documentation
|
### Summary
* `creates` field in ansible documentation should specify `type: str`
* `creates` field should be type-checked. For example, passing a `bool` triggers a vague stack trace
### Issue Type
Bug Report
### Component Name
ansible.builtin.script
### Ansible Version
```console
ansible [core 2.14.5]
config file = None
configured module search path = ['/home/john/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /data/john/projects/cf/env/lib/python3.10/site-packages/ansible
ansible collection location = /home/john/.ansible/collections:/usr/share/ansible/collections
executable location = /data/john/projects/cf/env/bin/ansible
python version = 3.10.11 (main, Apr 20 2023, 19:02:41) [GCC 11.2.0] (/data/john/projects/cf/env/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
CONFIG_FILE() = None
```
### OS / Environment
Ubuntu
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
---
- name: Example Playbook
hosts: localhost
gather_facts: false
tasks:
- name: Run Script
ansible.builtin.script:
cmd: pwd
chdir: "/usr/bin"
creates: true
register: script_output
- name: Display script output
debug:
var: script_output.stdout
```
### Expected Results
Expected the module to report that a `str` value should be provided instead of a `bool`.
### Actual Results
```console
ansible-playbook [core 2.14.5]
config file = None
configured module search path = ['/home/john/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /data/john/projects/cf/env/lib/python3.10/site-packages/ansible
ansible collection location = /home/john/.ansible/collections:/usr/share/ansible/collections
executable location = /data/john/projects/cf/env/bin/ansible-playbook
python version = 3.10.11 (main, Apr 20 2023, 19:02:41) [GCC 11.2.0] (/data/john/projects/cf/env/bin/python)
jinja version = 3.1.2
libyaml = True
No config file found; using defaults
setting up inventory plugins
host_list declined parsing /data/john/projects/cf/data/module_yaml/20230720-184644/lv3/ansible.builtin.script/hosts.ini as it did not pass its verify_file() method
script declined parsing /data/john/projects/cf/data/module_yaml/20230720-184644/lv3/ansible.builtin.script/hosts.ini as it did not pass its verify_file() method
auto declined parsing /data/john/projects/cf/data/module_yaml/20230720-184644/lv3/ansible.builtin.script/hosts.ini as it did not pass its verify_file() method
yaml declined parsing /data/john/projects/cf/data/module_yaml/20230720-184644/lv3/ansible.builtin.script/hosts.ini as it did not pass its verify_file() method
Parsed /data/john/projects/cf/data/module_yaml/20230720-184644/lv3/ansible.builtin.script/hosts.ini inventory source with ini plugin
Loading callback plugin default of type stdout, v2.0 from /data/john/projects/cf/env/lib/python3.10/site-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: test.yaml ************************************************************
Positional arguments: test.yaml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('/data/john/projects/cf/data/module_yaml/20230720-184644/lv3/ansible.builtin.script/hosts.ini',)
forks: 5
1 plays in test.yaml
PLAY [Example Playbook] ********************************************************
TASK [Run Script] **************************************************************
task path: /data/john/projects/cf/data/module_yaml/20230720-184644/lv3/ansible.builtin.script/test.yaml:7
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: john
<127.0.0.1> EXEC /bin/sh -c 'echo ~john && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/john/.ansible/tmp `"&& mkdir "` echo /home/john/.ansible/tmp/ansible-tmp-1690392695.3185742-1664374-141461953585361 `" && echo ansible-tmp-1690392695.3185742-1664374-141461953585361="` echo /home/john/.ansible/tmp/ansible-tmp-1690392695.3185742-1664374-141461953585361 `" ) && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/john/.ansible/tmp/ansible-tmp-1690392695.3185742-1664374-141461953585361/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
File "/data/john/projects/cf/env/lib/python3.10/site-packages/ansible/executor/task_executor.py", line 158, in run
res = self._execute()
File "/data/john/projects/cf/env/lib/python3.10/site-packages/ansible/executor/task_executor.py", line 633, in _execute
result = self._handler.run(task_vars=vars_copy)
File "/data/john/projects/cf/env/lib/python3.10/site-packages/ansible/plugins/action/script.py", line 52, in run
if self._remote_file_exists(creates):
File "/data/john/projects/cf/env/lib/python3.10/site-packages/ansible/plugins/action/__init__.py", line 204, in _remote_file_exists
cmd = self._connection._shell.exists(path)
File "/data/john/projects/cf/env/lib/python3.10/site-packages/ansible/plugins/shell/__init__.py", line 138, in exists
cmd = ['test', '-e', shlex.quote(path)]
File "/home/john/miniconda3/envs/3.10/lib/python3.10/shlex.py", line 329, in quote
if _find_unsafe(s) is None:
TypeError: expected string or bytes-like object
fatal: [localhost]: FAILED! => {
"msg": "Unexpected failure during module execution: expected string or bytes-like object",
"stdout": ""
}
PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81349
|
https://github.com/ansible/ansible/pull/81469
|
37cb44ec37355524ca6a9ec6296e19e3ee74ac98
|
da63f32d59fe882bc77532e734af7348b65cb6cb
| 2023-07-26T17:32:53Z |
python
| 2023-08-25T17:27:26Z |
lib/ansible/modules/script.py
|
# Copyright: (c) 2012, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: script
version_added: "0.9"
short_description: Runs a local script on a remote node after transferring it
description:
- The M(ansible.builtin.script) module takes the script name followed by a list of space-delimited arguments.
- Either a free form command or O(cmd) parameter is required, see the examples.
- The local script at path will be transferred to the remote node and then executed.
- The given script will be processed through the shell environment on the remote node.
- This module does not require python on the remote system, much like the M(ansible.builtin.raw) module.
- This module is also supported for Windows targets.
options:
free_form:
description:
- Path to the local script file followed by optional arguments.
cmd:
type: str
description:
- Path to the local script to run followed by optional arguments.
creates:
description:
- A filename on the remote node, when it already exists, this step will B(not) be run.
version_added: "1.5"
removes:
description:
- A filename on the remote node, when it does not exist, this step will B(not) be run.
version_added: "1.5"
chdir:
description:
- Change into this directory on the remote node before running the script.
version_added: "2.4"
executable:
description:
- Name or path of an executable to invoke the script with.
version_added: "2.6"
notes:
- It is usually preferable to write Ansible modules rather than pushing scripts. Convert your script to an Ansible module for bonus points!
- The P(ansible.builtin.ssh#connection) connection plugin will force pseudo-tty allocation via C(-tt) when scripts are executed.
Pseudo-ttys do not have a stderr channel and all stderr is sent to stdout. If you depend on separated stdout and stderr result keys,
please switch to a copy+command set of tasks instead of using script.
- If the path to the local script contains spaces, it needs to be quoted.
- This module is also supported for Windows targets.
- If the script returns non UTF-8 data, it must be encoded to avoid issues. One option is to pipe
the output through C(base64).
seealso:
- module: ansible.builtin.shell
- module: ansible.windows.win_shell
author:
- Ansible Core Team
- Michael DeHaan
extends_documentation_fragment:
- action_common_attributes
- action_common_attributes.files
- action_common_attributes.raw
- decrypt
attributes:
check_mode:
support: partial
details: while the script itself is arbitrary and cannot be subject to the check mode semantics it adds O(creates)/O(removes) options as a workaround
diff_mode:
support: none
platform:
details: This action is one of the few that requires no Python on the remote as it passes the command directly into the connection string
platforms: all
raw:
support: full
safe_file_operations:
support: none
vault:
support: full
'''
EXAMPLES = r'''
- name: Run a script with arguments (free form)
ansible.builtin.script: /some/local/script.sh --some-argument 1234
- name: Run a script with arguments (using 'cmd' parameter)
ansible.builtin.script:
cmd: /some/local/script.sh --some-argument 1234
- name: Run a script only if file.txt does not exist on the remote node
ansible.builtin.script: /some/local/create_file.sh --some-argument 1234
args:
creates: /the/created/file.txt
- name: Run a script only if file.txt exists on the remote node
ansible.builtin.script: /some/local/remove_file.sh --some-argument 1234
args:
removes: /the/removed/file.txt
- name: Run a script using an executable in a non-system path
ansible.builtin.script: /some/local/script
args:
executable: /some/remote/executable
- name: Run a script using an executable in a system path
ansible.builtin.script: /some/local/script.py
args:
executable: python3
- name: Run a Powershell script on a windows host
script: subdirectories/under/path/with/your/playbook/script.ps1
'''
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,349 |
ansible.builtin.script creates field behavior and documentation
|
### Summary
* `creates` field in ansible documentation should specify `type: str`
* `creates` field should be type-checked. For example, passing a `bool` triggers a vague stack trace
### Issue Type
Bug Report
### Component Name
ansible.builtin.script
### Ansible Version
```console
ansible [core 2.14.5]
config file = None
configured module search path = ['/home/john/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /data/john/projects/cf/env/lib/python3.10/site-packages/ansible
ansible collection location = /home/john/.ansible/collections:/usr/share/ansible/collections
executable location = /data/john/projects/cf/env/bin/ansible
python version = 3.10.11 (main, Apr 20 2023, 19:02:41) [GCC 11.2.0] (/data/john/projects/cf/env/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
CONFIG_FILE() = None
```
### OS / Environment
Ubuntu
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
---
- name: Example Playbook
hosts: localhost
gather_facts: false
tasks:
- name: Run Script
ansible.builtin.script:
cmd: pwd
chdir: "/usr/bin"
creates: true
register: script_output
- name: Display script output
debug:
var: script_output.stdout
```
### Expected Results
Expected the module to report that a `str` value should be provided instead of a `bool`.
### Actual Results
```console
ansible-playbook [core 2.14.5]
config file = None
configured module search path = ['/home/john/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /data/john/projects/cf/env/lib/python3.10/site-packages/ansible
ansible collection location = /home/john/.ansible/collections:/usr/share/ansible/collections
executable location = /data/john/projects/cf/env/bin/ansible-playbook
python version = 3.10.11 (main, Apr 20 2023, 19:02:41) [GCC 11.2.0] (/data/john/projects/cf/env/bin/python)
jinja version = 3.1.2
libyaml = True
No config file found; using defaults
setting up inventory plugins
host_list declined parsing /data/john/projects/cf/data/module_yaml/20230720-184644/lv3/ansible.builtin.script/hosts.ini as it did not pass its verify_file() method
script declined parsing /data/john/projects/cf/data/module_yaml/20230720-184644/lv3/ansible.builtin.script/hosts.ini as it did not pass its verify_file() method
auto declined parsing /data/john/projects/cf/data/module_yaml/20230720-184644/lv3/ansible.builtin.script/hosts.ini as it did not pass its verify_file() method
yaml declined parsing /data/john/projects/cf/data/module_yaml/20230720-184644/lv3/ansible.builtin.script/hosts.ini as it did not pass its verify_file() method
Parsed /data/john/projects/cf/data/module_yaml/20230720-184644/lv3/ansible.builtin.script/hosts.ini inventory source with ini plugin
Loading callback plugin default of type stdout, v2.0 from /data/john/projects/cf/env/lib/python3.10/site-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: test.yaml ************************************************************
Positional arguments: test.yaml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('/data/john/projects/cf/data/module_yaml/20230720-184644/lv3/ansible.builtin.script/hosts.ini',)
forks: 5
1 plays in test.yaml
PLAY [Example Playbook] ********************************************************
TASK [Run Script] **************************************************************
task path: /data/john/projects/cf/data/module_yaml/20230720-184644/lv3/ansible.builtin.script/test.yaml:7
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: john
<127.0.0.1> EXEC /bin/sh -c 'echo ~john && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/john/.ansible/tmp `"&& mkdir "` echo /home/john/.ansible/tmp/ansible-tmp-1690392695.3185742-1664374-141461953585361 `" && echo ansible-tmp-1690392695.3185742-1664374-141461953585361="` echo /home/john/.ansible/tmp/ansible-tmp-1690392695.3185742-1664374-141461953585361 `" ) && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/john/.ansible/tmp/ansible-tmp-1690392695.3185742-1664374-141461953585361/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
File "/data/john/projects/cf/env/lib/python3.10/site-packages/ansible/executor/task_executor.py", line 158, in run
res = self._execute()
File "/data/john/projects/cf/env/lib/python3.10/site-packages/ansible/executor/task_executor.py", line 633, in _execute
result = self._handler.run(task_vars=vars_copy)
File "/data/john/projects/cf/env/lib/python3.10/site-packages/ansible/plugins/action/script.py", line 52, in run
if self._remote_file_exists(creates):
File "/data/john/projects/cf/env/lib/python3.10/site-packages/ansible/plugins/action/__init__.py", line 204, in _remote_file_exists
cmd = self._connection._shell.exists(path)
File "/data/john/projects/cf/env/lib/python3.10/site-packages/ansible/plugins/shell/__init__.py", line 138, in exists
cmd = ['test', '-e', shlex.quote(path)]
File "/home/john/miniconda3/envs/3.10/lib/python3.10/shlex.py", line 329, in quote
if _find_unsafe(s) is None:
TypeError: expected string or bytes-like object
fatal: [localhost]: FAILED! => {
"msg": "Unexpected failure during module execution: expected string or bytes-like object",
"stdout": ""
}
PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81349
|
https://github.com/ansible/ansible/pull/81469
|
37cb44ec37355524ca6a9ec6296e19e3ee74ac98
|
da63f32d59fe882bc77532e734af7348b65cb6cb
| 2023-07-26T17:32:53Z |
python
| 2023-08-25T17:27:26Z |
lib/ansible/plugins/action/script.py
|
# (c) 2012, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import re
import shlex
from ansible.errors import AnsibleError, AnsibleAction, _AnsibleActionDone, AnsibleActionFail, AnsibleActionSkip
from ansible.executor.powershell import module_manifest as ps_manifest
from ansible.module_utils.common.text.converters import to_bytes, to_native, to_text
from ansible.plugins.action import ActionBase
class ActionModule(ActionBase):
TRANSFERS_FILES = True
# On Windows platform, absolute paths begin with a (back)slash
# after chopping off a potential drive letter.
windows_absolute_path_detection = re.compile(r'^(?:[a-zA-Z]\:)?(\\|\/)')
def run(self, tmp=None, task_vars=None):
''' handler for file transfer operations '''
if task_vars is None:
task_vars = dict()
result = super(ActionModule, self).run(tmp, task_vars)
del tmp # tmp no longer has any effect
try:
creates = self._task.args.get('creates')
if creates:
# do not run the command if the line contains creates=filename
# and the filename already exists. This allows idempotence
# of command executions.
if self._remote_file_exists(creates):
raise AnsibleActionSkip("%s exists, matching creates option" % creates)
removes = self._task.args.get('removes')
if removes:
# do not run the command if the line contains removes=filename
# and the filename does not exist. This allows idempotence
# of command executions.
if not self._remote_file_exists(removes):
raise AnsibleActionSkip("%s does not exist, matching removes option" % removes)
# The chdir must be absolute, because a relative path would rely on
# remote node behaviour & user config.
chdir = self._task.args.get('chdir')
if chdir:
# Powershell is the only Windows-path aware shell
if getattr(self._connection._shell, "_IS_WINDOWS", False) and \
not self.windows_absolute_path_detection.match(chdir):
raise AnsibleActionFail('chdir %s must be an absolute path for a Windows remote node' % chdir)
# Every other shell is unix-path-aware.
if not getattr(self._connection._shell, "_IS_WINDOWS", False) and not chdir.startswith('/'):
raise AnsibleActionFail('chdir %s must be an absolute path for a Unix-aware remote node' % chdir)
# Split out the script as the first item in raw_params using
# shlex.split() in order to support paths and files with spaces in the name.
# Any arguments passed to the script will be added back later.
raw_params = to_native(self._task.args.get('_raw_params', ''), errors='surrogate_or_strict')
parts = [to_text(s, errors='surrogate_or_strict') for s in shlex.split(raw_params.strip())]
source = parts[0]
# Support executable paths and files with spaces in the name.
executable = to_native(self._task.args.get('executable', ''), errors='surrogate_or_strict')
try:
source = self._loader.get_real_file(self._find_needle('files', source), decrypt=self._task.args.get('decrypt', True))
except AnsibleError as e:
raise AnsibleActionFail(to_native(e))
if self._task.check_mode:
# check mode is supported if 'creates' or 'removes' are provided
# the task has already been skipped if a change would not occur
if self._task.args.get('creates') or self._task.args.get('removes'):
result['changed'] = True
raise _AnsibleActionDone(result=result)
# If the script doesn't return changed in the result, it defaults to True,
# but since the script may override 'changed', just skip instead of guessing.
else:
result['changed'] = False
raise AnsibleActionSkip('Check mode is not supported for this task.', result=result)
# now we execute script, always assume changed.
result['changed'] = True
# transfer the file to a remote tmp location
tmp_src = self._connection._shell.join_path(self._connection._shell.tmpdir,
os.path.basename(source))
# Convert raw_params to text for the purpose of replacing the script since
# parts and tmp_src are both unicode strings and raw_params will be different
# depending on Python version.
#
# Once everything is encoded consistently, replace the script path on the remote
# system with the remainder of the raw_params. This preserves quoting in parameters
# that would have been removed by shlex.split().
target_command = to_text(raw_params).strip().replace(parts[0], tmp_src)
self._transfer_file(source, tmp_src)
# set file permissions, more permissive when the copy is done as a different user
self._fixup_perms2((self._connection._shell.tmpdir, tmp_src), execute=True)
# add preparation steps to one ssh roundtrip executing the script
env_dict = dict()
env_string = self._compute_environment_string(env_dict)
if executable:
script_cmd = ' '.join([env_string, executable, target_command])
else:
script_cmd = ' '.join([env_string, target_command])
script_cmd = self._connection._shell.wrap_for_exec(script_cmd)
exec_data = None
# PowerShell runs the script in a special wrapper to enable things
# like become and environment args
if getattr(self._connection._shell, "_IS_WINDOWS", False):
# FUTURE: use a more public method to get the exec payload
pc = self._play_context
exec_data = ps_manifest._create_powershell_wrapper(
to_bytes(script_cmd), source, {}, env_dict, self._task.async_val,
pc.become, pc.become_method, pc.become_user,
pc.become_pass, pc.become_flags, "script", task_vars, None
)
# build the necessary exec wrapper command
# FUTURE: this still doesn't let script work on Windows with non-pipelined connections or
# full manual exec of KEEP_REMOTE_FILES
script_cmd = self._connection._shell.build_module_command(env_string='', shebang='#!powershell', cmd='')
result.update(self._low_level_execute_command(cmd=script_cmd, in_data=exec_data, sudoable=True, chdir=chdir))
if 'rc' in result and result['rc'] != 0:
raise AnsibleActionFail('non-zero return code')
except AnsibleAction as e:
result.update(e.result)
finally:
self._remove_tmp_path(self._connection._shell.tmpdir)
return result
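Because `run()` above passes `creates`/`removes` straight to `_remote_file_exists()`, a non-string value only fails deep inside `shlex.quote()`. One possible up-front guard, written as a sketch (not necessarily the fix that landed in the linked PR 81469):

```python
from ansible.errors import AnsibleActionFail

def _require_path_string(name, value):
    # Fail early with the offending argument's name instead of letting
    # shlex.quote() raise a bare TypeError later on.
    if value is not None and not isinstance(value, str):
        raise AnsibleActionFail(
            "argument '%s' must be a string path, got %s"
            % (name, type(value).__name__))
    return value
```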
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,349 |
ansible.builtin.script creates field behavior and documentation
|
### Summary
* `creates` field in ansible documentation should specify `type: str`
* `creates` field should be type-checked. For example, passing a `bool` triggers a vague stack trace
### Issue Type
Bug Report
### Component Name
ansible.builtin.script
### Ansible Version
```console
ansible [core 2.14.5]
config file = None
configured module search path = ['/home/john/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /data/john/projects/cf/env/lib/python3.10/site-packages/ansible
ansible collection location = /home/john/.ansible/collections:/usr/share/ansible/collections
executable location = /data/john/projects/cf/env/bin/ansible
python version = 3.10.11 (main, Apr 20 2023, 19:02:41) [GCC 11.2.0] (/data/john/projects/cf/env/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
CONFIG_FILE() = None
```
### OS / Environment
Ubuntu
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
---
- name: Example Playbook
hosts: localhost
gather_facts: false
tasks:
- name: Run Script
ansible.builtin.script:
cmd: pwd
chdir: "/usr/bin"
creates: true
register: script_output
- name: Display script output
debug:
var: script_output.stdout
```
### Expected Results
Expected the module to report that a `str` value should be provided instead of a `bool`.
### Actual Results
```console
ansible-playbook [core 2.14.5]
config file = None
configured module search path = ['/home/john/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /data/john/projects/cf/env/lib/python3.10/site-packages/ansible
ansible collection location = /home/john/.ansible/collections:/usr/share/ansible/collections
executable location = /data/john/projects/cf/env/bin/ansible-playbook
python version = 3.10.11 (main, Apr 20 2023, 19:02:41) [GCC 11.2.0] (/data/john/projects/cf/env/bin/python)
jinja version = 3.1.2
libyaml = True
No config file found; using defaults
setting up inventory plugins
host_list declined parsing /data/john/projects/cf/data/module_yaml/20230720-184644/lv3/ansible.builtin.script/hosts.ini as it did not pass its verify_file() method
script declined parsing /data/john/projects/cf/data/module_yaml/20230720-184644/lv3/ansible.builtin.script/hosts.ini as it did not pass its verify_file() method
auto declined parsing /data/john/projects/cf/data/module_yaml/20230720-184644/lv3/ansible.builtin.script/hosts.ini as it did not pass its verify_file() method
yaml declined parsing /data/john/projects/cf/data/module_yaml/20230720-184644/lv3/ansible.builtin.script/hosts.ini as it did not pass its verify_file() method
Parsed /data/john/projects/cf/data/module_yaml/20230720-184644/lv3/ansible.builtin.script/hosts.ini inventory source with ini plugin
Loading callback plugin default of type stdout, v2.0 from /data/john/projects/cf/env/lib/python3.10/site-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: test.yaml ************************************************************
Positional arguments: test.yaml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('/data/john/projects/cf/data/module_yaml/20230720-184644/lv3/ansible.builtin.script/hosts.ini',)
forks: 5
1 plays in test.yaml
PLAY [Example Playbook] ********************************************************
TASK [Run Script] **************************************************************
task path: /data/john/projects/cf/data/module_yaml/20230720-184644/lv3/ansible.builtin.script/test.yaml:7
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: john
<127.0.0.1> EXEC /bin/sh -c 'echo ~john && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/john/.ansible/tmp `"&& mkdir "` echo /home/john/.ansible/tmp/ansible-tmp-1690392695.3185742-1664374-141461953585361 `" && echo ansible-tmp-1690392695.3185742-1664374-141461953585361="` echo /home/john/.ansible/tmp/ansible-tmp-1690392695.3185742-1664374-141461953585361 `" ) && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/john/.ansible/tmp/ansible-tmp-1690392695.3185742-1664374-141461953585361/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
File "/data/john/projects/cf/env/lib/python3.10/site-packages/ansible/executor/task_executor.py", line 158, in run
res = self._execute()
File "/data/john/projects/cf/env/lib/python3.10/site-packages/ansible/executor/task_executor.py", line 633, in _execute
result = self._handler.run(task_vars=vars_copy)
File "/data/john/projects/cf/env/lib/python3.10/site-packages/ansible/plugins/action/script.py", line 52, in run
if self._remote_file_exists(creates):
File "/data/john/projects/cf/env/lib/python3.10/site-packages/ansible/plugins/action/__init__.py", line 204, in _remote_file_exists
cmd = self._connection._shell.exists(path)
File "/data/john/projects/cf/env/lib/python3.10/site-packages/ansible/plugins/shell/__init__.py", line 138, in exists
cmd = ['test', '-e', shlex.quote(path)]
File "/home/john/miniconda3/envs/3.10/lib/python3.10/shlex.py", line 329, in quote
if _find_unsafe(s) is None:
TypeError: expected string or bytes-like object
fatal: [localhost]: FAILED! => {
"msg": "Unexpected failure during module execution: expected string or bytes-like object",
"stdout": ""
}
PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81349
|
https://github.com/ansible/ansible/pull/81469
|
37cb44ec37355524ca6a9ec6296e19e3ee74ac98
|
da63f32d59fe882bc77532e734af7348b65cb6cb
| 2023-07-26T17:32:53Z |
python
| 2023-08-25T17:27:26Z |
test/integration/targets/script/tasks/main.yml
|
# Test code for the script module and action_plugin.
# (c) 2014, Richard Isaacson <[email protected]>
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
##
## prep
##
- set_fact:
remote_tmp_dir_test: "{{ remote_tmp_dir }}/test_script"
- name: make sure our testing sub-directory does not exist
file:
path: "{{ remote_tmp_dir_test }}"
state: absent
- name: create our testing sub-directory
file:
path: "{{ remote_tmp_dir_test }}"
state: directory
##
## script
##
- name: execute the test.sh script via command
script: test.sh
register: script_result0
- name: assert that the script executed correctly
assert:
that:
- "script_result0.rc == 0"
- "script_result0.stdout == 'win'"
- name: Execute a script with a space in the path
script: "'space path/test.sh'"
register: _space_path_test
tags:
- spacepath
- name: Assert that script with space in path ran successfully
assert:
that:
- _space_path_test is success
- _space_path_test.stdout == 'Script with space in path'
tags:
- spacepath
- name: Execute a script with arguments including a unicode character
script: test_with_args.sh -this -that -Ωther
register: unicode_args
- name: Assert that script with unicode character ran successfully
assert:
that:
- unicode_args is success
- unicode_args.stdout_lines[0] == '-this'
- unicode_args.stdout_lines[1] == '-that'
- unicode_args.stdout_lines[2] == '-Ωther'
# creates
- name: verify that afile.txt is absent
file:
path: "{{ remote_tmp_dir_test }}/afile.txt"
state: absent
- name: create afile.txt with create_afile.sh via command
script: create_afile.sh {{ remote_tmp_dir_test | expanduser }}/afile.txt
args:
creates: "{{ remote_tmp_dir_test | expanduser }}/afile.txt"
register: _create_test1
- name: Check state of created file
stat:
path: "{{ remote_tmp_dir_test | expanduser }}/afile.txt"
register: _create_stat1
- name: Run create_afile.sh again to ensure it is skipped
script: create_afile.sh {{ remote_tmp_dir_test | expanduser }}/afile.txt
args:
creates: "{{ remote_tmp_dir_test | expanduser }}/afile.txt"
register: _create_test2
- name: Assert that script reports a change, file was created, second run was skipped
assert:
that:
- _create_test1 is changed
- _create_stat1.stat.exists
- _create_test2 is skipped
# removes
- name: verify that afile.txt is present
file:
path: "{{ remote_tmp_dir_test }}/afile.txt"
state: file
- name: remove afile.txt with remove_afile.sh via command
script: remove_afile.sh {{ remote_tmp_dir_test | expanduser }}/afile.txt
args:
removes: "{{ remote_tmp_dir_test | expanduser }}/afile.txt"
register: _remove_test1
- name: Check state of removed file
stat:
path: "{{ remote_tmp_dir_test | expanduser }}/afile.txt"
register: _remove_stat1
- name: Run remove_afile.sh again to ensure it is skipped
script: remove_afile.sh {{ remote_tmp_dir_test | expanduser }}/afile.txt
args:
removes: "{{ remote_tmp_dir_test | expanduser }}/afile.txt"
register: _remove_test2
- name: Assert that script reports a change, file was removed, second run was skipped
assert:
that:
- _remove_test1 is changed
- not _remove_stat1.stat.exists
- _remove_test2 is skipped
# async
- name: verify that afile.txt is absent
file:
path: "{{ remote_tmp_dir_test }}/afile.txt"
state: absent
- name: test task failure with async param
script: /some/script.sh
async: 2
ignore_errors: true
register: script_result3
- name: assert task with async param failed
assert:
that:
- script_result3 is failed
- script_result3.msg == "async is not supported for this task."
# check mode
- name: Run script to create a file in check mode
script: create_afile.sh {{ remote_tmp_dir_test | expanduser }}/afile2.txt
check_mode: yes
register: _check_mode_test
- debug:
var: _check_mode_test
verbosity: 2
- name: Get state of file created by script
stat:
path: "{{ remote_tmp_dir_test | expanduser }}/afile2.txt"
register: _afile_stat
- debug:
var: _afile_stat
verbosity: 2
- name: Assert that the task was skipped and the script did not make changes
assert:
that:
- _check_mode_test is not changed
- _check_mode_test is skipped
- not _afile_stat.stat.exists
- name: Run script to create a file
script: create_afile.sh {{ remote_tmp_dir_test | expanduser }}/afile2.txt
- name: Run script to create a file in check mode with 'creates' argument
script: create_afile.sh {{ remote_tmp_dir_test | expanduser }}/afile2.txt
args:
creates: "{{ remote_tmp_dir_test | expanduser }}/afile2.txt"
register: _check_mode_test2
check_mode: yes
- debug:
var: _check_mode_test2
verbosity: 2
- name: Assert that task was skipped and message was returned
assert:
that:
- _check_mode_test2 is skipped
- '_check_mode_test2.msg == "{{ remote_tmp_dir_test | expanduser }}/afile2.txt exists, matching creates option"'
- name: Remove afile2.txt
file:
path: "{{ remote_tmp_dir_test | expanduser }}/afile2.txt"
state: absent
- name: Run script to remove a file in check mode with 'removes' argument
script: remove_afile.sh {{ remote_tmp_dir_test | expanduser }}/afile2.txt
args:
removes: "{{ remote_tmp_dir_test | expanduser }}/afile2.txt"
register: _check_mode_test3
check_mode: yes
- debug:
var: _check_mode_test3
verbosity: 2
- name: Assert that task was skipped and message was returned
assert:
that:
- _check_mode_test3 is skipped
- '_check_mode_test3.msg == "{{ remote_tmp_dir_test | expanduser }}/afile2.txt does not exist, matching removes option"'
# executable
- name: Run script with shebang omitted
script: no_shebang.py
args:
executable: "{{ ansible_python_interpreter }}"
register: _shebang_omitted_test
tags:
- noshebang
- name: Assert that script with shebang omitted succeeded
assert:
that:
- _shebang_omitted_test is success
- _shebang_omitted_test.stdout == 'Script with shebang omitted'
tags:
- noshebang
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,376 |
dnf module failure for a package from URI with state=latest update_only=true
|
### Summary
Unable to use `latest` together with the `update_only` option when installing a package from a URL using the DNF module.
Tested on https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm and https://download.postgresql.org/pub/repos/yum/reporpms/EL-9-x86_64/pgdg-redhat-repo-latest.noarch.rpm
The states `latest` or `present` without `update_only` work as expected.
### Issue Type
Bug Report
### Component Name
dnf
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.2]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/zelenya/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.11/site-packages/ansible
ansible collection location = /home/zelenya/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.11.2 (main, May 24 2023, 00:00:00) [GCC 11.3.1 20221121 (Red Hat 11.3.1-4)] (/usr/bin/python3.11)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /etc/ansible/ansible.cfg
```
### OS / Environment
AlmaLinux release 9.2 (Turquoise Kodkod)
### Steps to Reproduce
Clean setup of AlmaLinux (using Vagrant):
```shell
# install certificates
sudo dnf -y install ca-certificates
# install ansible itself
sudo dnf -y install ansible-core
# install a package from a URL
ansible localhost --become -m ansible.builtin.rpm_key -a "key='https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-9' state=present"
# Update the package to the latest version
ansible localhost --become -m ansible.builtin.dnf -a "name='https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm' state=latest update_only=true"
# fails with Error: AttributeError: 'Package' object has no attribute 'rpartition'
```
Based on the Ansible-lint [package-latest](https://ansible.readthedocs.io/projects/lint/rules/package-latest/#correct-code) rule, I'd expect the correct flow to be: install the package with *state=present* and afterwards (if requested) update it to the latest version using *state=latest* together with *update_only=true*.
The intention is to make sure the package is updated when the playbook is executed several months after the initial installation.
### Expected Results
I expect package installation with both options *state=latest* and *update_only=true* to either work or to report misuse, instead of failing with an exception.
### Actual Results
```console
$ ansible localhost --become -m ansible.builtin.dnf -a "name='https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm' state=latest update_only=true"
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: AttributeError: 'Package' object has no attribute 'rpartition'
localhost | FAILED! => {
"changed": false,
"module_stderr": "Traceback (most recent call last):\n File \"<stdin>\", line 16, in <module>\n File \"/usr/lib64/python3.9/runpy.py\", line 225, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib64/python3.9/runpy.py\", line 97, in _run_module_code\n _run_code(code, mod_globals, init_globals,\n File \"/usr/lib64/python3.9/runpy.py\", line 87, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_ansible.builtin.dnf_payload_t7xrps67/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 1460, in <module>\n File \"/tmp/ansible_ansible.builtin.dnf_payload_t7xrps67/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 1449, in main\n File \"/tmp/ansible_ansible.builtin.dnf_payload_t7xrps67/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 1423, in run\n File \"/tmp/ansible_ansible.builtin.dnf_payload_t7xrps67/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 1134, in ensure\n File \"/tmp/ansible_ansible.builtin.dnf_payload_t7xrps67/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 982, in _install_remote_rpms\n File \"/tmp/ansible_ansible.builtin.dnf_payload_t7xrps67/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 949, in _update_only\n File \"/tmp/ansible_ansible.builtin.dnf_payload_t7xrps67/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 795, in _is_installed\n File \"/tmp/ansible_ansible.builtin.dnf_payload_t7xrps67/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 481, in _split_package_arch\nAttributeError: 'Package' object has no attribute 'rpartition'\n",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
$ ansible -vvvv localhost --become -m ansible.builtin.dnf -a "name='https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm' state=latest update_only=true"
ansible [core 2.14.2]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/zelenya/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.11/site-packages/ansible
ansible collection location = /home/zelenya/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.11.2 (main, May 24 2023, 00:00:00) [GCC 11.3.1 20221121 (Red Hat 11.3.1-4)] (/usr/bin/python3.11)
jinja version = 3.1.2
libyaml = True
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
Loading callback plugin minimal of type stdout, v2.0 from /usr/lib/python3.11/site-packages/ansible/plugins/callback/minimal.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: zelenya
<127.0.0.1> EXEC /bin/sh -c 'echo ~zelenya && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/zelenya/.ansible/tmp `"&& mkdir "` echo /home/zelenya/.ansible/tmp/ansible-tmp-1690820028.8473563-4456-128814682130530 `" && echo ansible-tmp-1690820028.8473563-4456-128814682130530="` echo /home/zelenya/.ansible/tmp/ansible-tmp-1690820028.8473563-4456-128814682130530 `" ) && sleep 0'
Using module file /usr/lib/python3.11/site-packages/ansible/modules/dnf.py
<127.0.0.1> PUT /home/zelenya/.ansible/tmp/ansible-local-4452uveazbbj/tmpl3btemgd TO /home/zelenya/.ansible/tmp/ansible-tmp-1690820028.8473563-4456-128814682130530/AnsiballZ_dnf.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/zelenya/.ansible/tmp/ansible-tmp-1690820028.8473563-4456-128814682130530/ /home/zelenya/.ansible/tmp/ansible-tmp-1690820028.8473563-4456-128814682130530/AnsiballZ_dnf.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-zrmfzgclfrznlmkupsgkhvmruypmywyc ; /usr/bin/python3.11 /home/zelenya/.ansible/tmp/ansible-tmp-1690820028.8473563-4456-128814682130530/AnsiballZ_dnf.py'"'"' && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/zelenya/.ansible/tmp/ansible-tmp-1690820028.8473563-4456-128814682130530/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
File "<stdin>", line 16, in <module>
File "/usr/lib64/python3.9/runpy.py", line 225, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "/usr/lib64/python3.9/runpy.py", line 97, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/usr/lib64/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py", line 1460, in <module>
File "/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py", line 1449, in main
File "/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py", line 1423, in run
File "/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py", line 1134, in ensure
File "/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py", line 982, in _install_remote_rpms
File "/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py", line 949, in _update_only
File "/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py", line 795, in _is_installed
File "/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py", line 481, in _split_package_arch
AttributeError: 'Package' object has no attribute 'rpartition'
localhost | FAILED! => {
"changed": false,
"module_stderr": "Traceback (most recent call last):\n File \"<stdin>\", line 16, in <module>\n File \"/usr/lib64/python3.9/runpy.py\", line 225, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib64/python3.9/runpy.py\", line 97, in _run_module_code\n _run_code(code, mod_globals, init_globals,\n File \"/usr/lib64/python3.9/runpy.py\", line 87, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 1460, in <module>\n File \"/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 1449, in main\n File \"/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 1423, in run\n File \"/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 1134, in ensure\n File \"/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 982, in _install_remote_rpms\n File \"/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 949, in _update_only\n File \"/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 795, in _is_installed\n File \"/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 481, in _split_package_arch\nAttributeError: 'Package' object has no attribute 'rpartition'\n",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
```
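The final frame shows the failure mode: `_split_package_arch()` assumes it is given a package *name string* and calls `.rpartition('.')` on it, but on the `update_only` path for remote RPMs it receives a `dnf.package.Package` object instead. A minimal sketch of the same error, with a hypothetical stand-in class:
```python
# Hypothetical stand-in for dnf.package.Package -- just enough to show
# why a str-only helper blows up when handed a package object.
class Package:
    name = "epel-release"
    arch = "noarch"

def split_package_arch(packagename):
    # simplified version of DnfModule._split_package_arch(): assumes str
    name, _, arch = packagename.rpartition(".")
    return (name, arch) if arch == "noarch" else (packagename, None)

print(split_package_arch("epel-release.noarch"))  # ('epel-release', 'noarch')
split_package_arch(Package())  # AttributeError: 'Package' object
                               # has no attribute 'rpartition'
```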
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81376
|
https://github.com/ansible/ansible/pull/81568
|
da63f32d59fe882bc77532e734af7348b65cb6cb
|
4ab5ecbe814fca5dcdf25fb162f098fd3162b1c4
| 2023-07-31T16:14:35Z |
python
| 2023-08-28T08:48:45Z |
changelogs/fragments/dnf-update-only-latest.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,376 |
dnf module failure for a package from URI with state=latest update_only=true
|
### Summary
Unable to use `latest` together with the `update_only` option when installing a package from a URL using the DNF module.
Tested on https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm and https://download.postgresql.org/pub/repos/yum/reporpms/EL-9-x86_64/pgdg-redhat-repo-latest.noarch.rpm
The states `latest` or `present` without `update_only` work as expected.
### Issue Type
Bug Report
### Component Name
dnf
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.2]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/zelenya/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.11/site-packages/ansible
ansible collection location = /home/zelenya/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.11.2 (main, May 24 2023, 00:00:00) [GCC 11.3.1 20221121 (Red Hat 11.3.1-4)] (/usr/bin/python3.11)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /etc/ansible/ansible.cfg
```
### OS / Environment
AlmaLinux release 9.2 (Turquoise Kodkod)
### Steps to Reproduce
Clean setup of AlmaLinux (using Vagrant):
```shell
# install certificates
sudo dnf -y install ca-certificates
# install ansible itself
sudo dnf -y install ansible-core
# install a package from a URL
ansible localhost --become -m ansible.builtin.rpm_key -a "key='https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-9' state=present"
# Update the package to the latest version
ansible localhost --become -m ansible.builtin.dnf -a "name='https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm' state=latest update_only=true"
# fails with Error: AttributeError: 'Package' object has no attribute 'rpartition'
```
Based on the Ansible-lint [package-latest](https://ansible.readthedocs.io/projects/lint/rules/package-latest/#correct-code) rule, I'd expect the correct flow to be: install the package with *state=present* and afterwards (if requested) update it to the latest version using *state=latest* together with *update_only=true*.
The intention is to make sure the package is updated when the playbook is executed several months after the initial installation.
### Expected Results
I expect package installation with both options *state=latest* and *update_only=true* to either work or to report misuse, instead of failing with an exception.
### Actual Results
```console
$ ansible localhost --become -m ansible.builtin.dnf -a "name='https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm' state=latest update_only=true"
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: AttributeError: 'Package' object has no attribute 'rpartition'
localhost | FAILED! => {
"changed": false,
"module_stderr": "Traceback (most recent call last):\n File \"<stdin>\", line 16, in <module>\n File \"/usr/lib64/python3.9/runpy.py\", line 225, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib64/python3.9/runpy.py\", line 97, in _run_module_code\n _run_code(code, mod_globals, init_globals,\n File \"/usr/lib64/python3.9/runpy.py\", line 87, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_ansible.builtin.dnf_payload_t7xrps67/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 1460, in <module>\n File \"/tmp/ansible_ansible.builtin.dnf_payload_t7xrps67/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 1449, in main\n File \"/tmp/ansible_ansible.builtin.dnf_payload_t7xrps67/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 1423, in run\n File \"/tmp/ansible_ansible.builtin.dnf_payload_t7xrps67/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 1134, in ensure\n File \"/tmp/ansible_ansible.builtin.dnf_payload_t7xrps67/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 982, in _install_remote_rpms\n File \"/tmp/ansible_ansible.builtin.dnf_payload_t7xrps67/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 949, in _update_only\n File \"/tmp/ansible_ansible.builtin.dnf_payload_t7xrps67/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 795, in _is_installed\n File \"/tmp/ansible_ansible.builtin.dnf_payload_t7xrps67/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 481, in _split_package_arch\nAttributeError: 'Package' object has no attribute 'rpartition'\n",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
$ ansible -vvvv localhost --become -m ansible.builtin.dnf -a "name='https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm' state=latest update_only=true"
ansible [core 2.14.2]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/zelenya/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.11/site-packages/ansible
ansible collection location = /home/zelenya/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.11.2 (main, May 24 2023, 00:00:00) [GCC 11.3.1 20221121 (Red Hat 11.3.1-4)] (/usr/bin/python3.11)
jinja version = 3.1.2
libyaml = True
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
Loading callback plugin minimal of type stdout, v2.0 from /usr/lib/python3.11/site-packages/ansible/plugins/callback/minimal.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: zelenya
<127.0.0.1> EXEC /bin/sh -c 'echo ~zelenya && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/zelenya/.ansible/tmp `"&& mkdir "` echo /home/zelenya/.ansible/tmp/ansible-tmp-1690820028.8473563-4456-128814682130530 `" && echo ansible-tmp-1690820028.8473563-4456-128814682130530="` echo /home/zelenya/.ansible/tmp/ansible-tmp-1690820028.8473563-4456-128814682130530 `" ) && sleep 0'
Using module file /usr/lib/python3.11/site-packages/ansible/modules/dnf.py
<127.0.0.1> PUT /home/zelenya/.ansible/tmp/ansible-local-4452uveazbbj/tmpl3btemgd TO /home/zelenya/.ansible/tmp/ansible-tmp-1690820028.8473563-4456-128814682130530/AnsiballZ_dnf.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/zelenya/.ansible/tmp/ansible-tmp-1690820028.8473563-4456-128814682130530/ /home/zelenya/.ansible/tmp/ansible-tmp-1690820028.8473563-4456-128814682130530/AnsiballZ_dnf.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-zrmfzgclfrznlmkupsgkhvmruypmywyc ; /usr/bin/python3.11 /home/zelenya/.ansible/tmp/ansible-tmp-1690820028.8473563-4456-128814682130530/AnsiballZ_dnf.py'"'"' && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/zelenya/.ansible/tmp/ansible-tmp-1690820028.8473563-4456-128814682130530/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
File "<stdin>", line 16, in <module>
File "/usr/lib64/python3.9/runpy.py", line 225, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "/usr/lib64/python3.9/runpy.py", line 97, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/usr/lib64/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py", line 1460, in <module>
File "/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py", line 1449, in main
File "/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py", line 1423, in run
File "/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py", line 1134, in ensure
File "/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py", line 982, in _install_remote_rpms
File "/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py", line 949, in _update_only
File "/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py", line 795, in _is_installed
File "/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py", line 481, in _split_package_arch
AttributeError: 'Package' object has no attribute 'rpartition'
localhost | FAILED! => {
"changed": false,
"module_stderr": "Traceback (most recent call last):\n File \"<stdin>\", line 16, in <module>\n File \"/usr/lib64/python3.9/runpy.py\", line 225, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib64/python3.9/runpy.py\", line 97, in _run_module_code\n _run_code(code, mod_globals, init_globals,\n File \"/usr/lib64/python3.9/runpy.py\", line 87, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 1460, in <module>\n File \"/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 1449, in main\n File \"/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 1423, in run\n File \"/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 1134, in ensure\n File \"/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 982, in _install_remote_rpms\n File \"/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 949, in _update_only\n File \"/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 795, in _is_installed\n File \"/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 481, in _split_package_arch\nAttributeError: 'Package' object has no attribute 'rpartition'\n",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81376
|
https://github.com/ansible/ansible/pull/81568
|
da63f32d59fe882bc77532e734af7348b65cb6cb
|
4ab5ecbe814fca5dcdf25fb162f098fd3162b1c4
| 2023-07-31T16:14:35Z |
python
| 2023-08-28T08:48:45Z |
lib/ansible/modules/dnf.py
|
# -*- coding: utf-8 -*-
# Copyright 2015 Cristian van Ee <cristian at cvee.org>
# Copyright 2015 Igor Gnatenko <[email protected]>
# Copyright 2018 Adam Miller <[email protected]>
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: dnf
version_added: 1.9
short_description: Manages packages with the I(dnf) package manager
description:
- Installs, upgrades, removes, and lists packages and groups with the I(dnf) package manager.
options:
use_backend:
description:
- By default, this module will select the backend based on the C(ansible_pkg_mgr) fact.
default: "auto"
choices: [ auto, dnf4, dnf5 ]
type: str
version_added: 2.15
name:
description:
- "A package name or package specifier with version, like C(name-1.0).
When using state=latest, this can be '*' which means run: dnf -y update.
You can also pass a url or a local path to a rpm file.
To operate on several packages this can accept a comma separated string of packages or a list of packages."
- Comparison operators for package version are valid here C(>), C(<), C(>=), C(<=). Example - C(name >= 1.0).
Spaces around the operator are required.
- You can also pass an absolute path for a binary which is provided by the package to install.
See examples for more information.
aliases:
- pkg
type: list
elements: str
default: []
list:
description:
- Various (non-idempotent) commands for usage with C(/usr/bin/ansible) and I(not) playbooks.
Use M(ansible.builtin.package_facts) instead of the O(list) argument as a best practice.
type: str
state:
description:
- Whether to install (V(present), V(latest)), or remove (V(absent)) a package.
- Default is V(None), however in effect the default action is V(present) unless the O(autoremove) option is
enabled for this module, then V(absent) is inferred.
choices: ['absent', 'present', 'installed', 'removed', 'latest']
type: str
enablerepo:
description:
- I(Repoid) of repositories to enable for the install/update operation.
These repos will not persist beyond the transaction.
When specifying multiple repos, separate them with a ",".
type: list
elements: str
default: []
disablerepo:
description:
- I(Repoid) of repositories to disable for the install/update operation.
These repos will not persist beyond the transaction.
When specifying multiple repos, separate them with a ",".
type: list
elements: str
default: []
conf_file:
description:
- The remote dnf configuration file to use for the transaction.
type: str
disable_gpg_check:
description:
- Whether to disable the GPG checking of signatures of packages being
installed. Has an effect only if O(state) is V(present) or V(latest).
- This setting affects packages installed from a repository as well as
"local" packages installed from the filesystem or a URL.
type: bool
default: 'no'
installroot:
description:
- Specifies an alternative installroot, relative to which all packages
will be installed.
version_added: "2.3"
default: "/"
type: str
releasever:
description:
- Specifies an alternative release from which all packages will be
installed.
version_added: "2.6"
type: str
autoremove:
description:
- If V(true), removes all "leaf" packages from the system that were originally
installed as dependencies of user-installed packages but which are no longer
required by any such package. Should be used alone or when O(state) is V(absent).
type: bool
default: "no"
version_added: "2.4"
exclude:
description:
- Package name(s) to exclude when state=present, or latest. This can be a
list or a comma separated string.
version_added: "2.7"
type: list
elements: str
default: []
skip_broken:
description:
- Skip all unavailable packages or packages with broken dependencies
without raising an error. Equivalent to passing the --skip-broken option.
type: bool
default: "no"
version_added: "2.7"
update_cache:
description:
- Force dnf to check if cache is out of date and redownload if needed.
Has an effect only if O(state) is V(present) or V(latest).
type: bool
default: "no"
aliases: [ expire-cache ]
version_added: "2.7"
update_only:
description:
- When using latest, only update installed packages. Do not install packages.
- Has an effect only if O(state) is V(latest)
default: "no"
type: bool
version_added: "2.7"
security:
description:
- If set to V(true), and O(state=latest) then only installs updates that have been marked security related.
- Note that, similar to C(dnf upgrade-minimal), this filter applies to dependencies as well.
type: bool
default: "no"
version_added: "2.7"
bugfix:
description:
- If set to V(true), and O(state=latest) then only installs updates that have been marked bugfix related.
- Note that, similar to C(dnf upgrade-minimal), this filter applies to dependencies as well.
default: "no"
type: bool
version_added: "2.7"
enable_plugin:
description:
- I(Plugin) name to enable for the install/update operation.
The enabled plugin will not persist beyond the transaction.
version_added: "2.7"
type: list
elements: str
default: []
disable_plugin:
description:
- I(Plugin) name to disable for the install/update operation.
The disabled plugins will not persist beyond the transaction.
version_added: "2.7"
type: list
default: []
elements: str
disable_excludes:
description:
- Disable the excludes defined in DNF config files.
- If set to V(all), disables all excludes.
- If set to V(main), disable excludes defined in [main] in dnf.conf.
- If set to V(repoid), disable excludes defined for given repo id.
version_added: "2.7"
type: str
validate_certs:
description:
- This only applies if using an https url as the source of the rpm, e.g. for localinstall. If set to V(false), the SSL certificates will not be validated.
- This should only be set to V(false) on personally controlled sites using self-signed certificates, as it avoids verifying the source site.
type: bool
default: "yes"
version_added: "2.7"
sslverify:
description:
- Disables SSL validation of the repository server for this transaction.
- This should be set to V(false) if one of the configured repositories is using an untrusted or self-signed certificate.
type: bool
default: "yes"
version_added: "2.13"
allow_downgrade:
description:
- Specify if the named package and version is allowed to downgrade
a maybe already installed higher version of that package.
Note that setting allow_downgrade=True can make this module
behave in a non-idempotent way. The task could end up with a set
of packages that does not match the complete list of specified
packages to install (because dependencies between the downgraded
package and others can cause changes to the packages which were
in the earlier transaction).
type: bool
default: "no"
version_added: "2.7"
install_repoquery:
description:
- This is effectively a no-op in DNF as it is not needed with DNF, but is an accepted parameter for feature
parity/compatibility with the M(ansible.builtin.yum) module.
type: bool
default: "yes"
version_added: "2.7"
download_only:
description:
- Only download the packages, do not install them.
default: "no"
type: bool
version_added: "2.7"
lock_timeout:
description:
- Amount of time to wait for the dnf lockfile to be freed.
required: false
default: 30
type: int
version_added: "2.8"
install_weak_deps:
description:
- Will also install all packages linked by a weak dependency relation.
type: bool
default: "yes"
version_added: "2.8"
download_dir:
description:
- Specifies an alternate directory to store packages.
- Has an effect only if O(download_only) is specified.
type: str
version_added: "2.8"
allowerasing:
description:
- If V(true) it allows erasing of installed packages to resolve dependencies.
required: false
type: bool
default: "no"
version_added: "2.10"
nobest:
description:
- Set best option to False, so that transactions are not limited to best candidates only.
required: false
type: bool
default: "no"
version_added: "2.11"
cacheonly:
description:
- Tells dnf to run entirely from system cache; does not download or update metadata.
type: bool
default: "no"
version_added: "2.12"
extends_documentation_fragment:
- action_common_attributes
- action_common_attributes.flow
attributes:
action:
details: In the case of dnf, it has 2 action plugins that use it under the hood, M(ansible.builtin.yum) and M(ansible.builtin.package).
support: partial
async:
support: none
bypass_host_loop:
support: none
check_mode:
support: full
diff_mode:
support: full
platform:
platforms: rhel
notes:
- When used with a C(loop:) each package will be processed individually, it is much more efficient to pass the list directly to the I(name) option.
- Group removal doesn't work if the group was installed with Ansible because
upstream dnf's API doesn't properly mark groups as installed, therefore upon
removal the module is unable to detect that the group is installed
(https://bugzilla.redhat.com/show_bug.cgi?id=1620324)
requirements:
- "python >= 2.6"
- python-dnf
- "for the autoremove option you need dnf >= 2.0.1"
author:
- Igor Gnatenko (@ignatenkobrain) <[email protected]>
- Cristian van Ee (@DJMuggs) <cristian at cvee.org>
- Berend De Schouwer (@berenddeschouwer)
- Adam Miller (@maxamillion) <[email protected]>
'''
EXAMPLES = '''
- name: Install the latest version of Apache
ansible.builtin.dnf:
name: httpd
state: latest
- name: Install Apache >= 2.4
ansible.builtin.dnf:
name: httpd >= 2.4
state: present
- name: Install the latest version of Apache and MariaDB
ansible.builtin.dnf:
name:
- httpd
- mariadb-server
state: latest
- name: Remove the Apache package
ansible.builtin.dnf:
name: httpd
state: absent
- name: Install the latest version of Apache from the testing repo
ansible.builtin.dnf:
name: httpd
enablerepo: testing
state: present
- name: Upgrade all packages
ansible.builtin.dnf:
name: "*"
state: latest
- name: Update the webserver, depending on which is installed on the system. Do not install the other one
ansible.builtin.dnf:
name:
- httpd
- nginx
state: latest
update_only: yes
- name: Install the nginx rpm from a remote repo
ansible.builtin.dnf:
name: 'http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm'
state: present
- name: Install nginx rpm from a local file
ansible.builtin.dnf:
name: /usr/local/src/nginx-release-centos-6-0.el6.ngx.noarch.rpm
state: present
- name: Install Package based upon the file it provides
ansible.builtin.dnf:
name: /usr/bin/cowsay
state: present
- name: Install the 'Development tools' package group
ansible.builtin.dnf:
name: '@Development tools'
state: present
- name: Autoremove unneeded packages installed as dependencies
ansible.builtin.dnf:
autoremove: yes
- name: Uninstall httpd but keep its dependencies
ansible.builtin.dnf:
name: httpd
state: absent
autoremove: no
- name: Install a modularity appstream with defined stream and profile
ansible.builtin.dnf:
name: '@postgresql:9.6/client'
state: present
- name: Install a modularity appstream with defined stream
ansible.builtin.dnf:
name: '@postgresql:9.6'
state: present
- name: Install a modularity appstream with defined profile
ansible.builtin.dnf:
name: '@postgresql/client'
state: present
'''
import os
import re
import sys
from ansible.module_utils.common.text.converters import to_native, to_text
from ansible.module_utils.urls import fetch_file
from ansible.module_utils.six import text_type
from ansible.module_utils.compat.version import LooseVersion
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.common.respawn import has_respawned, probe_interpreters_for_module, respawn_module
from ansible.module_utils.yumdnf import YumDnf, yumdnf_argument_spec
# NOTE dnf Python bindings import is postponed, see DnfModule._ensure_dnf(),
# because we need AnsibleModule object to use get_best_parsable_locale()
# to set proper locale before importing dnf to be able to scrape
# the output in some cases (FIXME?).
dnf = None
class DnfModule(YumDnf):
"""
DNF Ansible module back-end implementation
"""
def __init__(self, module):
# This populates instance vars for all argument spec params
super(DnfModule, self).__init__(module)
self._ensure_dnf()
self.lockfile = "/var/cache/dnf/*_lock.pid"
self.pkg_mgr_name = "dnf"
try:
self.with_modules = dnf.base.WITH_MODULES
except AttributeError:
self.with_modules = False
# DNF specific args that are not part of YumDnf
self.allowerasing = self.module.params['allowerasing']
self.nobest = self.module.params['nobest']
def is_lockfile_pid_valid(self):
# FIXME? it looks like DNF takes care of invalid lock files itself?
# https://github.com/ansible/ansible/issues/57189
return True
def _sanitize_dnf_error_msg_install(self, spec, error):
"""
For unhandled dnf.exceptions.Error scenarios, there are certain error
messages we want to filter in an install scenario. Do that here.
"""
if (
to_text("no package matched") in to_text(error) or
to_text("No match for argument:") in to_text(error)
):
return "No package {0} available.".format(spec)
return error
def _sanitize_dnf_error_msg_remove(self, spec, error):
"""
For unhandled dnf.exceptions.Error scenarios, there are certain error
messages we want to ignore in a removal scenario as known benign
failures. Do that here.
"""
if (
'no package matched' in to_native(error) or
'No match for argument:' in to_native(error)
):
return (False, "{0} is not installed".format(spec))
# Return value is tuple of:
# ("Is this actually a failure?", "Error Message")
return (True, error)
def _package_dict(self, package):
"""Return a dictionary of information for the package."""
# NOTE: This no longer contains the 'dnfstate' field because it is
# already known based on the query type.
result = {
'name': package.name,
'arch': package.arch,
'epoch': str(package.epoch),
'release': package.release,
'version': package.version,
'repo': package.repoid}
# envra format for alignment with the yum module
result['envra'] = '{epoch}:{name}-{version}-{release}.{arch}'.format(**result)
# keep nevra key for backwards compat as it was previously
# defined with a value in envra format
result['nevra'] = result['envra']
if package.installtime == 0:
result['yumstate'] = 'available'
else:
result['yumstate'] = 'installed'
return result
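    # Example _package_dict() result for a hypothetical installed package:
    #   {'name': 'epel-release', 'arch': 'noarch', 'epoch': '0',
    #    'release': '7.el9', 'version': '9', 'repo': '@System',
    #    'envra': '0:epel-release-9-7.el9.noarch',
    #    'nevra': '0:epel-release-9-7.el9.noarch', 'yumstate': 'installed'}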
def _split_package_arch(self, packagename):
# This list was auto generated on a Fedora 28 system with the following one-liner
# printf '[ '; for arch in $(ls /usr/lib/rpm/platform); do printf '"%s", ' ${arch%-linux}; done; printf ']\n'
redhat_rpm_arches = [
"aarch64", "alphaev56", "alphaev5", "alphaev67", "alphaev6", "alpha",
"alphapca56", "amd64", "armv3l", "armv4b", "armv4l", "armv5tejl", "armv5tel",
"armv5tl", "armv6hl", "armv6l", "armv7hl", "armv7hnl", "armv7l", "athlon",
"geode", "i386", "i486", "i586", "i686", "ia32e", "ia64", "m68k", "mips64el",
"mips64", "mips64r6el", "mips64r6", "mipsel", "mips", "mipsr6el", "mipsr6",
"noarch", "pentium3", "pentium4", "ppc32dy4", "ppc64iseries", "ppc64le", "ppc64",
"ppc64p7", "ppc64pseries", "ppc8260", "ppc8560", "ppciseries", "ppc", "ppcpseries",
"riscv64", "s390", "s390x", "sh3", "sh4a", "sh4", "sh", "sparc64", "sparc64v",
"sparc", "sparcv8", "sparcv9", "sparcv9v", "x86_64"
]
name, delimiter, arch = packagename.rpartition('.')
if name and arch and arch in redhat_rpm_arches:
return name, arch
return packagename, None
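    # NOTE: _split_package_arch() only works on package *name strings*,
    # e.g. 'httpd.x86_64' -> ('httpd', 'x86_64') and 'httpd' -> ('httpd', None).
    # Callers that hand it a dnf.package.Package object instead (see
    # _update_only() below, reached via _install_remote_rpms()) fail with
    # AttributeError: 'Package' object has no attribute 'rpartition',
    # which is the traceback reported in this issue.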
def _packagename_dict(self, packagename):
"""
Return a dictionary of information for a package name string or None
if the package name doesn't contain at least all NVR elements
"""
if packagename[-4:] == '.rpm':
packagename = packagename[:-4]
rpm_nevr_re = re.compile(r'(\S+)-(?:(\d*):)?(.*)-(~?\w+[\w.+]*)')
try:
arch = None
nevr, arch = self._split_package_arch(packagename)
if arch:
packagename = nevr
rpm_nevr_match = rpm_nevr_re.match(packagename)
if rpm_nevr_match:
name, epoch, version, release = rpm_nevr_re.match(packagename).groups()
if not version or not version.split('.')[0].isdigit():
return None
else:
return None
except AttributeError as e:
self.module.fail_json(
msg='Error attempting to parse package: %s, %s' % (packagename, to_native(e)),
rc=1,
results=[]
)
if not epoch:
epoch = "0"
if ':' in name:
epoch_name = name.split(":")
epoch = epoch_name[0]
name = ''.join(epoch_name[1:])
result = {
'name': name,
'epoch': epoch,
'release': release,
'version': version,
}
return result
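    # Worked example (hypothetical NVR string):
    #   _packagename_dict('epel-release-9-7.el9')
    #   -> {'name': 'epel-release', 'epoch': '0', 'version': '9', 'release': '7.el9'}
    #   _packagename_dict('epel-release') -> None (no version/release present)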
# Original implementation from yum.rpmUtils.miscutils (GPLv2+)
# http://yum.baseurl.org/gitweb?p=yum.git;a=blob;f=rpmUtils/miscutils.py
def _compare_evr(self, e1, v1, r1, e2, v2, r2):
# return 1: a is newer than b
# 0: a and b are the same version
# -1: b is newer than a
if e1 is None:
e1 = '0'
else:
e1 = str(e1)
v1 = str(v1)
r1 = str(r1)
if e2 is None:
e2 = '0'
else:
e2 = str(e2)
v2 = str(v2)
r2 = str(r2)
rc = dnf.rpm.rpm.labelCompare((e1, v1, r1), (e2, v2, r2))
return rc
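    # Worked example (rpm label-compare semantics):
    #   _compare_evr('0', '1.0', '2', '0', '1.0', '10') == -1
    #   because release '10' sorts after '2' under rpmvercmp,
    #   i.e. the second EVR is newer.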
def _ensure_dnf(self):
locale = get_best_parsable_locale(self.module)
os.environ['LC_ALL'] = os.environ['LC_MESSAGES'] = locale
os.environ['LANGUAGE'] = os.environ['LANG'] = locale
global dnf
try:
import dnf
import dnf.cli
import dnf.const
import dnf.exceptions
import dnf.subject
import dnf.util
HAS_DNF = True
except ImportError:
HAS_DNF = False
if HAS_DNF:
return
system_interpreters = ['/usr/libexec/platform-python',
'/usr/bin/python3',
'/usr/bin/python2',
'/usr/bin/python']
if not has_respawned():
# probe well-known system Python locations for accessible bindings, favoring py3
interpreter = probe_interpreters_for_module(system_interpreters, 'dnf')
if interpreter:
# respawn under the interpreter where the bindings should be found
respawn_module(interpreter)
# end of the line for this module, the process will exit here once the respawned module completes
# done all we can do, something is just broken (auto-install isn't useful anymore with respawn, so it was removed)
self.module.fail_json(
msg="Could not import the dnf python module using {0} ({1}). "
"Please install `python3-dnf` or `python2-dnf` package or ensure you have specified the "
"correct ansible_python_interpreter. (attempted {2})"
.format(sys.executable, sys.version.replace('\n', ''), system_interpreters),
results=[]
)
def _configure_base(self, base, conf_file, disable_gpg_check, installroot='/', sslverify=True):
"""Configure the dnf Base object."""
conf = base.conf
# Change the configuration file path if provided, this must be done before conf.read() is called
if conf_file:
# Fail if we can't read the configuration file.
if not os.access(conf_file, os.R_OK):
self.module.fail_json(
msg="cannot read configuration file", conf_file=conf_file,
results=[],
)
else:
conf.config_file_path = conf_file
# Read the configuration file
conf.read()
# Turn off debug messages in the output
conf.debuglevel = 0
# Set whether to check gpg signatures
conf.gpgcheck = not disable_gpg_check
conf.localpkg_gpgcheck = not disable_gpg_check
# Don't prompt for user confirmations
conf.assumeyes = True
# Set certificate validation
conf.sslverify = sslverify
# Set installroot
conf.installroot = installroot
# Load substitutions from the filesystem
conf.substitutions.update_from_etc(installroot)
# Handle different DNF versions immutable mutable datatypes and
# dnf v1/v2/v3
#
# In DNF < 3.0 are lists, and modifying them works
# In DNF >= 3.0 < 3.6 are lists, but modifying them doesn't work
# In DNF >= 3.6 have been turned into tuples, to communicate that modifying them doesn't work
#
# https://www.happyassassin.net/2018/06/27/adams-debugging-adventures-the-immutable-mutable-object/
#
# Set excludes
if self.exclude:
_excludes = list(conf.exclude)
_excludes.extend(self.exclude)
conf.exclude = _excludes
# Set disable_excludes
if self.disable_excludes:
_disable_excludes = list(conf.disable_excludes)
if self.disable_excludes not in _disable_excludes:
_disable_excludes.append(self.disable_excludes)
conf.disable_excludes = _disable_excludes
# Set releasever
if self.releasever is not None:
conf.substitutions['releasever'] = self.releasever
if conf.substitutions.get('releasever') is None:
self.module.warn(
'Unable to detect release version (use "releasever" option to specify release version)'
)
# values of conf.substitutions are expected to be strings
# setting this to an empty string instead of None appears to mimic the DNF CLI behavior
conf.substitutions['releasever'] = ''
# Set skip_broken (in dnf this is strict=0)
if self.skip_broken:
conf.strict = 0
# Set best
if self.nobest:
conf.best = 0
if self.download_only:
conf.downloadonly = True
if self.download_dir:
conf.destdir = self.download_dir
if self.cacheonly:
conf.cacheonly = True
# Default in dnf upstream is true
conf.clean_requirements_on_remove = self.autoremove
# Default in dnf (and module default) is True
conf.install_weak_deps = self.install_weak_deps
def _specify_repositories(self, base, disablerepo, enablerepo):
"""Enable and disable repositories matching the provided patterns."""
base.read_all_repos()
repos = base.repos
# Disable repositories
for repo_pattern in disablerepo:
if repo_pattern:
for repo in repos.get_matching(repo_pattern):
repo.disable()
# Enable repositories
for repo_pattern in enablerepo:
if repo_pattern:
for repo in repos.get_matching(repo_pattern):
repo.enable()
def _base(self, conf_file, disable_gpg_check, disablerepo, enablerepo, installroot, sslverify):
"""Return a fully configured dnf Base object."""
base = dnf.Base()
self._configure_base(base, conf_file, disable_gpg_check, installroot, sslverify)
try:
# this method has been supported in dnf-4.2.17-6 or later
# https://bugzilla.redhat.com/show_bug.cgi?id=1788212
base.setup_loggers()
except AttributeError:
pass
try:
base.init_plugins(set(self.disable_plugin), set(self.enable_plugin))
base.pre_configure_plugins()
except AttributeError:
pass # older versions of dnf didn't require this and don't have these methods
self._specify_repositories(base, disablerepo, enablerepo)
try:
base.configure_plugins()
except AttributeError:
pass # older versions of dnf didn't require this and don't have these methods
try:
if self.update_cache:
try:
base.update_cache()
except dnf.exceptions.RepoError as e:
self.module.fail_json(
msg="{0}".format(to_text(e)),
results=[],
rc=1
)
base.fill_sack(load_system_repo='auto')
except dnf.exceptions.RepoError as e:
self.module.fail_json(
msg="{0}".format(to_text(e)),
results=[],
rc=1
)
add_security_filters = getattr(base, "add_security_filters", None)
if callable(add_security_filters):
filters = {}
if self.bugfix:
filters.setdefault('types', []).append('bugfix')
if self.security:
filters.setdefault('types', []).append('security')
if filters:
add_security_filters('eq', **filters)
else:
filters = []
if self.bugfix:
key = {'advisory_type__eq': 'bugfix'}
filters.append(base.sack.query().upgrades().filter(**key))
if self.security:
key = {'advisory_type__eq': 'security'}
filters.append(base.sack.query().upgrades().filter(**key))
if filters:
base._update_security_filters = filters
return base
def list_items(self, command):
"""List package info based on the command."""
# Rename updates to upgrades
if command == 'updates':
command = 'upgrades'
# Return the corresponding packages
if command in ['installed', 'upgrades', 'available']:
results = [
self._package_dict(package)
for package in getattr(self.base.sack.query(), command)()]
# Return the enabled repository ids
elif command in ['repos', 'repositories']:
results = [
{'repoid': repo.id, 'state': 'enabled'}
for repo in self.base.repos.iter_enabled()]
# Return any matching packages
else:
packages = dnf.subject.Subject(command).get_best_query(self.base.sack)
results = [self._package_dict(package) for package in packages]
self.module.exit_json(msg="", results=results)
def _is_installed(self, pkg):
installed = self.base.sack.query().installed()
package_spec = {}
name, arch = self._split_package_arch(pkg)
if arch:
package_spec['arch'] = arch
package_details = self._packagename_dict(pkg)
if package_details:
package_details['epoch'] = int(package_details['epoch'])
package_spec.update(package_details)
else:
package_spec['name'] = name
return bool(installed.filter(**package_spec))
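    # NOTE: _is_installed() expects `pkg` to be a package spec *string*; it is
    # passed through _split_package_arch() and _packagename_dict(), both of
    # which call str methods on it. Passing a dnf Package object here (as
    # _update_only() does for remote RPMs) triggers the AttributeError
    # reported above.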
def _is_newer_version_installed(self, pkg_name):
candidate_pkg = self._packagename_dict(pkg_name)
if not candidate_pkg:
# The user didn't provide a versioned rpm, so version checking is
# not required
return False
installed = self.base.sack.query().installed()
installed_pkg = installed.filter(name=candidate_pkg['name']).run()
if installed_pkg:
installed_pkg = installed_pkg[0]
# this looks weird but one is a dict and the other is a dnf.Package
evr_cmp = self._compare_evr(
installed_pkg.epoch, installed_pkg.version, installed_pkg.release,
candidate_pkg['epoch'], candidate_pkg['version'], candidate_pkg['release'],
)
return evr_cmp == 1
else:
return False
def _mark_package_install(self, pkg_spec, upgrade=False):
"""Mark the package for install."""
is_newer_version_installed = self._is_newer_version_installed(pkg_spec)
is_installed = self._is_installed(pkg_spec)
try:
if is_newer_version_installed:
if self.allow_downgrade:
# dnf only does allow_downgrade, we have to handle this ourselves
# because it allows a possibility for non-idempotent transactions
# on a system's package set (pending the yum repo has many old
# NVRs indexed)
if upgrade:
if is_installed: # Case 1
# TODO: Is this case reachable?
#
# _is_installed() demands a name (*not* NVR) or else is always False
# (wildcards are treated literally).
#
# Meanwhile, _is_newer_version_installed() demands something versioned
# or else is always false.
#
# I fail to see how they can both be true at the same time for any
# given pkg_spec. -re
self.base.upgrade(pkg_spec)
else: # Case 2
self.base.install(pkg_spec, strict=self.base.conf.strict)
else: # Case 3
self.base.install(pkg_spec, strict=self.base.conf.strict)
else: # Case 4, Nothing to do, report back
pass
elif is_installed: # A potentially older (or same) version is installed
if upgrade: # Case 5
self.base.upgrade(pkg_spec)
else: # Case 6, Nothing to do, report back
pass
else: # Case 7, The package is not installed, simply install it
self.base.install(pkg_spec, strict=self.base.conf.strict)
return {'failed': False, 'msg': '', 'failure': '', 'rc': 0}
except dnf.exceptions.MarkingError as e:
return {
'failed': True,
'msg': "No package {0} available.".format(pkg_spec),
'failure': " ".join((pkg_spec, to_native(e))),
'rc': 1,
"results": []
}
except dnf.exceptions.DepsolveError as e:
return {
'failed': True,
'msg': "Depsolve Error occurred for package {0}.".format(pkg_spec),
'failure': " ".join((pkg_spec, to_native(e))),
'rc': 1,
"results": []
}
except dnf.exceptions.Error as e:
if to_text("already installed") in to_text(e):
return {'failed': False, 'msg': '', 'failure': ''}
else:
return {
'failed': True,
'msg': "Unknown Error occurred for package {0}.".format(pkg_spec),
'failure': " ".join((pkg_spec, to_native(e))),
'rc': 1,
"results": []
}
def _whatprovides(self, filepath):
self.base.read_all_repos()
available = self.base.sack.query().available()
# Search in file
files_filter = available.filter(file=filepath)
# And Search in provides
pkg_spec = files_filter.union(available.filter(provides=filepath)).run()
if pkg_spec:
return pkg_spec[0].name
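    # Example (mirrors the EXAMPLES section above): a file-path spec such as
    #   _whatprovides('/usr/bin/cowsay') -> 'cowsay'
    # assuming a repo package provides that file; returns None otherwise.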
def _parse_spec_group_file(self):
pkg_specs, grp_specs, module_specs, filenames = [], [], [], []
already_loaded_comps = False # Only load this if necessary, it's slow
for name in self.names:
if '://' in name:
name = fetch_file(self.module, name)
filenames.append(name)
elif name.endswith(".rpm"):
filenames.append(name)
elif name.startswith('/'):
# like "dnf install /usr/bin/vi"
pkg_spec = self._whatprovides(name)
if pkg_spec:
pkg_specs.append(pkg_spec)
continue
elif name.startswith("@") or ('/' in name):
if not already_loaded_comps:
self.base.read_comps()
already_loaded_comps = True
grp_env_mdl_candidate = name[1:].strip()
if self.with_modules:
mdl = self.module_base._get_modules(grp_env_mdl_candidate)
if mdl[0]:
module_specs.append(grp_env_mdl_candidate)
else:
grp_specs.append(grp_env_mdl_candidate)
else:
grp_specs.append(grp_env_mdl_candidate)
else:
pkg_specs.append(name)
return pkg_specs, grp_specs, module_specs, filenames
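    # _parse_spec_group_file() buckets each requested name as follows:
    #   URL ('://' in name)     -> fetched with fetch_file(), treated as a filename
    #   '*.rpm'                 -> local filename
    #   absolute path ('/...')  -> resolved to a package name via _whatprovides()
    #   '@group' or 'name/prof' -> comps group/environment or module stream
    #   anything else           -> plain package spec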
def _update_only(self, pkgs):
not_installed = []
for pkg in pkgs:
if self._is_installed(pkg):
try:
if isinstance(to_text(pkg), text_type):
self.base.upgrade(pkg)
else:
self.base.package_upgrade(pkg)
except Exception as e:
self.module.fail_json(
msg="Error occurred attempting update_only operation: {0}".format(to_native(e)),
results=[],
rc=1,
)
else:
not_installed.append(pkg)
return not_installed
def _install_remote_rpms(self, filenames):
if int(dnf.__version__.split(".")[0]) >= 2:
pkgs = list(sorted(self.base.add_remote_rpms(list(filenames)), reverse=True))
else:
pkgs = []
try:
for filename in filenames:
pkgs.append(self.base.add_remote_rpm(filename))
except IOError as e:
if to_text("Can not load RPM file") in to_text(e):
self.module.fail_json(
msg="Error occurred attempting remote rpm install of package: {0}. {1}".format(filename, to_native(e)),
results=[],
rc=1,
)
if self.update_only:
self._update_only(pkgs)
else:
for pkg in pkgs:
try:
if self._is_newer_version_installed(self._package_dict(pkg)['nevra']):
if self.allow_downgrade:
self.base.package_install(pkg, strict=self.base.conf.strict)
else:
self.base.package_install(pkg, strict=self.base.conf.strict)
except Exception as e:
self.module.fail_json(
msg="Error occurred attempting remote rpm operation: {0}".format(to_native(e)),
results=[],
rc=1,
)
def _is_module_installed(self, module_spec):
if self.with_modules:
module_spec = module_spec.strip()
module_list, nsv = self.module_base._get_modules(module_spec)
enabled_streams = self.base._moduleContainer.getEnabledStream(nsv.name)
if enabled_streams:
if nsv.stream:
if nsv.stream in enabled_streams:
return True # The provided stream was found
else:
return False # The provided stream was not found
else:
return True # No stream provided, but module found
return False # seems like a sane default
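        # Illustrative (hypothetical module spec): for 'postgresql:13' the parsed
        # nsv has name='postgresql' and stream='13', so this returns True only if
        # the '13' stream is among the enabled streams for 'postgresql'.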
def ensure(self):
response = {
'msg': "",
'changed': False,
'results': [],
'rc': 0
}
# Accumulate failures. Package management modules install what they can
# and fail with a message about what they can't.
failure_response = {
'msg': "",
'failures': [],
'results': [],
'rc': 1
}
# Autoremove is called alone
# Jump to remove path where base.autoremove() is run
if not self.names and self.autoremove:
self.names = []
self.state = 'absent'
if self.names == ['*'] and self.state == 'latest':
try:
self.base.upgrade_all()
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occurred attempting to upgrade all packages"
self.module.fail_json(**failure_response)
else:
pkg_specs, group_specs, module_specs, filenames = self._parse_spec_group_file()
pkg_specs = [p.strip() for p in pkg_specs]
filenames = [f.strip() for f in filenames]
groups = []
environments = []
for group_spec in (g.strip() for g in group_specs):
group = self.base.comps.group_by_pattern(group_spec)
if group:
groups.append(group.id)
else:
environment = self.base.comps.environment_by_pattern(group_spec)
if environment:
environments.append(environment.id)
else:
self.module.fail_json(
msg="No group {0} available.".format(group_spec),
results=[],
)
if self.state in ['installed', 'present']:
# Install files.
self._install_remote_rpms(filenames)
for filename in filenames:
response['results'].append("Installed {0}".format(filename))
# Install modules
if module_specs and self.with_modules:
for module in module_specs:
try:
if not self._is_module_installed(module):
response['results'].append("Module {0} installed.".format(module))
self.module_base.install([module])
self.module_base.enable([module])
except dnf.exceptions.MarkingErrors as e:
failure_response['failures'].append(' '.join((module, to_native(e))))
# Install groups.
for group in groups:
try:
group_pkg_count_installed = self.base.group_install(group, dnf.const.GROUP_PACKAGE_TYPES)
if group_pkg_count_installed == 0:
response['results'].append("Group {0} already installed.".format(group))
else:
response['results'].append("Group {0} installed.".format(group))
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occurred attempting to install group: {0}".format(group)
self.module.fail_json(**failure_response)
except dnf.exceptions.Error as e:
# In dnf 2.0 if all the mandatory packages in a group do
# not install, an error is raised. We want to capture
# this but still install as much as possible.
failure_response['failures'].append(" ".join((group, to_native(e))))
for environment in environments:
try:
self.base.environment_install(environment, dnf.const.GROUP_PACKAGE_TYPES)
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occurred attempting to install environment: {0}".format(environment)
self.module.fail_json(**failure_response)
except dnf.exceptions.Error as e:
failure_response['failures'].append(" ".join((environment, to_native(e))))
if module_specs and not self.with_modules:
# This means that the group or env wasn't found in comps
self.module.fail_json(
msg="No group {0} available.".format(module_specs[0]),
results=[],
)
# Install packages.
if self.update_only:
not_installed = self._update_only(pkg_specs)
for spec in not_installed:
response['results'].append("Packages providing %s not installed due to update_only specified" % spec)
else:
for pkg_spec in pkg_specs:
install_result = self._mark_package_install(pkg_spec)
if install_result['failed']:
if install_result['msg']:
failure_response['msg'] += install_result['msg']
failure_response['failures'].append(self._sanitize_dnf_error_msg_install(pkg_spec, install_result['failure']))
else:
if install_result['msg']:
response['results'].append(install_result['msg'])
elif self.state == 'latest':
# "latest" is same as "installed" for filenames.
self._install_remote_rpms(filenames)
for filename in filenames:
response['results'].append("Installed {0}".format(filename))
# Upgrade modules
if module_specs and self.with_modules:
for module in module_specs:
try:
if self._is_module_installed(module):
response['results'].append("Module {0} upgraded.".format(module))
self.module_base.upgrade([module])
except dnf.exceptions.MarkingErrors as e:
failure_response['failures'].append(' '.join((module, to_native(e))))
for group in groups:
try:
try:
self.base.group_upgrade(group)
response['results'].append("Group {0} upgraded.".format(group))
except dnf.exceptions.CompsError:
if not self.update_only:
# If not already installed, try to install.
group_pkg_count_installed = self.base.group_install(group, dnf.const.GROUP_PACKAGE_TYPES)
if group_pkg_count_installed == 0:
response['results'].append("Group {0} already installed.".format(group))
else:
response['results'].append("Group {0} installed.".format(group))
except dnf.exceptions.Error as e:
failure_response['failures'].append(" ".join((group, to_native(e))))
for environment in environments:
try:
try:
self.base.environment_upgrade(environment)
except dnf.exceptions.CompsError:
# If not already installed, try to install.
self.base.environment_install(environment, dnf.const.GROUP_PACKAGE_TYPES)
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occurred attempting to install environment: {0}".format(environment)
except dnf.exceptions.Error as e:
failure_response['failures'].append(" ".join((environment, to_native(e))))
if self.update_only:
not_installed = self._update_only(pkg_specs)
for spec in not_installed:
response['results'].append("Packages providing %s not installed due to update_only specified" % spec)
else:
for pkg_spec in pkg_specs:
# Previously we forced base.conf.best=True here.
# However in 2.11+ there is a self.nobest option, so defer to that.
# Note, however, that just because nobest isn't set, doesn't mean that
# base.conf.best is actually true. We only force it false in
# _configure_base(), we never set it to true, and it can default to false.
# Thus, we still need to explicitly set it here.
self.base.conf.best = not self.nobest
install_result = self._mark_package_install(pkg_spec, upgrade=True)
if install_result['failed']:
if install_result['msg']:
failure_response['msg'] += install_result['msg']
failure_response['failures'].append(self._sanitize_dnf_error_msg_install(pkg_spec, install_result['failure']))
else:
if install_result['msg']:
response['results'].append(install_result['msg'])
else:
# state == absent
if filenames:
self.module.fail_json(
msg="Cannot remove paths -- please specify package name.",
results=[],
)
# Remove modules
if module_specs and self.with_modules:
for module in module_specs:
try:
if self._is_module_installed(module):
response['results'].append("Module {0} removed.".format(module))
self.module_base.remove([module])
self.module_base.disable([module])
self.module_base.reset([module])
except dnf.exceptions.MarkingErrors as e:
failure_response['failures'].append(' '.join((module, to_native(e))))
for group in groups:
try:
self.base.group_remove(group)
except dnf.exceptions.CompsError:
# Group is already uninstalled.
pass
except AttributeError:
# Group either isn't installed or wasn't marked installed at install time
# because of DNF bug
#
# This is necessary until the upstream dnf API bug is fixed where installing
# a group via the dnf API doesn't actually mark the group as installed
# https://bugzilla.redhat.com/show_bug.cgi?id=1620324
pass
for environment in environments:
try:
self.base.environment_remove(environment)
except dnf.exceptions.CompsError:
# Environment is already uninstalled.
pass
installed = self.base.sack.query().installed()
for pkg_spec in pkg_specs:
# short-circuit installed check for wildcard matching
if '*' in pkg_spec:
try:
self.base.remove(pkg_spec)
except dnf.exceptions.MarkingError as e:
is_failure, handled_remove_error = self._sanitize_dnf_error_msg_remove(pkg_spec, to_native(e))
if is_failure:
failure_response['failures'].append('{0} - {1}'.format(pkg_spec, to_native(e)))
else:
response['results'].append(handled_remove_error)
continue
installed_pkg = dnf.subject.Subject(pkg_spec).get_best_query(
sack=self.base.sack).installed().run()
for pkg in installed_pkg:
self.base.remove(str(pkg))
# Like the dnf CLI we want to allow recursive removal of dependent
# packages
self.allowerasing = True
if self.autoremove:
self.base.autoremove()
try:
# NOTE for people who go down the rabbit hole of figuring out why
# resolve() throws DepsolveError here on dep conflict, but not when
# called from the CLI: It's controlled by conf.best. When best is
# set, Hawkey will fail the goal, and resolve() in dnf.base.Base
# will throw. Otherwise if it's not set, the update (install) will
# be (almost silently) removed from the goal, and Hawkey will report
# success. Note that in this case, similar to the CLI, skip_broken
# does nothing to help here, so we don't take it into account at
# all.
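            # Illustrative (hypothetical packages, just to make the note above
            # concrete): asking for pkgA that needs libX-1 while libX-2 is
            # installed fails resolve() with DepsolveError when best is true,
            # but is silently dropped from the goal when best is false.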
if not self.base.resolve(allow_erasing=self.allowerasing):
if failure_response['failures']:
failure_response['msg'] = 'Failed to install some of the specified packages'
self.module.fail_json(**failure_response)
response['msg'] = "Nothing to do"
self.module.exit_json(**response)
else:
response['changed'] = True
# If packages got installed/removed, add them to the results.
# We do this early so we can use it for both check_mode and not.
if self.download_only:
install_action = 'Downloaded'
else:
install_action = 'Installed'
for package in self.base.transaction.install_set:
response['results'].append("{0}: {1}".format(install_action, package))
for package in self.base.transaction.remove_set:
response['results'].append("Removed: {0}".format(package))
if failure_response['failures']:
failure_response['msg'] = 'Failed to install some of the specified packages'
self.module.fail_json(**failure_response)
if self.module.check_mode:
response['msg'] = "Check mode: No changes made, but would have if not in check mode"
self.module.exit_json(**response)
try:
if self.download_only and self.download_dir and self.base.conf.destdir:
dnf.util.ensure_dir(self.base.conf.destdir)
self.base.repos.all().pkgdir = self.base.conf.destdir
self.base.download_packages(self.base.transaction.install_set)
except dnf.exceptions.DownloadError as e:
self.module.fail_json(
msg="Failed to download packages: {0}".format(to_text(e)),
results=[],
)
# Validate GPG. This is NOT done in dnf.Base (it's done in the
# upstream CLI subclass of dnf.Base)
if not self.disable_gpg_check:
for package in self.base.transaction.install_set:
fail = False
gpgres, gpgerr = self.base._sig_check_pkg(package)
if gpgres == 0: # validated successfully
continue
elif gpgres == 1: # validation failed, install cert?
try:
self.base._get_key_for_package(package)
except dnf.exceptions.Error as e:
fail = True
else: # fatal error
fail = True
if fail:
msg = 'Failed to validate GPG signature for {0}: {1}'.format(package, gpgerr)
self.module.fail_json(msg)
if self.download_only:
# No further work left to do, and the results were already updated above.
# Just return them.
self.module.exit_json(**response)
else:
tid = self.base.do_transaction()
if tid is not None:
transaction = self.base.history.old([tid])[0]
if transaction.return_code:
failure_response['failures'].append(transaction.output())
if failure_response['failures']:
failure_response['msg'] = 'Failed to install some of the specified packages'
self.module.fail_json(**failure_response)
self.module.exit_json(**response)
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occurred: {0}".format(to_native(e))
self.module.fail_json(**failure_response)
except dnf.exceptions.Error as e:
if to_text("already installed") in to_text(e):
response['changed'] = False
response['results'].append("Package already installed: {0}".format(to_native(e)))
self.module.exit_json(**response)
else:
failure_response['msg'] = "Unknown Error occurred: {0}".format(to_native(e))
self.module.fail_json(**failure_response)
def run(self):
"""The main function."""
# Check if autoremove is called correctly
if self.autoremove:
if LooseVersion(dnf.__version__) < LooseVersion('2.0.1'):
self.module.fail_json(
msg="Autoremove requires dnf>=2.0.1. Current dnf version is %s" % dnf.__version__,
results=[],
)
# Check if download_dir is called correctly
if self.download_dir:
if LooseVersion(dnf.__version__) < LooseVersion('2.6.2'):
self.module.fail_json(
msg="download_dir requires dnf>=2.6.2. Current dnf version is %s" % dnf.__version__,
results=[],
)
if self.update_cache and not self.names and not self.list:
self.base = self._base(
self.conf_file, self.disable_gpg_check, self.disablerepo,
self.enablerepo, self.installroot, self.sslverify
)
self.module.exit_json(
msg="Cache updated",
changed=False,
results=[],
rc=0
)
# Set state as installed by default
# This is not set in AnsibleModule() because the following shouldn't happen
# - dnf: autoremove=yes state=installed
if self.state is None:
self.state = 'installed'
if self.list:
self.base = self._base(
self.conf_file, self.disable_gpg_check, self.disablerepo,
self.enablerepo, self.installroot, self.sslverify
)
self.list_items(self.list)
else:
# Note: base takes a long time to run so we want to check for failure
# before running it.
if not self.download_only and not dnf.util.am_i_root():
self.module.fail_json(
msg="This command has to be run under the root user.",
results=[],
)
self.base = self._base(
self.conf_file, self.disable_gpg_check, self.disablerepo,
self.enablerepo, self.installroot, self.sslverify
)
if self.with_modules:
self.module_base = dnf.module.module_base.ModuleBase(self.base)
self.ensure()
def main():
# state=installed name=pkgspec
# state=removed name=pkgspec
# state=latest name=pkgspec
#
# informational commands:
# list=installed
# list=updates
# list=available
# list=repos
# list=pkgspec
# Extend yumdnf_argument_spec with dnf-specific features that will never be
# backported to yum because yum is now in "maintenance mode" upstream
yumdnf_argument_spec['argument_spec']['allowerasing'] = dict(default=False, type='bool')
yumdnf_argument_spec['argument_spec']['nobest'] = dict(default=False, type='bool')
yumdnf_argument_spec['argument_spec']['use_backend'] = dict(default='auto', choices=['auto', 'dnf4', 'dnf5'])
module = AnsibleModule(
**yumdnf_argument_spec
)
module_implementation = DnfModule(module)
try:
module_implementation.run()
except dnf.exceptions.RepoError as de:
module.fail_json(
msg="Failed to synchronize repodata: {0}".format(to_native(de)),
rc=1,
results=[],
changed=False
)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,376 |
dnf module failure for a package from URI with state=latest update_only=true
|
### Summary
Unable to use `latest` together with the `update_only` option when installing a package from a URL with the DNF module.
Tested with https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm and https://download.postgresql.org/pub/repos/yum/reporpms/EL-9-x86_64/pgdg-redhat-repo-latest.noarch.rpm
The states `latest` and `present` without `update_only` work as expected.
### Issue Type
Bug Report
### Component Name
dnf
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.2]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/zelenya/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.11/site-packages/ansible
ansible collection location = /home/zelenya/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.11.2 (main, May 24 2023, 00:00:00) [GCC 11.3.1 20221121 (Red Hat 11.3.1-4)] (/usr/bin/python3.11)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /etc/ansible/ansible.cfg
```
### OS / Environment
AlmaLinux release 9.2 (Turquoise Kodkod)
### Steps to Reproduce
Clean setup of AlmaLinux (using Vagrant):
```shell
# install certificates
sudo dnf -y install ca-certificates
# install ansible itself
sudo dnf -y install ansible-core
# import the EPEL GPG key
ansible localhost --become -m ansible.builtin.rpm_key -a "key='https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-9' state=present"
# install the package from a URL
ansible localhost --become -m ansible.builtin.dnf -a "name='https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm' state=present"
# update the package to the latest version
ansible localhost --become -m ansible.builtin.dnf -a "name='https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm' state=latest update_only=true"
# fails with: AttributeError: 'Package' object has no attribute 'rpartition'
```
Based on the Ansible-lint rule [package-latest](https://ansible.readthedocs.io/projects/lint/rules/package-latest/#correct-code), I'd expect the correct flow to be: install the package with *state=present*, then (if requested) update it to the latest version with *state=latest* together with *update_only=true*.
The intention is to make sure the package gets updated when the playbook is run several months after the initial installation.
### Expected Results
I expect installation with both *state=latest* and *update_only=true* to either work or report the misuse, instead of failing with an exception.
### Actual Results
```console
$ ansible localhost --become -m ansible.builtin.dnf -a "name='https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm' state=latest update_only=true"
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: AttributeError: 'Package' object has no attribute 'rpartition'
localhost | FAILED! => {
"changed": false,
"module_stderr": "Traceback (most recent call last):\n File \"<stdin>\", line 16, in <module>\n File \"/usr/lib64/python3.9/runpy.py\", line 225, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib64/python3.9/runpy.py\", line 97, in _run_module_code\n _run_code(code, mod_globals, init_globals,\n File \"/usr/lib64/python3.9/runpy.py\", line 87, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_ansible.builtin.dnf_payload_t7xrps67/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 1460, in <module>\n File \"/tmp/ansible_ansible.builtin.dnf_payload_t7xrps67/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 1449, in main\n File \"/tmp/ansible_ansible.builtin.dnf_payload_t7xrps67/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 1423, in run\n File \"/tmp/ansible_ansible.builtin.dnf_payload_t7xrps67/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 1134, in ensure\n File \"/tmp/ansible_ansible.builtin.dnf_payload_t7xrps67/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 982, in _install_remote_rpms\n File \"/tmp/ansible_ansible.builtin.dnf_payload_t7xrps67/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 949, in _update_only\n File \"/tmp/ansible_ansible.builtin.dnf_payload_t7xrps67/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 795, in _is_installed\n File \"/tmp/ansible_ansible.builtin.dnf_payload_t7xrps67/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 481, in _split_package_arch\nAttributeError: 'Package' object has no attribute 'rpartition'\n",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
$ ansible -vvvv localhost --become -m ansible.builtin.dnf -a "name='https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm' state=latest update_only=true"
ansible [core 2.14.2]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/zelenya/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.11/site-packages/ansible
ansible collection location = /home/zelenya/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.11.2 (main, May 24 2023, 00:00:00) [GCC 11.3.1 20221121 (Red Hat 11.3.1-4)] (/usr/bin/python3.11)
jinja version = 3.1.2
libyaml = True
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
Loading callback plugin minimal of type stdout, v2.0 from /usr/lib/python3.11/site-packages/ansible/plugins/callback/minimal.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: zelenya
<127.0.0.1> EXEC /bin/sh -c 'echo ~zelenya && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/zelenya/.ansible/tmp `"&& mkdir "` echo /home/zelenya/.ansible/tmp/ansible-tmp-1690820028.8473563-4456-128814682130530 `" && echo ansible-tmp-1690820028.8473563-4456-128814682130530="` echo /home/zelenya/.ansible/tmp/ansible-tmp-1690820028.8473563-4456-128814682130530 `" ) && sleep 0'
Using module file /usr/lib/python3.11/site-packages/ansible/modules/dnf.py
<127.0.0.1> PUT /home/zelenya/.ansible/tmp/ansible-local-4452uveazbbj/tmpl3btemgd TO /home/zelenya/.ansible/tmp/ansible-tmp-1690820028.8473563-4456-128814682130530/AnsiballZ_dnf.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/zelenya/.ansible/tmp/ansible-tmp-1690820028.8473563-4456-128814682130530/ /home/zelenya/.ansible/tmp/ansible-tmp-1690820028.8473563-4456-128814682130530/AnsiballZ_dnf.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-zrmfzgclfrznlmkupsgkhvmruypmywyc ; /usr/bin/python3.11 /home/zelenya/.ansible/tmp/ansible-tmp-1690820028.8473563-4456-128814682130530/AnsiballZ_dnf.py'"'"' && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/zelenya/.ansible/tmp/ansible-tmp-1690820028.8473563-4456-128814682130530/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
File "<stdin>", line 16, in <module>
File "/usr/lib64/python3.9/runpy.py", line 225, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "/usr/lib64/python3.9/runpy.py", line 97, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/usr/lib64/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py", line 1460, in <module>
File "/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py", line 1449, in main
File "/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py", line 1423, in run
File "/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py", line 1134, in ensure
File "/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py", line 982, in _install_remote_rpms
File "/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py", line 949, in _update_only
File "/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py", line 795, in _is_installed
File "/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py", line 481, in _split_package_arch
AttributeError: 'Package' object has no attribute 'rpartition'
localhost | FAILED! => {
"changed": false,
"module_stderr": "Traceback (most recent call last):\n File \"<stdin>\", line 16, in <module>\n File \"/usr/lib64/python3.9/runpy.py\", line 225, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib64/python3.9/runpy.py\", line 97, in _run_module_code\n _run_code(code, mod_globals, init_globals,\n File \"/usr/lib64/python3.9/runpy.py\", line 87, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 1460, in <module>\n File \"/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 1449, in main\n File \"/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 1423, in run\n File \"/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 1134, in ensure\n File \"/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 982, in _install_remote_rpms\n File \"/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 949, in _update_only\n File \"/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 795, in _is_installed\n File \"/tmp/ansible_ansible.builtin.dnf_payload_qxu1ivin/ansible_ansible.builtin.dnf_payload.zip/ansible/modules/dnf.py\", line 481, in _split_package_arch\nAttributeError: 'Package' object has no attribute 'rpartition'\n",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
```
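
A minimal, self-contained sketch of the failure class (the `Package` stand-in and the helper below are hypothetical, modeled on the traceback above -- they are not the module's actual code):

```python
class Package:
    """Stand-in for dnf.package.Package: an object, not a string spec."""
    def __init__(self, name):
        self.name = name


def split_package_arch(packagename):
    # Mirrors the kind of string handling the traceback points at: the real
    # _split_package_arch() assumes a str and calls .rpartition() on it.
    name, _, arch = packagename.rpartition('.')
    return (name, arch) if arch else (packagename, None)


print(split_package_arch("epel-release-9-7.el9.noarch"))  # fine: input is a str

try:
    split_package_arch(Package("epel-release"))  # Package object, not a str
except AttributeError as exc:
    print(exc)  # 'Package' object has no attribute 'rpartition'
```

With `update_only=true`, the remote-RPM path hands `Package` objects (from `base.add_remote_rpms()`) to helpers that expect string specs, which produces exactly this error.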
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81376
|
https://github.com/ansible/ansible/pull/81568
|
da63f32d59fe882bc77532e734af7348b65cb6cb
|
4ab5ecbe814fca5dcdf25fb162f098fd3162b1c4
| 2023-07-31T16:14:35Z |
python
| 2023-08-28T08:48:45Z |
test/integration/targets/dnf/tasks/dnf.yml
|
# UNINSTALL 'python2-dnf'
# The `dnf` module has the smarts to auto-install the relevant python
# bindings. To test, we will first uninstall python2-dnf (so that the tests
# on python2 will require python2-dnf)
- name: check python2-dnf with rpm
shell: rpm -q python2-dnf
register: rpm_result
ignore_errors: true
# Don't uninstall python2-dnf with the `dnf` module in case it needs to load
# some dnf python files after the package is uninstalled.
- name: uninstall python2-dnf with shell
shell: dnf -y remove python2-dnf
when: rpm_result is successful
# UNINSTALL
# With 'python2-dnf' uninstalled, the first call to 'dnf' should install
# python2-dnf.
- name: uninstall sos
dnf:
name: sos
state: removed
register: dnf_result
- name: check sos with rpm
shell: rpm -q sos
failed_when: False
register: rpm_result
- name: verify uninstallation of sos
assert:
that:
- "not dnf_result.failed | default(False)"
- "rpm_result.rc == 1"
# UNINSTALL AGAIN
- name: uninstall sos
dnf:
name: sos
state: removed
register: dnf_result
- name: verify no change on re-uninstall
assert:
that:
- "not dnf_result.changed"
# INSTALL
- name: install sos (check_mode)
dnf:
name: sos
state: present
update_cache: True
check_mode: True
register: dnf_result
- assert:
that:
- dnf_result is success
- dnf_result.results|length > 0
- "dnf_result.results[0].startswith('Installed: ')"
- name: install sos
dnf:
name: sos
state: present
update_cache: True
register: dnf_result
- name: check sos with rpm
shell: rpm -q sos
failed_when: False
register: rpm_result
- name: verify installation of sos
assert:
that:
- "not dnf_result.failed | default(False)"
- "dnf_result.changed"
- "rpm_result.rc == 0"
- name: verify dnf module outputs
assert:
that:
- "'changed' in dnf_result"
- "'results' in dnf_result"
# INSTALL AGAIN
- name: install sos again (check_mode)
dnf:
name: sos
state: present
check_mode: True
register: dnf_result
- assert:
that:
- dnf_result is not changed
- dnf_result.results|length == 0
- name: install sos again
dnf:
name: sos
state: present
register: dnf_result
- name: verify no change on second install
assert:
that:
- "not dnf_result.changed"
# Multiple packages
- name: uninstall sos and dos2unix
dnf: name=sos,dos2unix state=removed
register: dnf_result
- name: check sos with rpm
shell: rpm -q sos
failed_when: False
register: rpm_sos_result
- name: check dos2unix with rpm
shell: rpm -q dos2unix
failed_when: False
register: rpm_dos2unix_result
- name: verify packages removed
assert:
that:
- "rpm_sos_result.rc != 0"
- "rpm_dos2unix_result.rc != 0"
- name: install sos and dos2unix as comma separated
dnf: name=sos,dos2unix state=present
register: dnf_result
- name: check sos with rpm
shell: rpm -q sos
failed_when: False
register: rpm_sos_result
- name: check dos2unix with rpm
shell: rpm -q dos2unix
failed_when: False
register: rpm_dos2unix_result
- name: verify packages installed
assert:
that:
- "not dnf_result.failed | default(False)"
- "dnf_result.changed"
- "rpm_sos_result.rc == 0"
- "rpm_dos2unix_result.rc == 0"
- name: uninstall sos and dos2unix
dnf: name=sos,dos2unix state=removed
register: dnf_result
- name: install sos and dos2unix as list
dnf:
name:
- sos
- dos2unix
state: present
register: dnf_result
- name: check sos with rpm
shell: rpm -q sos
failed_when: False
register: rpm_sos_result
- name: check dos2unix with rpm
shell: rpm -q dos2unix
failed_when: False
register: rpm_dos2unix_result
- name: verify packages installed
assert:
that:
- "not dnf_result.failed | default(False)"
- "dnf_result.changed"
- "rpm_sos_result.rc == 0"
- "rpm_dos2unix_result.rc == 0"
- name: uninstall sos and dos2unix
dnf:
name: "sos,dos2unix"
state: removed
register: dnf_result
- name: install sos and dos2unix as comma separated with spaces
dnf:
name: "sos, dos2unix"
state: present
register: dnf_result
- name: check sos with rpm
shell: rpm -q sos
failed_when: False
register: rpm_sos_result
- name: check dos2unix with rpm
shell: rpm -q dos2unix
failed_when: False
register: rpm_dos2unix_result
- name: verify packages installed
assert:
that:
- "not dnf_result.failed | default(False)"
- "dnf_result.changed"
- "rpm_sos_result.rc == 0"
- "rpm_dos2unix_result.rc == 0"
- name: uninstall sos and dos2unix (check_mode)
dnf:
name:
- sos
- dos2unix
state: removed
check_mode: True
register: dnf_result
- assert:
that:
- dnf_result is success
- dnf_result.results|length >= 2
- "dnf_result.results[0].startswith('Removed: ')"
- "dnf_result.results[1].startswith('Removed: ')"
- name: uninstall sos and dos2unix
dnf:
name:
- sos
- dos2unix
state: removed
register: dnf_result
- assert:
that:
- dnf_result is changed
- name: install non-existent rpm
dnf:
name: does-not-exist
register: non_existent_rpm
ignore_errors: True
- name: check non-existent rpm install failed
assert:
that:
- non_existent_rpm is failed
# Install in installroot='/'. This should be identical to default
- name: install sos in /
dnf: name=sos state=present installroot='/'
register: dnf_result
- name: check sos with rpm in /
shell: rpm -q sos --root=/
failed_when: False
register: rpm_result
- name: verify installation of sos in /
assert:
that:
- "not dnf_result.failed | default(False)"
- "dnf_result.changed"
- "rpm_result.rc == 0"
- name: verify dnf module outputs in /
assert:
that:
- "'changed' in dnf_result"
- "'results' in dnf_result"
- name: uninstall sos in /
  dnf: name=sos state=removed installroot='/'
register: dnf_result
- name: uninstall sos for downloadonly test
dnf:
name: sos
state: absent
- name: Test download_only (check_mode)
dnf:
name: sos
state: latest
download_only: true
check_mode: true
register: dnf_result
- assert:
that:
- dnf_result is success
- "dnf_result.results[0].startswith('Downloaded: ')"
- name: Test download_only
dnf:
name: sos
state: latest
download_only: true
register: dnf_result
- name: verify download of sos (part 1 -- dnf "install" succeeded)
assert:
that:
- "dnf_result is success"
- "dnf_result is changed"
- name: uninstall sos (noop)
dnf:
name: sos
state: absent
register: dnf_result
- name: verify download of sos (part 2 -- nothing removed during uninstall)
assert:
that:
- "dnf_result is success"
- "not dnf_result is changed"
- name: uninstall sos for downloadonly/downloaddir test
dnf:
name: sos
state: absent
- name: Test download_only/download_dir
dnf:
name: sos
state: latest
download_only: true
download_dir: "/var/tmp/packages"
register: dnf_result
- name: verify dnf output
assert:
that:
- "dnf_result is success"
- "dnf_result is changed"
- command: "ls /var/tmp/packages"
register: ls_out
- name: Verify specified download_dir was used
assert:
that:
- "'sos' in ls_out.stdout"
# GROUP INSTALL
- name: install Custom Group group
dnf:
name: "@Custom Group"
state: present
register: dnf_result
- name: check dinginessentail with rpm
command: rpm -q dinginessentail
failed_when: False
register: dinginessentail_result
- name: verify installation of the group
assert:
that:
- not dnf_result is failed
- dnf_result is changed
- "'results' in dnf_result"
- dinginessentail_result.rc == 0
- name: install the group again
dnf:
name: "@Custom Group"
state: present
register: dnf_result
- name: verify nothing changed
assert:
that:
- not dnf_result is changed
- "'msg' in dnf_result"
- name: verify that landsidescalping is not installed
dnf:
name: landsidescalping
state: absent
- name: install the group again but also with a package that is not yet installed
dnf:
name:
- "@Custom Group"
- landsidescalping
state: present
register: dnf_result
- name: check landsidescalping with rpm
command: rpm -q landsidescalping
failed_when: False
register: landsidescalping_result
- name: verify landsidescalping is installed
assert:
that:
- dnf_result is changed
- "'results' in dnf_result"
- landsidescalping_result.rc == 0
- name: try to install the group again, with --check to check 'changed'
dnf:
name: "@Custom Group"
state: present
check_mode: yes
register: dnf_result
- name: verify nothing changed
assert:
that:
- not dnf_result is changed
- "'msg' in dnf_result"
- name: remove landsidescalping after test
dnf:
name: landsidescalping
state: absent
# cleanup until https://github.com/ansible/ansible/issues/27377 is resolved
- shell: 'dnf -y group install "Custom Group" && dnf -y group remove "Custom Group"'
register: shell_dnf_result
- dnf:
name: "@Custom Group"
state: absent
# GROUP UPGRADE - this will go to the same method as group install
# but through group_update - it is its invocation we're testing here
# see commit 119c9e5d6eb572c4a4800fbe8136095f9063c37b
- name: install latest Custom Group
dnf:
name: "@Custom Group"
state: latest
register: dnf_result
- name: verify installation of the group
assert:
that:
- not dnf_result is failed
- dnf_result is changed
- "'results' in dnf_result"
# cleanup until https://github.com/ansible/ansible/issues/27377 is resolved
- shell: dnf -y group install "Custom Group" && dnf -y group remove "Custom Group"
- dnf:
name: "@Custom Group"
state: absent
- name: try to install non existing group
dnf:
name: "@non-existing-group"
state: present
register: dnf_result
ignore_errors: True
- name: verify installation of the non existing group failed
assert:
that:
- "not dnf_result.changed"
- "dnf_result is failed"
- name: verify dnf module outputs
assert:
that:
- "'changed' in dnf_result"
- "'msg' in dnf_result"
- name: try to install non existing file
dnf:
name: /tmp/non-existing-1.0.0.fc26.noarch.rpm
state: present
register: dnf_result
ignore_errors: yes
- name: verify installation failed
assert:
that:
- "dnf_result is failed"
- "not dnf_result.changed"
- name: verify dnf module outputs
assert:
that:
- "'changed' in dnf_result"
- "'msg' in dnf_result"
- name: try to install from non existing url
dnf:
name: https://ci-files.testing.ansible.com/test/integration/targets/dnf/non-existing-1.0.0.fc26.noarch.rpm
state: present
register: dnf_result
ignore_errors: yes
- name: verify installation failed
assert:
that:
- "dnf_result is failed"
- "not dnf_result.changed"
- name: verify dnf module outputs
assert:
that:
- "'changed' in dnf_result"
- "'msg' in dnf_result"
# ENVIRONMENT UPGRADE
# see commit de299ef77c03a64a8f515033a79ac6b7db1bc710
- name: install Custom Environment Group
dnf:
name: "@Custom Environment Group"
state: latest
register: dnf_result
- name: check landsidescalping with rpm
command: rpm -q landsidescalping
register: landsidescalping_result
- name: verify installation of the environment
assert:
that:
- not dnf_result is failed
- dnf_result is changed
- "'results' in dnf_result"
- landsidescalping_result.rc == 0
# Fedora 28 (DNF 2) does not support this, just remove the package itself
- name: remove landsidescalping package on Fedora 28
dnf:
name: landsidescalping
state: absent
when: ansible_distribution == 'Fedora' and ansible_distribution_major_version|int <= 28
# cleanup until https://github.com/ansible/ansible/issues/27377 is resolved
- name: remove Custom Environment Group
shell: dnf -y group install "Custom Environment Group" && dnf -y group remove "Custom Environment Group"
when: not (ansible_distribution == 'Fedora' and ansible_distribution_major_version|int <= 28)
# https://github.com/ansible/ansible/issues/39704
- name: install non-existent rpm, state=latest
dnf:
name: non-existent-rpm
state: latest
ignore_errors: yes
register: dnf_result
- name: verify the result
assert:
that:
- "dnf_result is failed"
- "'non-existent-rpm' in dnf_result['failures'][0]"
- "'No package non-existent-rpm available' in dnf_result['failures'][0]"
- "'Failed to install some of the specified packages' in dnf_result['msg']"
- name: ensure sos isn't installed
dnf:
name: sos
state: absent
- name: use latest to install sos
dnf:
name: sos
state: latest
register: dnf_result
- name: verify sos was installed
assert:
that:
- dnf_result is changed
- name: uninstall sos
dnf:
name: sos
state: removed
- name: update sos only if it exists
dnf:
name: sos
state: latest
update_only: yes
register: dnf_result
- name: verify sos not installed
assert:
that:
- "not dnf_result is changed"
- name: try to install not compatible arch rpm, should fail
dnf:
name: https://ci-files.testing.ansible.com/test/integration/targets/dnf/banner-1.3.4-3.el7.ppc64le.rpm
state: present
register: dnf_result
ignore_errors: True
- name: verify that dnf failed
assert:
that:
- "not dnf_result is changed"
- "dnf_result is failed"
# setup for testing installing an RPM from local file
- set_fact:
pkg_name: noarchfake
pkg_path: '{{ repodir }}/noarchfake-1.0-1.noarch.rpm'
- name: cleanup
dnf:
name: "{{ pkg_name }}"
state: absent
# setup end
- name: install a local noarch rpm from file
dnf:
name: "{{ pkg_path }}"
state: present
disable_gpg_check: true
register: dnf_result
- name: verify installation
assert:
that:
- "dnf_result is success"
- "dnf_result is changed"
- name: install the downloaded rpm again
dnf:
name: "{{ pkg_path }}"
state: present
register: dnf_result
- name: verify installation
assert:
that:
- "dnf_result is success"
- "not dnf_result is changed"
- name: clean up
dnf:
name: "{{ pkg_name }}"
state: absent
- name: install from url
dnf:
name: "file://{{ pkg_path }}"
state: present
disable_gpg_check: true
register: dnf_result
- name: verify installation
assert:
that:
- "dnf_result is success"
- "dnf_result is changed"
- "dnf_result is not failed"
- name: verify dnf module outputs
assert:
that:
- "'changed' in dnf_result"
- "'results' in dnf_result"
- name: Create a temp RPM file which does not contain nevra information
file:
name: "/tmp/non_existent_pkg.rpm"
state: touch
- name: Try installing RPM file which does not contain nevra information
dnf:
name: "/tmp/non_existent_pkg.rpm"
state: present
register: no_nevra_info_result
ignore_errors: yes
- name: Verify RPM failed to install
assert:
that:
- "'changed' in no_nevra_info_result"
- "'msg' in no_nevra_info_result"
- name: Delete a temp RPM file
file:
name: "/tmp/non_existent_pkg.rpm"
state: absent
- name: uninstall lsof
dnf:
name: lsof
state: removed
- name: check lsof with rpm
shell: rpm -q lsof
ignore_errors: True
register: rpm_lsof_result
- name: verify lsof is uninstalled
assert:
that:
- "rpm_lsof_result is failed"
- name: create conf file that excludes lsof
copy:
content: |
[main]
exclude=lsof*
dest: '{{ remote_tmp_dir }}/test-dnf.conf'
register: test_dnf_copy
- block:
# begin test case where disable_excludes is supported
- name: Try install lsof without disable_excludes
dnf: name=lsof state=latest conf_file={{ test_dnf_copy.dest }}
register: dnf_lsof_result
ignore_errors: True
- name: verify lsof did not install because it is in exclude list
assert:
that:
- "dnf_lsof_result is failed"
- name: install lsof with disable_excludes
dnf: name=lsof state=latest disable_excludes=all conf_file={{ test_dnf_copy.dest }}
register: dnf_lsof_result_using_excludes
- name: verify lsof did install using disable_excludes=all
assert:
that:
- "dnf_lsof_result_using_excludes is success"
- "dnf_lsof_result_using_excludes is changed"
- "dnf_lsof_result_using_excludes is not failed"
always:
- name: remove exclude lsof conf file
file:
path: '{{ remote_tmp_dir }}/test-dnf.conf'
state: absent
# end test case where disable_excludes is supported
- name: Test "dnf install /usr/bin/vi"
block:
- name: Clean vim-minimal
dnf:
name: vim-minimal
state: absent
- name: Install vim-minimal by specifying "/usr/bin/vi"
dnf:
name: /usr/bin/vi
state: present
- name: Get rpm output
command: rpm -q vim-minimal
register: rpm_output
- name: Check installation was successful
assert:
that:
- "'vim-minimal' in rpm_output.stdout"
when:
- ansible_distribution == 'Fedora'
- name: Remove wildcard package that isn't installed
dnf:
name: firefox*
state: absent
register: wildcard_absent
- assert:
that:
- wildcard_absent is successful
- wildcard_absent is not changed
- name: Test removing with various package specs
block:
- name: Ensure sos is installed
dnf:
name: sos
state: present
- name: Determine version of sos
command: rpm -q --queryformat=%{version} sos
register: sos_version_command
- name: Determine release of sos
command: rpm -q --queryformat=%{release} sos
register: sos_release_command
- name: Determine arch of sos
command: rpm -q --queryformat=%{arch} sos
register: sos_arch_command
- set_fact:
sos_version: "{{ sos_version_command.stdout | trim }}"
sos_release: "{{ sos_release_command.stdout | trim }}"
sos_arch: "{{ sos_arch_command.stdout | trim }}"
# We are just trying to remove the package by specifying its spec in a
# bunch of different ways. Each "item" here is a test (a name passed, to make
# sure it matches, with how we call Hawkey in the dnf module).
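# For example (hypothetical values), with sos_version=4.5, sos_release=1.el9 and
# sos_arch=x86_64 the items below expand to specs such as:
#   sos, sos-4.5, sos-4.5-1.el9, sos-4.5-1.el9.x86_64, sos-*-1.el9, sos-4*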
- include_tasks: test_sos_removal.yml
with_items:
- sos
- sos-{{ sos_version }}
- sos-{{ sos_version }}-{{ sos_release }}
- sos-{{ sos_version }}-{{ sos_release }}.{{ sos_arch }}
- sos-*-{{ sos_release }}
- sos-{{ sos_version[0] }}*
- sos-{{ sos_version[0] }}*-{{ sos_release }}
- sos-{{ sos_version[0] }}*{{ sos_arch }}
- name: Ensure deleting a non-existing package fails correctly
dnf:
name: a-non-existent-package
state: absent
ignore_errors: true
register: nonexisting
- assert:
that:
- nonexisting is success
- nonexisting.msg == 'Nothing to do'
# running on RHEL which is --remote where .mo language files are present
# for dnf as opposed to in --docker
- when: ansible_distribution == 'RedHat'
block:
- dnf:
name: langpacks-ja
state: present
- dnf:
name: nginx-mod*
state: absent
environment:
LANG: ja_JP.UTF-8
always:
- dnf:
name: langpacks-ja
state: absent
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,553 |
ansible-galaxy install of roles with Java inner classes fails due to $ in the file name
|
### Summary
When I try to install a role that contains files with `$` in their names (such as Java inner class names), the installation fails: `$` is disallowed in file names because names could get evaluated via `os.path.expandvars`.
See https://github.com/ansible/galaxy/issues/271 for the original context
The error message is "Not a directory" (errno 20).
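For illustration, a minimal snippet showing why `$` in a file name is risky once it passes through `os.path.expandvars` (the environment variable is set by the snippet itself; the class-file name is made up):

```python
import os
import os.path

os.environ["Inner"] = "SUBSTITUTED"
name = "files/udf/Outer$Inner.class"  # Java inner classes use '$' like this
print(os.path.expandvars(name))
# -> files/udf/OuterSUBSTITUTED.class: the '$Inner' segment is treated as a
#    variable reference instead of as part of the file name.
```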
### Issue Type
Bug Report
### Component Name
role
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.5]
config file = /Users/atsalolikhin/.ansible.cfg
configured module search path = ['/Users/atsalolikhin/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/atsalolikhin/py-3.9.13/lib/python3.9/site-packages/ansible
ansible collection location = /Users/atsalolikhin/.ansible/collections:/usr/share/ansible/collections
executable location = /Users/atsalolikhin/py-3.9.13/bin/ansible
python version = 3.9.13 (main, Jul 20 2022, 17:45:35) [Clang 13.0.0 (clang-1300.0.29.3)] (/Users/atsalolikhin/py-3.9.13/bin/python3.9)
jinja version = 3.0.3
libyaml = True
```
### Configuration
N/A
### OS / Environment
N/A
### Steps to Reproduce
Add this to your `requirements.yml`:
```yaml
- src: https://gitlab.com/atsaloli/ansible-galaxy-issue-271.git
version: main
scm: git
```
and then try to install roles, with e.g., `ansible-galaxy install -r requirements.yml`
### Expected Results
I expected the role to be installed.
### Actual Results
```console
The role install failed.
$ ansible-galaxy install -r requirements.yml --force
Starting galaxy role install process
- extracting ansible-galaxy-issue-271 to /home/atsaloli/.ansible/roles/ansible-galaxy-issue-271
[WARNING]: - ansible-galaxy-issue-271 was NOT installed successfully: Could not update files in /home/atsaloli/.ansible/roles/ansible-galaxy-issue-271:
[Errno 20] Not a directory: '/home/atsaloli/.ansible/roles/ansible-galaxy-issue-271/files/udf/CMC_VWAP.class'
ERROR! - you can use --ignore-errors to skip failed roles and finish processing the list.
$
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81553
|
https://github.com/ansible/ansible/pull/81555
|
4ab5ecbe814fca5dcdf25fb162f098fd3162b1c4
|
bdaa091b33f0ebb273c6ad99b3835530ba2b5a30
| 2023-08-21T18:54:45Z |
python
| 2023-08-28T18:54:08Z |
changelogs/fragments/81555-add-warning-for-illegal-filenames-in-roles.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,553 |
ansible-galaxy install of roles with Java inner classes fails due to $ in the file name
|
### Summary
When I try to install a role that contains files with `$` in their names (such as Java inner class names), the installation fails: `$` is disallowed in file names because names could get evaluated via `os.path.expandvars`.
See https://github.com/ansible/galaxy/issues/271 for the original context
The error message is "Not a directory" (errno 20).
### Issue Type
Bug Report
### Component Name
role
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.5]
config file = /Users/atsalolikhin/.ansible.cfg
configured module search path = ['/Users/atsalolikhin/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/atsalolikhin/py-3.9.13/lib/python3.9/site-packages/ansible
ansible collection location = /Users/atsalolikhin/.ansible/collections:/usr/share/ansible/collections
executable location = /Users/atsalolikhin/py-3.9.13/bin/ansible
python version = 3.9.13 (main, Jul 20 2022, 17:45:35) [Clang 13.0.0 (clang-1300.0.29.3)] (/Users/atsalolikhin/py-3.9.13/bin/python3.9)
jinja version = 3.0.3
libyaml = True
```
### Configuration
N/A
### OS / Environment
N/A
### Steps to Reproduce
Add this to your `requirements.yml`:
```yaml
- src: https://gitlab.com/atsaloli/ansible-galaxy-issue-271.git
version: main
scm: git
```
and then try to install roles, with e.g., `ansible-galaxy install -r requirements.yml`
### Expected Results
I expected the role to be installed.
### Actual Results
```console
The role install failed.
$ ansible-galaxy install -r requirements.yml --force
Starting galaxy role install process
- extracting ansible-galaxy-issue-271 to /home/atsaloli/.ansible/roles/ansible-galaxy-issue-271
[WARNING]: - ansible-galaxy-issue-271 was NOT installed successfully: Could not update files in /home/atsaloli/.ansible/roles/ansible-galaxy-issue-271:
[Errno 20] Not a directory: '/home/atsaloli/.ansible/roles/ansible-galaxy-issue-271/files/udf/CMC_VWAP.class'
ERROR! - you can use --ignore-errors to skip failed roles and finish processing the list.
$
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81553
|
https://github.com/ansible/ansible/pull/81555
|
4ab5ecbe814fca5dcdf25fb162f098fd3162b1c4
|
bdaa091b33f0ebb273c6ad99b3835530ba2b5a30
| 2023-08-21T18:54:45Z |
python
| 2023-08-28T18:54:08Z |
lib/ansible/galaxy/role.py
|
########################################################################
#
# (C) 2015, Brian Coca <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
########################################################################
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import errno
import datetime
import os
import tarfile
import tempfile
from collections.abc import MutableSequence
from shutil import rmtree
from ansible import context
from ansible.errors import AnsibleError, AnsibleParserError
from ansible.galaxy.api import GalaxyAPI
from ansible.galaxy.user_agent import user_agent
from ansible.module_utils.common.text.converters import to_native, to_text
from ansible.module_utils.common.yaml import yaml_dump, yaml_load
from ansible.module_utils.compat.version import LooseVersion
from ansible.module_utils.urls import open_url
from ansible.playbook.role.requirement import RoleRequirement
from ansible.utils.display import Display
display = Display()
class GalaxyRole(object):
SUPPORTED_SCMS = set(['git', 'hg'])
META_MAIN = (os.path.join('meta', 'main.yml'), os.path.join('meta', 'main.yaml'))
META_INSTALL = os.path.join('meta', '.galaxy_install_info')
META_REQUIREMENTS = (os.path.join('meta', 'requirements.yml'), os.path.join('meta', 'requirements.yaml'))
ROLE_DIRS = ('defaults', 'files', 'handlers', 'meta', 'tasks', 'templates', 'vars', 'tests')
def __init__(self, galaxy, api, name, src=None, version=None, scm=None, path=None):
self._metadata = None
self._metadata_dependencies = None
self._requirements = None
self._install_info = None
self._validate_certs = not context.CLIARGS['ignore_certs']
display.debug('Validate TLS certificates: %s' % self._validate_certs)
self.galaxy = galaxy
self._api = api
self.name = name
self.version = version
self.src = src or name
self.download_url = None
self.scm = scm
self.paths = [os.path.join(x, self.name) for x in galaxy.roles_paths]
if path is not None:
if not path.endswith(os.path.join(os.path.sep, self.name)):
path = os.path.join(path, self.name)
else:
# Look for a meta/main.ya?ml inside the potential role dir in case
# the role name is the same as parent directory of the role.
#
# Example:
# ./roles/testing/testing/meta/main.yml
for meta_main in self.META_MAIN:
if os.path.exists(os.path.join(path, name, meta_main)):
path = os.path.join(path, self.name)
break
self.path = path
else:
# use the first path by default
self.path = self.paths[0]
def __repr__(self):
"""
Returns "rolename (version)" if version is not null
Returns "rolename" otherwise
"""
if self.version:
return "%s (%s)" % (self.name, self.version)
else:
return self.name
def __eq__(self, other):
return self.name == other.name
@property
def api(self):
if not isinstance(self._api, GalaxyAPI):
return self._api.api
return self._api
@property
def metadata(self):
"""
Returns role metadata
"""
if self._metadata is None:
for path in self.paths:
for meta_main in self.META_MAIN:
meta_path = os.path.join(path, meta_main)
if os.path.isfile(meta_path):
try:
with open(meta_path, 'r') as f:
self._metadata = yaml_load(f)
except Exception:
display.vvvvv("Unable to load metadata for %s" % self.name)
return False
break
return self._metadata
@property
def metadata_dependencies(self):
"""
Returns a list of dependencies from role metadata
"""
if self._metadata_dependencies is None:
self._metadata_dependencies = []
if self.metadata is not None:
self._metadata_dependencies = self.metadata.get('dependencies') or []
if not isinstance(self._metadata_dependencies, MutableSequence):
raise AnsibleParserError(
f"Expected role dependencies to be a list. Role {self} has meta/main.yml with dependencies {self._metadata_dependencies}"
)
return self._metadata_dependencies
@property
def install_info(self):
"""
Returns role install info
"""
if self._install_info is None:
info_path = os.path.join(self.path, self.META_INSTALL)
if os.path.isfile(info_path):
                try:
                    with open(info_path, 'r') as f:
                        self._install_info = yaml_load(f)
                except Exception:
                    display.vvvvv("Unable to load Galaxy install info for %s" % self.name)
                    return False
return self._install_info
@property
def _exists(self):
for path in self.paths:
if os.path.isdir(path):
return True
return False
def _write_galaxy_install_info(self):
"""
Writes a YAML-formatted file to the role's meta/ directory
(named .galaxy_install_info) which contains some information
we can use later for commands like 'list' and 'info'.
"""
info = dict(
version=self.version,
install_date=datetime.datetime.now(datetime.timezone.utc).strftime("%c"),
)
if not os.path.exists(os.path.join(self.path, 'meta')):
os.makedirs(os.path.join(self.path, 'meta'))
info_path = os.path.join(self.path, self.META_INSTALL)
with open(info_path, 'w+') as f:
try:
self._install_info = yaml_dump(info, f)
except Exception:
return False
return True
def remove(self):
"""
Removes the specified role from the roles path.
There is a sanity check to make sure there's a meta/main.yml file at this
path so the user doesn't blow away random directories.
"""
if self.metadata:
try:
rmtree(self.path)
return True
except Exception:
pass
return False
def fetch(self, role_data):
"""
Downloads the archived role to a temp location based on role data
"""
if role_data:
# first grab the file and save it to a temp location
if self.download_url is not None:
archive_url = self.download_url
elif "github_user" in role_data and "github_repo" in role_data:
archive_url = 'https://github.com/%s/%s/archive/%s.tar.gz' % (role_data["github_user"], role_data["github_repo"], self.version)
else:
archive_url = self.src
display.display("- downloading role from %s" % archive_url)
try:
url_file = open_url(archive_url, validate_certs=self._validate_certs, http_agent=user_agent())
temp_file = tempfile.NamedTemporaryFile(delete=False)
data = url_file.read()
while data:
temp_file.write(data)
data = url_file.read()
temp_file.close()
return temp_file.name
except Exception as e:
display.error(u"failed to download the file: %s" % to_text(e))
return False
def install(self):
if self.scm:
# create tar file from scm url
tmp_file = RoleRequirement.scm_archive_role(keep_scm_meta=context.CLIARGS['keep_scm_meta'], **self.spec)
elif self.src:
if os.path.isfile(self.src):
tmp_file = self.src
elif '://' in self.src:
role_data = self.src
tmp_file = self.fetch(role_data)
else:
role_data = self.api.lookup_role_by_name(self.src)
if not role_data:
raise AnsibleError("- sorry, %s was not found on %s." % (self.src, self.api.api_server))
if role_data.get('role_type') == 'APP':
# Container Role
display.warning("%s is a Container App role, and should only be installed using Ansible "
"Container" % self.name)
role_versions = self.api.fetch_role_related('versions', role_data['id'])
if not self.version:
# convert the version names to LooseVersion objects
# and sort them to get the latest version. If there
# are no versions in the list, we'll grab the head
# of the master branch
if len(role_versions) > 0:
loose_versions = [LooseVersion(a.get('name', None)) for a in role_versions]
try:
loose_versions.sort()
except TypeError:
raise AnsibleError(
'Unable to compare role versions (%s) to determine the most recent version due to incompatible version formats. '
'Please contact the role author to resolve versioning conflicts, or specify an explicit role version to '
'install.' % ', '.join([v.vstring for v in loose_versions])
)
self.version = to_text(loose_versions[-1])
elif role_data.get('github_branch', None):
self.version = role_data['github_branch']
else:
self.version = 'master'
elif self.version != 'master':
if role_versions and to_text(self.version) not in [a.get('name', None) for a in role_versions]:
raise AnsibleError("- the specified version (%s) of %s was not found in the list of available versions (%s)." % (self.version,
self.name,
role_versions))
# check if there's a source link/url for our role_version
for role_version in role_versions:
if role_version['name'] == self.version and 'source' in role_version:
self.src = role_version['source']
if role_version['name'] == self.version and 'download_url' in role_version:
self.download_url = role_version['download_url']
tmp_file = self.fetch(role_data)
else:
raise AnsibleError("No valid role data found")
if tmp_file:
display.debug("installing from %s" % tmp_file)
if not tarfile.is_tarfile(tmp_file):
raise AnsibleError("the downloaded file does not appear to be a valid tar archive.")
else:
role_tar_file = tarfile.open(tmp_file, "r")
# verify the role's meta file
meta_file = None
members = role_tar_file.getmembers()
# next find the metadata file
for member in members:
for meta_main in self.META_MAIN:
if meta_main in member.name:
# Look for parent of meta/main.yml
# Due to possibility of sub roles each containing meta/main.yml
# look for shortest length parent
meta_parent_dir = os.path.dirname(os.path.dirname(member.name))
if not meta_file:
archive_parent_dir = meta_parent_dir
meta_file = member
else:
if len(meta_parent_dir) < len(archive_parent_dir):
archive_parent_dir = meta_parent_dir
meta_file = member
if not meta_file:
raise AnsibleError("this role does not appear to have a meta/main.yml file.")
else:
try:
self._metadata = yaml_load(role_tar_file.extractfile(meta_file))
except Exception:
raise AnsibleError("this role does not appear to have a valid meta/main.yml file.")
paths = self.paths
if self.path != paths[0]:
# path can be passed though __init__
# FIXME should this be done in __init__?
paths[:0] = self.path
paths_len = len(paths)
for idx, path in enumerate(paths):
self.path = path
display.display("- extracting %s to %s" % (self.name, self.path))
try:
if os.path.exists(self.path):
if not os.path.isdir(self.path):
raise AnsibleError("the specified roles path exists and is not a directory.")
elif not context.CLIARGS.get("force", False):
raise AnsibleError("the specified role %s appears to already exist. Use --force to replace it." % self.name)
else:
# using --force, remove the old path
if not self.remove():
raise AnsibleError("%s doesn't appear to contain a role.\n please remove this directory manually if you really "
"want to put the role here." % self.path)
else:
os.makedirs(self.path)
# We strip off any higher-level directories for all of the files
# contained within the tar file here. The default is 'github_repo-target'.
# Gerrit instances, on the other hand, do not have a parent directory at all.
for member in members:
# we only extract files, and remove any relative path
# bits that might be in the file for security purposes
# and drop any containing directory, as mentioned above
if member.isreg() or member.issym():
n_member_name = to_native(member.name)
n_archive_parent_dir = to_native(archive_parent_dir)
n_parts = n_member_name.replace(n_archive_parent_dir, "", 1).split(os.sep)
n_final_parts = []
for n_part in n_parts:
# TODO if the condition triggers it produces a broken installation.
# It will create the parent directory as an empty file and will
# explode if the directory contains valid files.
# Leaving this as is since the whole module needs a rewrite.
if n_part != '..' and not n_part.startswith('~') and '$' not in n_part:
n_final_parts.append(n_part)
member.name = os.path.join(*n_final_parts)
role_tar_file.extract(member, to_native(self.path))
# write out the install info file for later use
self._write_galaxy_install_info()
break
except OSError as e:
if e.errno == errno.EACCES and idx < paths_len - 1:
continue
raise AnsibleError("Could not update files in %s: %s" % (self.path, to_native(e)))
# return the parsed yaml metadata
display.display("- %s was installed successfully" % str(self))
if not (self.src and os.path.isfile(self.src)):
try:
os.unlink(tmp_file)
except (OSError, IOError) as e:
display.warning(u"Unable to remove tmp file (%s): %s" % (tmp_file, to_text(e)))
return True
return False
@property
def spec(self):
"""
Returns role spec info
{
'scm': 'git',
'src': 'http://git.example.com/repos/repo.git',
'version': 'v1.0',
'name': 'repo'
}
"""
return dict(scm=self.scm, src=self.src, version=self.version, name=self.name)
@property
def requirements(self):
"""
Returns role requirements
"""
if self._requirements is None:
self._requirements = []
for meta_requirements in self.META_REQUIREMENTS:
meta_path = os.path.join(self.path, meta_requirements)
if os.path.isfile(meta_path):
try:
f = open(meta_path, 'r')
self._requirements = yaml_load(f)
except Exception:
display.vvvvv("Unable to load requirements for %s" % self.name)
finally:
f.close()
break
if not isinstance(self._requirements, MutableSequence):
raise AnsibleParserError(f"Expected role dependencies to be a list. Role {self} has meta/requirements.yml {self._requirements}")
return self._requirements
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,553 |
ansible-galaxy install of roles with Java inner classes fails due to $ in the file name
|
### Summary
When I try to install a role that contains files with `$` in their names (such as compiled Java inner-class files), the install fails: `$` is disallowed in extracted file names because file names could get evaluated via `os.path.expandvars`.
See https://github.com/ansible/galaxy/issues/271 for the original context
The error message is "Not a directory" (errno 20).
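For context, here is a condensed, illustrative sketch (the helper name is invented here) of the path filtering that `GalaxyRole.install()` applies to each archive member. A path part containing `$` is silently dropped, so the inner-class member gets written as a regular file at its parent directory's path, and extracting the sibling `CMC_VWAP.class` afterwards fails with ENOTDIR:
```python
import os

def sanitize_member_name(member_name, archive_parent_dir):
    # Mirrors the filtering in lib/ansible/galaxy/role.py: strip the archive's
    # top-level directory, then drop any path part that looks unsafe.
    parts = member_name.replace(archive_parent_dir, "", 1).split(os.sep)
    final_parts = [p for p in parts
                   if p != '..' and not p.startswith('~') and '$' not in p]
    return os.path.join(*final_parts)

# The part containing '$' is dropped, leaving only the parent directory path:
print(sanitize_member_name('repo-main/files/udf/CMC_VWAP$1.class', 'repo-main'))
# -> files/udf
```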
### Issue Type
Bug Report
### Component Name
role
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.5]
config file = /Users/atsalolikhin/.ansible.cfg
configured module search path = ['/Users/atsalolikhin/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/atsalolikhin/py-3.9.13/lib/python3.9/site-packages/ansible
ansible collection location = /Users/atsalolikhin/.ansible/collections:/usr/share/ansible/collections
executable location = /Users/atsalolikhin/py-3.9.13/bin/ansible
python version = 3.9.13 (main, Jul 20 2022, 17:45:35) [Clang 13.0.0 (clang-1300.0.29.3)] (/Users/atsalolikhin/py-3.9.13/bin/python3.9)
jinja version = 3.0.3
libyaml = True
```
### Configuration
N/A
### OS / Environment
N/A
### Steps to Reproduce
Add this to your `requirements.yml`:
```yaml
- src: https://gitlab.com/atsaloli/ansible-galaxy-issue-271.git
version: main
scm: git
```
and then try to install roles, with e.g., `ansible-galaxy install -r requirements.yml`
### Expected Results
I expected the role to be installed.
### Actual Results
```console
The role install failed.
$ ansible-galaxy install -r requirements.yml --force
Starting galaxy role install process
- extracting ansible-galaxy-issue-271 to /home/atsaloli/.ansible/roles/ansible-galaxy-issue-271
[WARNING]: - ansible-galaxy-issue-271 was NOT installed successfully: Could not update files in /home/atsaloli/.ansible/roles/ansible-galaxy-issue-271:
[Errno 20] Not a directory: '/home/atsaloli/.ansible/roles/ansible-galaxy-issue-271/files/udf/CMC_VWAP.class'
ERROR! - you can use --ignore-errors to skip failed roles and finish processing the list.
$
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81553
|
https://github.com/ansible/ansible/pull/81555
|
4ab5ecbe814fca5dcdf25fb162f098fd3162b1c4
|
bdaa091b33f0ebb273c6ad99b3835530ba2b5a30
| 2023-08-21T18:54:45Z |
python
| 2023-08-28T18:54:08Z |
test/integration/targets/ansible-galaxy-role/tasks/main.yml
|
- name: Install role from Galaxy (should not fail with AttributeError)
command: ansible-galaxy role install ansible.nope -vvvv --ignore-errors
- name: Archive directories
file:
state: directory
path: "{{ remote_tmp_dir }}/role.d/{{item}}"
loop:
- meta
- tasks
- name: Metadata file
copy:
content: "'galaxy_info': {}"
dest: "{{ remote_tmp_dir }}/role.d/meta/main.yml"
- name: Valid files
copy:
content: ""
dest: "{{ remote_tmp_dir }}/role.d/tasks/{{item}}"
loop:
- "main.yml"
- "valid~file.yml"
- name: Valid role archive
command: "tar cf {{ remote_tmp_dir }}/valid-role.tar {{ remote_tmp_dir }}/role.d"
- name: Invalid file
copy:
content: ""
dest: "{{ remote_tmp_dir }}/role.d/tasks/~invalid.yml"
- name: Valid requirements file
copy:
dest: valid-requirements.yml
content: "[{'src': '{{ remote_tmp_dir }}/valid-role.tar', 'name': 'valid-testrole'}]"
- name: Invalid role archive
command: "tar cf {{ remote_tmp_dir }}/invalid-role.tar {{ remote_tmp_dir }}/role.d"
- name: Invalid requirements file
copy:
dest: invalid-requirements.yml
content: "[{'src': '{{ remote_tmp_dir }}/invalid-role.tar', 'name': 'invalid-testrole'}]"
- name: Install valid role
command: ansible-galaxy install -r valid-requirements.yml
- name: Uninstall valid role
command: ansible-galaxy role remove valid-testrole
- name: Install invalid role
command: ansible-galaxy install -r invalid-requirements.yml
ignore_errors: yes
register: invalid
- assert:
that: "invalid.rc != 0"
- name: Uninstall invalid role
command: ansible-galaxy role remove invalid-testrole
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,427 |
Update the supported coverage versions in ansible-test
|
### Summary
Add support for the next version of `coverage` as needed to support Python 3.12.
### Issue Type
Feature Idea
### Component Name
`ansible-test`
|
https://github.com/ansible/ansible/issues/80427
|
https://github.com/ansible/ansible/pull/81077
|
65a96daaf40920c9fa1c8da10203f6a1cbe91ac7
|
c59bcbe627cf781dbf500b8623d24b658b2f47a6
| 2023-04-05T22:40:03Z |
python
| 2023-08-30T15:58:45Z |
changelogs/fragments/ansible-test-coverage-update.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,427 |
Update the supported coverage versions in ansible-test
|
### Summary
Add support for the next version of `coverage` as needed to support Python 3.12.
### Issue Type
Feature Idea
### Component Name
`ansible-test`
|
https://github.com/ansible/ansible/issues/80427
|
https://github.com/ansible/ansible/pull/81077
|
65a96daaf40920c9fa1c8da10203f6a1cbe91ac7
|
c59bcbe627cf781dbf500b8623d24b658b2f47a6
| 2023-04-05T22:40:03Z |
python
| 2023-08-30T15:58:45Z |
test/lib/ansible_test/_data/requirements/ansible-test.txt
|
# The test-constraints sanity test verifies this file, but changes must be made manually to keep it up-to-date.
virtualenv == 16.7.12 ; python_version < '3'
coverage == 6.5.0 ; python_version >= '3.7' and python_version <= '3.12'
coverage == 4.5.4 ; python_version >= '2.6' and python_version <= '3.6'
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,427 |
Update the supported coverage versions in ansible-test
|
### Summary
Add support for the next version of `coverage` as needed to support Python 3.12.
### Issue Type
Feature Idea
### Component Name
`ansible-test`
|
https://github.com/ansible/ansible/issues/80427
|
https://github.com/ansible/ansible/pull/81077
|
65a96daaf40920c9fa1c8da10203f6a1cbe91ac7
|
c59bcbe627cf781dbf500b8623d24b658b2f47a6
| 2023-04-05T22:40:03Z |
python
| 2023-08-30T15:58:45Z |
test/lib/ansible_test/_internal/coverage_util.py
|
"""Utility code for facilitating collection of code coverage when running tests."""
from __future__ import annotations
import dataclasses
import os
import sqlite3
import tempfile
import typing as t
from .config import (
IntegrationConfig,
SanityConfig,
TestConfig,
)
from .io import (
write_text_file,
make_dirs,
open_binary_file,
)
from .util import (
ApplicationError,
InternalError,
COVERAGE_CONFIG_NAME,
remove_tree,
sanitize_host_name,
str_to_version,
)
from .data import (
data_context,
)
from .util_common import (
ExitHandler,
intercept_python,
ResultType,
)
from .host_configs import (
DockerConfig,
HostConfig,
OriginConfig,
PosixRemoteConfig,
PosixSshConfig,
PythonConfig,
)
from .constants import (
SUPPORTED_PYTHON_VERSIONS,
CONTROLLER_PYTHON_VERSIONS,
)
from .thread import (
mutex,
)
@dataclasses.dataclass(frozen=True)
class CoverageVersion:
"""Details about a coverage version and its supported Python versions."""
coverage_version: str
schema_version: int
min_python: tuple[int, int]
max_python: tuple[int, int]
COVERAGE_VERSIONS = (
# IMPORTANT: Keep this in sync with the ansible-test.txt requirements file.
CoverageVersion('6.5.0', 7, (3, 7), (3, 12)),
CoverageVersion('4.5.4', 0, (2, 6), (3, 6)),
)
"""
This tuple specifies the coverage version to use for Python version ranges.
"""
CONTROLLER_COVERAGE_VERSION = COVERAGE_VERSIONS[0]
"""The coverage version supported on the controller."""
class CoverageError(ApplicationError):
"""Exception caused while attempting to read a coverage file."""
def __init__(self, path: str, message: str) -> None:
self.path = path
self.message = message
super().__init__(f'Error reading coverage file "{os.path.relpath(path)}": {message}')
def get_coverage_version(version: str) -> CoverageVersion:
"""Return the coverage version to use with the specified Python version."""
python_version = str_to_version(version)
supported_versions = [entry for entry in COVERAGE_VERSIONS if entry.min_python <= python_version <= entry.max_python]
if not supported_versions:
raise InternalError(f'Python {version} has no matching entry in COVERAGE_VERSIONS.')
if len(supported_versions) > 1:
raise InternalError(f'Python {version} has multiple matching entries in COVERAGE_VERSIONS.')
coverage_version = supported_versions[0]
return coverage_version
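# Illustrative examples (not part of the original file), following the
# COVERAGE_VERSIONS table above:
#   get_coverage_version('3.12') -> CoverageVersion('6.5.0', 7, (3, 7), (3, 12))
#   get_coverage_version('2.7')  -> CoverageVersion('4.5.4', 0, (2, 6), (3, 6))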
def get_coverage_file_schema_version(path: str) -> int:
"""
Return the schema version from the specified coverage file.
SQLite based files report schema version 1 or later.
JSON based files are reported as schema version 0.
An exception is raised if the file is not recognized or the schema version cannot be determined.
"""
with open_binary_file(path) as file_obj:
header = file_obj.read(16)
if header.startswith(b'!coverage.py: '):
return 0
if header.startswith(b'SQLite'):
return get_sqlite_schema_version(path)
raise CoverageError(path, f'Unknown header: {header!r}')
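# Illustrative note (not part of the original file): coverage 5.x+ data files
# are SQLite databases whose first bytes are b'SQLite', so the schema version
# is read from their coverage_schema table; coverage 4.x data files are JSON
# prefixed with b'!coverage.py: ' and are reported as schema version 0.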
def get_sqlite_schema_version(path: str) -> int:
"""Return the schema version from a SQLite based coverage file."""
try:
with sqlite3.connect(path) as connection:
cursor = connection.cursor()
cursor.execute('select version from coverage_schema')
schema_version = cursor.fetchmany(1)[0][0]
except Exception as ex:
raise CoverageError(path, f'SQLite error: {ex}') from ex
if not isinstance(schema_version, int):
raise CoverageError(path, f'Schema version is {type(schema_version)} instead of {int}: {schema_version}')
if schema_version < 1:
raise CoverageError(path, f'Schema version is out-of-range: {schema_version}')
return schema_version
def cover_python(
args: TestConfig,
python: PythonConfig,
cmd: list[str],
target_name: str,
env: dict[str, str],
capture: bool,
data: t.Optional[str] = None,
cwd: t.Optional[str] = None,
) -> tuple[t.Optional[str], t.Optional[str]]:
"""Run a command while collecting Python code coverage."""
if args.coverage:
env.update(get_coverage_environment(args, target_name, python.version))
return intercept_python(args, python, cmd, env, capture, data, cwd)
def get_coverage_platform(config: HostConfig) -> str:
"""Return the platform label for the given host config."""
if isinstance(config, PosixRemoteConfig):
platform = f'remote-{sanitize_host_name(config.name)}'
elif isinstance(config, DockerConfig):
platform = f'docker-{sanitize_host_name(config.name)}'
elif isinstance(config, PosixSshConfig):
platform = f'ssh-{sanitize_host_name(config.host)}'
elif isinstance(config, OriginConfig):
platform = 'origin' # previous versions of ansible-test used "local-{python_version}"
else:
raise NotImplementedError(f'Coverage platform label not defined for type: {type(config)}')
return platform
def get_coverage_environment(
args: TestConfig,
target_name: str,
version: str,
) -> dict[str, str]:
"""Return environment variables needed to collect code coverage."""
# unit tests, sanity tests and other special cases (localhost only)
# config is in a temporary directory
# results are in the source tree
config_file = get_coverage_config(args)
coverage_name = '='.join((args.command, target_name, get_coverage_platform(args.controller), f'python-{version}', 'coverage'))
coverage_dir = os.path.join(data_context().content.root, data_context().content.results_path, ResultType.COVERAGE.name)
coverage_file = os.path.join(coverage_dir, coverage_name)
make_dirs(coverage_dir)
if args.coverage_check:
# cause the 'coverage' module to be found, but not imported or enabled
coverage_file = ''
# Enable code coverage collection on local Python programs (this does not include Ansible modules).
# Used by the injectors to support code coverage.
# Used by the pytest unit test plugin to support code coverage.
# The COVERAGE_FILE variable is also used directly by the 'coverage' module.
env = dict(
COVERAGE_CONF=config_file,
COVERAGE_FILE=coverage_file,
)
return env
@mutex
def get_coverage_config(args: TestConfig) -> str:
"""Return the path to the coverage config, creating the config if it does not already exist."""
try:
return get_coverage_config.path # type: ignore[attr-defined]
except AttributeError:
pass
coverage_config = generate_coverage_config(args)
if args.explain:
temp_dir = '/tmp/coverage-temp-dir'
else:
temp_dir = tempfile.mkdtemp()
ExitHandler.register(lambda: remove_tree(temp_dir))
path = os.path.join(temp_dir, COVERAGE_CONFIG_NAME)
if not args.explain:
write_text_file(path, coverage_config)
get_coverage_config.path = path # type: ignore[attr-defined]
return path
def generate_coverage_config(args: TestConfig) -> str:
"""Generate code coverage configuration for tests."""
if data_context().content.collection:
coverage_config = generate_collection_coverage_config(args)
else:
coverage_config = generate_ansible_coverage_config()
return coverage_config
def generate_ansible_coverage_config() -> str:
"""Generate code coverage configuration for Ansible tests."""
coverage_config = '''
[run]
branch = True
concurrency = multiprocessing
parallel = True
omit =
*/python*/dist-packages/*
*/python*/site-packages/*
*/python*/distutils/*
*/pyshared/*
*/pytest
*/AnsiballZ_*.py
*/test/results/*
'''
return coverage_config
def generate_collection_coverage_config(args: TestConfig) -> str:
"""Generate code coverage configuration for Ansible Collection tests."""
coverage_config = '''
[run]
branch = True
concurrency = multiprocessing
parallel = True
disable_warnings =
no-data-collected
'''
if isinstance(args, IntegrationConfig):
coverage_config += '''
include =
%s/*
*/%s/*
''' % (data_context().content.root, data_context().content.collection.directory)
elif isinstance(args, SanityConfig):
# temporary work-around for import sanity test
coverage_config += '''
include =
%s/*
omit =
%s/*
''' % (data_context().content.root, os.path.join(data_context().content.root, data_context().content.results_path))
else:
coverage_config += '''
include =
%s/*
''' % data_context().content.root
return coverage_config
def self_check() -> None:
"""Check for internal errors due to incorrect code changes."""
# Verify all supported Python versions have a coverage version.
for version in SUPPORTED_PYTHON_VERSIONS:
get_coverage_version(version)
# Verify all controller Python versions are mapped to the latest coverage version.
for version in CONTROLLER_PYTHON_VERSIONS:
if get_coverage_version(version) != CONTROLLER_COVERAGE_VERSION:
raise InternalError(f'Controller Python version {version} is not mapped to the latest coverage version.')
self_check()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,523 |
RFE: Introduce short option '-J' for --ask-vault-pass
|
### Summary
I use the option `--ask-vault-pass` very often because storing the password in a file on the same host is undesirable or prohibited in some environments, so I end up typing this option a lot.
It would be nice to have a short option for this; I propose implementing '-J'.
That would save a lot of typing.
### Issue Type
Feature Idea
### Component Name
!component
ansible-core
### Additional Information
Instead of typing:
```
ansible-playbook --ask-vault-pass playbook.yml
```
It could be just:
```
ansible-playbook -J playbook.yml
```
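For illustration, a minimal argparse sketch (not the actual Ansible patch) of what such a short alias could look like:
```python
import argparse

parser = argparse.ArgumentParser(prog='ansible-playbook')
# Hypothetical '-J' short alias added next to the existing long options:
parser.add_argument('-J', '--ask-vault-password', '--ask-vault-pass',
                    dest='ask_vault_pass', action='store_true',
                    help='ask for vault password')

print(parser.parse_args(['-J']).ask_vault_pass)  # True
```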
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80523
|
https://github.com/ansible/ansible/pull/80527
|
e1e0e2709c103814308eb9ed44b9ff2289439e87
|
ae69b280ad1dfabe6238fffebbbfa400067cd204
| 2023-04-14T11:47:11Z |
python
| 2023-08-31T16:43:26Z |
changelogs/fragments/80523_-_adding_short_option_for_--ask-vault-pass.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,523 |
RFE: Introduce short option '-J' for --ask-vault-pass
|
### Summary
I use the option `--ask-vault-pass` very often because storing the password in a file on the same host is undesirable or prohibited in some environments, so I end up typing this option a lot.
It would be nice to have a short option for this; I propose implementing '-J'.
That would save a lot of typing.
### Issue Type
Feature Idea
### Component Name
!component
ansible-core
### Additional Information
Instead of typing:
```
ansible-playbook --ask-vault-pass playbook.yml
```
It could be just:
```
ansible-playbook -J playbook.yml
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80523
|
https://github.com/ansible/ansible/pull/80527
|
e1e0e2709c103814308eb9ed44b9ff2289439e87
|
ae69b280ad1dfabe6238fffebbbfa400067cd204
| 2023-04-14T11:47:11Z |
python
| 2023-08-31T16:43:26Z |
lib/ansible/cli/arguments/option_helpers.py
|
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import copy
import operator
import argparse
import os
import os.path
import sys
import time
from jinja2 import __version__ as j2_version
import ansible
from ansible import constants as C
from ansible.module_utils.common.text.converters import to_native
from ansible.module_utils.common.yaml import HAS_LIBYAML, yaml_load
from ansible.release import __version__
from ansible.utils.path import unfrackpath
#
# Special purpose OptionParsers
#
class SortingHelpFormatter(argparse.HelpFormatter):
def add_arguments(self, actions):
actions = sorted(actions, key=operator.attrgetter('option_strings'))
super(SortingHelpFormatter, self).add_arguments(actions)
class ArgumentParser(argparse.ArgumentParser):
def add_argument(self, *args, **kwargs):
action = kwargs.get('action')
help = kwargs.get('help')
if help and action in {'append', 'append_const', 'count', 'extend', PrependListAction}:
help = f'{help.rstrip(".")}. This argument may be specified multiple times.'
kwargs['help'] = help
return super().add_argument(*args, **kwargs)
class AnsibleVersion(argparse.Action):
def __call__(self, parser, namespace, values, option_string=None):
ansible_version = to_native(version(getattr(parser, 'prog')))
print(ansible_version)
parser.exit()
class UnrecognizedArgument(argparse.Action):
def __init__(self, option_strings, dest, const=True, default=None, required=False, help=None, metavar=None, nargs=0):
super(UnrecognizedArgument, self).__init__(option_strings=option_strings, dest=dest, nargs=nargs, const=const,
default=default, required=required, help=help)
def __call__(self, parser, namespace, values, option_string=None):
parser.error('unrecognized arguments: %s' % option_string)
class PrependListAction(argparse.Action):
"""A near clone of ``argparse._AppendAction``, but designed to prepend list values
instead of appending.
"""
def __init__(self, option_strings, dest, nargs=None, const=None, default=None, type=None,
choices=None, required=False, help=None, metavar=None):
if nargs == 0:
raise ValueError('nargs for append actions must be > 0; if arg '
'strings are not supplying the value to append, '
'the append const action may be more appropriate')
if const is not None and nargs != argparse.OPTIONAL:
raise ValueError('nargs must be %r to supply const' % argparse.OPTIONAL)
super(PrependListAction, self).__init__(
option_strings=option_strings,
dest=dest,
nargs=nargs,
const=const,
default=default,
type=type,
choices=choices,
required=required,
help=help,
metavar=metavar
)
def __call__(self, parser, namespace, values, option_string=None):
items = copy.copy(ensure_value(namespace, self.dest, []))
items[0:0] = values
setattr(namespace, self.dest, items)
def ensure_value(namespace, name, value):
if getattr(namespace, name, None) is None:
setattr(namespace, name, value)
return getattr(namespace, name)
#
# Callbacks to validate and normalize Options
#
def unfrack_path(pathsep=False, follow=True):
"""Turn an Option's data into a single path in Ansible locations"""
def inner(value):
if pathsep:
return [unfrackpath(x, follow=follow) for x in value.split(os.pathsep) if x]
if value == '-':
return value
return unfrackpath(value, follow=follow)
return inner
def maybe_unfrack_path(beacon):
def inner(value):
if value.startswith(beacon):
return beacon + unfrackpath(value[1:])
return value
return inner
def _git_repo_info(repo_path):
""" returns a string containing git branch, commit id and commit date """
result = None
if os.path.exists(repo_path):
# Check if the .git is a file. If it is a file, it means that we are in a submodule structure.
if os.path.isfile(repo_path):
try:
with open(repo_path) as f:
gitdir = yaml_load(f).get('gitdir')
# There is a possibility the .git file to have an absolute path.
if os.path.isabs(gitdir):
repo_path = gitdir
else:
repo_path = os.path.join(repo_path[:-4], gitdir)
except (IOError, AttributeError):
return ''
with open(os.path.join(repo_path, "HEAD")) as f:
line = f.readline().rstrip("\n")
if line.startswith("ref:"):
branch_path = os.path.join(repo_path, line[5:])
else:
branch_path = None
if branch_path and os.path.exists(branch_path):
branch = '/'.join(line.split('/')[2:])
with open(branch_path) as f:
commit = f.readline()[:10]
else:
# detached HEAD
commit = line[:10]
branch = 'detached HEAD'
branch_path = os.path.join(repo_path, "HEAD")
date = time.localtime(os.stat(branch_path).st_mtime)
if time.daylight == 0:
offset = time.timezone
else:
offset = time.altzone
result = "({0} {1}) last updated {2} (GMT {3:+04d})".format(branch, commit, time.strftime("%Y/%m/%d %H:%M:%S", date), int(offset / -36))
else:
result = ''
return result
def _gitinfo():
basedir = os.path.normpath(os.path.join(os.path.dirname(__file__), '..', '..', '..', '..'))
repo_path = os.path.join(basedir, '.git')
return _git_repo_info(repo_path)
def version(prog=None):
""" return ansible version """
if prog:
result = ["{0} [core {1}]".format(prog, __version__)]
else:
result = [__version__]
gitinfo = _gitinfo()
if gitinfo:
result[0] = "{0} {1}".format(result[0], gitinfo)
result.append(" config file = %s" % C.CONFIG_FILE)
if C.DEFAULT_MODULE_PATH is None:
cpath = "Default w/o overrides"
else:
cpath = C.DEFAULT_MODULE_PATH
result.append(" configured module search path = %s" % cpath)
result.append(" ansible python module location = %s" % ':'.join(ansible.__path__))
result.append(" ansible collection location = %s" % ':'.join(C.COLLECTIONS_PATHS))
result.append(" executable location = %s" % sys.argv[0])
result.append(" python version = %s (%s)" % (''.join(sys.version.splitlines()), to_native(sys.executable)))
result.append(" jinja version = %s" % j2_version)
result.append(" libyaml = %s" % HAS_LIBYAML)
return "\n".join(result)
#
# Functions to add pre-canned options to an OptionParser
#
def create_base_parser(prog, usage="", desc=None, epilog=None):
"""
Create an options parser for all ansible scripts
"""
# base opts
parser = ArgumentParser(
prog=prog,
formatter_class=SortingHelpFormatter,
epilog=epilog,
description=desc,
conflict_handler='resolve',
)
version_help = "show program's version number, config file location, configured module search path," \
" module location, executable location and exit"
parser.add_argument('--version', action=AnsibleVersion, nargs=0, help=version_help)
add_verbosity_options(parser)
return parser
def add_verbosity_options(parser):
"""Add options for verbosity"""
parser.add_argument('-v', '--verbose', dest='verbosity', default=C.DEFAULT_VERBOSITY, action="count",
help="Causes Ansible to print more debug messages. Adding multiple -v will increase the verbosity, "
"the builtin plugins currently evaluate up to -vvvvvv. A reasonable level to start is -vvv, "
"connection debugging might require -vvvv.")
def add_async_options(parser):
"""Add options for commands which can launch async tasks"""
parser.add_argument('-P', '--poll', default=C.DEFAULT_POLL_INTERVAL, type=int, dest='poll_interval',
help="set the poll interval if using -B (default=%s)" % C.DEFAULT_POLL_INTERVAL)
parser.add_argument('-B', '--background', dest='seconds', type=int, default=0,
help='run asynchronously, failing after X seconds (default=N/A)')
def add_basedir_options(parser):
"""Add options for commands which can set a playbook basedir"""
parser.add_argument('--playbook-dir', default=C.PLAYBOOK_DIR, dest='basedir', action='store',
help="Since this tool does not use playbooks, use this as a substitute playbook directory. "
"This sets the relative path for many features including roles/ group_vars/ etc.",
type=unfrack_path())
def add_check_options(parser):
"""Add options for commands which can run with diagnostic information of tasks"""
parser.add_argument("-C", "--check", default=False, dest='check', action='store_true',
help="don't make any changes; instead, try to predict some of the changes that may occur")
parser.add_argument("-D", "--diff", default=C.DIFF_ALWAYS, dest='diff', action='store_true',
help="when changing (small) files and templates, show the differences in those"
" files; works great with --check")
def add_connect_options(parser):
"""Add options for commands which need to connection to other hosts"""
connect_group = parser.add_argument_group("Connection Options", "control as whom and how to connect to hosts")
connect_group.add_argument('--private-key', '--key-file', default=C.DEFAULT_PRIVATE_KEY_FILE, dest='private_key_file',
help='use this file to authenticate the connection', type=unfrack_path())
connect_group.add_argument('-u', '--user', default=C.DEFAULT_REMOTE_USER, dest='remote_user',
help='connect as this user (default=%s)' % C.DEFAULT_REMOTE_USER)
connect_group.add_argument('-c', '--connection', dest='connection', default=C.DEFAULT_TRANSPORT,
help="connection type to use (default=%s)" % C.DEFAULT_TRANSPORT)
connect_group.add_argument('-T', '--timeout', default=None, type=int, dest='timeout',
help="override the connection timeout in seconds (default depends on connection)")
# ssh only
connect_group.add_argument('--ssh-common-args', default=None, dest='ssh_common_args',
help="specify common arguments to pass to sftp/scp/ssh (e.g. ProxyCommand)")
connect_group.add_argument('--sftp-extra-args', default=None, dest='sftp_extra_args',
help="specify extra arguments to pass to sftp only (e.g. -f, -l)")
connect_group.add_argument('--scp-extra-args', default=None, dest='scp_extra_args',
help="specify extra arguments to pass to scp only (e.g. -l)")
connect_group.add_argument('--ssh-extra-args', default=None, dest='ssh_extra_args',
help="specify extra arguments to pass to ssh only (e.g. -R)")
parser.add_argument_group(connect_group)
connect_password_group = parser.add_mutually_exclusive_group()
connect_password_group.add_argument('-k', '--ask-pass', default=C.DEFAULT_ASK_PASS, dest='ask_pass', action='store_true',
help='ask for connection password')
connect_password_group.add_argument('--connection-password-file', '--conn-pass-file', default=C.CONNECTION_PASSWORD_FILE, dest='connection_password_file',
help="Connection password file", type=unfrack_path(), action='store')
parser.add_argument_group(connect_password_group)
def add_fork_options(parser):
"""Add options for commands that can fork worker processes"""
parser.add_argument('-f', '--forks', dest='forks', default=C.DEFAULT_FORKS, type=int,
help="specify number of parallel processes to use (default=%s)" % C.DEFAULT_FORKS)
def add_inventory_options(parser):
"""Add options for commands that utilize inventory"""
parser.add_argument('-i', '--inventory', '--inventory-file', dest='inventory', action="append",
help="specify inventory host path or comma separated host list. --inventory-file is deprecated")
parser.add_argument('--list-hosts', dest='listhosts', action='store_true',
help='outputs a list of matching hosts; does not execute anything else')
parser.add_argument('-l', '--limit', default=C.DEFAULT_SUBSET, dest='subset',
help='further limit selected hosts to an additional pattern')
def add_meta_options(parser):
"""Add options for commands which can launch meta tasks from the command line"""
parser.add_argument('--force-handlers', default=C.DEFAULT_FORCE_HANDLERS, dest='force_handlers', action='store_true',
help="run handlers even if a task fails")
parser.add_argument('--flush-cache', dest='flush_cache', action='store_true',
help="clear the fact cache for every host in inventory")
def add_module_options(parser):
"""Add options for commands that load modules"""
module_path = C.config.get_configuration_definition('DEFAULT_MODULE_PATH').get('default', '')
parser.add_argument('-M', '--module-path', dest='module_path', default=None,
help="prepend colon-separated path(s) to module library (default=%s)" % module_path,
type=unfrack_path(pathsep=True), action=PrependListAction)
def add_output_options(parser):
"""Add options for commands which can change their output"""
parser.add_argument('-o', '--one-line', dest='one_line', action='store_true',
help='condense output')
parser.add_argument('-t', '--tree', dest='tree', default=None,
help='log output to this directory')
def add_runas_options(parser):
"""
Add options for commands which can run tasks as another user
Note that this includes the options from add_runas_prompt_options(). Only one of these
functions should be used.
"""
runas_group = parser.add_argument_group("Privilege Escalation Options", "control how and which user you become as on target hosts")
# consolidated privilege escalation (become)
runas_group.add_argument("-b", "--become", default=C.DEFAULT_BECOME, action="store_true", dest='become',
help="run operations with become (does not imply password prompting)")
runas_group.add_argument('--become-method', dest='become_method', default=C.DEFAULT_BECOME_METHOD,
help='privilege escalation method to use (default=%s)' % C.DEFAULT_BECOME_METHOD +
', use `ansible-doc -t become -l` to list valid choices.')
runas_group.add_argument('--become-user', default=None, dest='become_user', type=str,
help='run operations as this user (default=%s)' % C.DEFAULT_BECOME_USER)
parser.add_argument_group(runas_group)
add_runas_prompt_options(parser)
def add_runas_prompt_options(parser, runas_group=None):
"""
Add options for commands which need to prompt for privilege escalation credentials
Note that add_runas_options() includes these options already. Only one of the two functions
should be used.
"""
if runas_group is not None:
parser.add_argument_group(runas_group)
runas_pass_group = parser.add_mutually_exclusive_group()
runas_pass_group.add_argument('-K', '--ask-become-pass', dest='become_ask_pass', action='store_true',
default=C.DEFAULT_BECOME_ASK_PASS,
help='ask for privilege escalation password')
runas_pass_group.add_argument('--become-password-file', '--become-pass-file', default=C.BECOME_PASSWORD_FILE, dest='become_password_file',
help="Become password file", type=unfrack_path(), action='store')
parser.add_argument_group(runas_pass_group)
def add_runtask_options(parser):
"""Add options for commands that run a task"""
parser.add_argument('-e', '--extra-vars', dest="extra_vars", action="append", type=maybe_unfrack_path('@'),
help="set additional variables as key=value or YAML/JSON, if filename prepend with @", default=[])
def add_tasknoplay_options(parser):
"""Add options for commands that run a task w/o a defined play"""
parser.add_argument('--task-timeout', type=int, dest="task_timeout", action="store", default=C.TASK_TIMEOUT,
help="set task timeout limit in seconds, must be positive integer.")
def add_subset_options(parser):
"""Add options for commands which can run a subset of tasks"""
parser.add_argument('-t', '--tags', dest='tags', default=C.TAGS_RUN, action='append',
help="only run plays and tasks tagged with these values")
parser.add_argument('--skip-tags', dest='skip_tags', default=C.TAGS_SKIP, action='append',
help="only run plays and tasks whose tags do not match these values")
def add_vault_options(parser):
"""Add options for loading vault files"""
parser.add_argument('--vault-id', default=[], dest='vault_ids', action='append', type=str,
help='the vault identity to use')
base_group = parser.add_mutually_exclusive_group()
base_group.add_argument('--ask-vault-password', '--ask-vault-pass', default=C.DEFAULT_ASK_VAULT_PASS, dest='ask_vault_pass', action='store_true',
help='ask for vault password')
base_group.add_argument('--vault-password-file', '--vault-pass-file', default=[], dest='vault_password_files',
help="vault password file", type=unfrack_path(follow=False), action='append')
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,141 |
copy module does not preserve file ownership
|
### Summary
When using the `copy` module with `become: true` and `remote_src: true` the owner of the destination file is `root` rather than the owner of the source file.
### Issue Type
Feature request/Documentation report
### Component Name
copy
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.3]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/Laborratte5/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.10/site-packages/ansible
ansible collection location = /home/Laborratte5/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.10.9 (main, Dec 19 2022, 17:35:49) [GCC 12.2.0] (/usr/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /etc/ansible/ansible.cfg
```
### OS / Environment
Control Node: Arch Linux
Managed Node: Debian GNU/Linux 11 (bullseye)
### Steps to Reproduce
```yaml
- name: Copy test
become: true
ansible.builtin.copy:
src: "/home/provision/copytest.src"
remote_src: true
dest: "/home/provision/copytest.dest"
mode: preserve
```
### Expected Results
I expected the `copytest.dest` file to have owner:group of `provision:provision`.
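For illustration, a minimal sketch (not the module's actual implementation) of what preserving ownership on a remote-to-remote copy could look like; restoring another user's uid/gid requires elevated privileges, which is why `become: true` matters here:
```python
import os
import shutil

def copy_preserving_owner(src, dest):
    """Sketch only: copy contents and mode, then restore uid/gid from src."""
    shutil.copyfile(src, dest)
    shutil.copymode(src, dest)
    st = os.stat(src)
    os.chown(dest, st.st_uid, st.st_gid)  # needs root/CAP_CHOWN for foreign owners
```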
### Actual Results
```console
-rw-r--r-- 1 root root 8 5. Mär 00:22 copytest.dest
-rw-r--r-- 1 provision provision 8 5. Mär 00:22 copytest.src
changed: [test_server01] => {
"changed": true,
"checksum": "a32ac2d7196b96b084a4e9d792fa8fba7f1ed29d",
"dest": "/home/provision/copytest.dest",
"gid": 0,
"group": "root",
"md5sum": "4a251a2ef9bbf4ccc35f97aba2c9cbda",
"mode": "0644",
"owner": "root",
"size": 8,
"src": "/home/provision/copytest.src",
"state": "file",
"uid": 0
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80141
|
https://github.com/ansible/ansible/pull/81592
|
1cc5efa77b3dd8b0e24cd60175b034a6c4ca4c2c
|
0f20540e967e2baedc61618379ae31dc6761ddbb
| 2023-03-05T22:03:55Z |
python
| 2023-08-31T19:03:10Z |
lib/ansible/modules/copy.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Michael DeHaan <[email protected]>
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: copy
version_added: historical
short_description: Copy files to remote locations
description:
- The M(ansible.builtin.copy) module copies a file from the local or remote machine to a location on the remote machine.
- Use the M(ansible.builtin.fetch) module to copy files from remote locations to the local box.
- If you need variable interpolation in copied files, use the M(ansible.builtin.template) module.
Using a variable in the O(content) field will result in unpredictable output.
- For Windows targets, use the M(ansible.windows.win_copy) module instead.
options:
src:
description:
- Local path to a file to copy to the remote server.
- This can be absolute or relative.
- If path is a directory, it is copied recursively. In this case, if path ends
with "/", only inside contents of that directory are copied to destination.
Otherwise, if it does not end with "/", the directory itself with all contents
is copied. This behavior is similar to the C(rsync) command line tool.
type: path
content:
description:
- When used instead of O(src), sets the contents of a file directly to the specified value.
- Works only when O(dest) is a file. Creates the file if it does not exist.
- For advanced formatting or if O(content) contains a variable, use the
M(ansible.builtin.template) module.
type: str
version_added: '1.1'
dest:
description:
- Remote absolute path where the file should be copied to.
- If O(src) is a directory, this must be a directory too.
- If O(dest) is a non-existent path and if either O(dest) ends with "/" or O(src) is a directory, O(dest) is created.
- If O(dest) is a relative path, the starting directory is determined by the remote host.
- If O(src) and O(dest) are files, the parent directory of O(dest) is not created and the task fails if it does not already exist.
type: path
required: yes
backup:
description:
- Create a backup file including the timestamp information so you can get the original file back if you somehow clobbered it incorrectly.
type: bool
default: no
version_added: '0.7'
force:
description:
- Influence whether the remote file must always be replaced.
- If V(true), the remote file will be replaced when contents are different than the source.
- If V(false), the file will only be transferred if the destination does not exist.
type: bool
default: yes
version_added: '1.1'
mode:
description:
- The permissions of the destination file or directory.
- For those used to C(/usr/bin/chmod) remember that modes are actually octal numbers.
You must either add a leading zero so that Ansible's YAML parser knows it is an octal number
(like V(0644) or V(01777)) or quote it (like V('644') or V('1777')) so Ansible receives a string
and can do its own conversion from string into number. Giving Ansible a number without following
one of these rules will end up with a decimal number which will have unexpected results.
- As of Ansible 1.8, the mode may be specified as a symbolic mode (for example, V(u+rwx) or V(u=rw,g=r,o=r)).
- As of Ansible 2.3, the mode may also be the special string V(preserve).
- V(preserve) means that the file will be given the same permissions as the source file.
- When doing a recursive copy, see also O(directory_mode).
- If O(mode) is not specified and the destination file B(does not) exist, the default C(umask) on the system will be used
when setting the mode for the newly created file.
- If O(mode) is not specified and the destination file B(does) exist, the mode of the existing file will be used.
- Specifying O(mode) is the best way to ensure files are created with the correct permissions.
See CVE-2020-1736 for further details.
directory_mode:
description:
- Set the access permissions of newly created directories to the given mode.
Permissions on existing directories do not change.
- See O(mode) for the syntax of accepted values.
- The target system's defaults determine permissions when this parameter is not set.
type: raw
version_added: '1.5'
remote_src:
description:
- Influence whether O(src) needs to be transferred or already is present remotely.
- If V(false), it will search for O(src) on the controller node.
- If V(true) it will search for O(src) on the managed (remote) node.
- O(remote_src) supports recursive copying as of version 2.8.
- O(remote_src) only works with O(mode=preserve) as of version 2.6.
- Autodecryption of files does not work when O(remote_src=yes).
type: bool
default: no
version_added: '2.0'
follow:
description:
- This flag indicates that filesystem links in the destination, if they exist, should be followed.
type: bool
default: no
version_added: '1.8'
local_follow:
description:
- This flag indicates that filesystem links in the source tree, if they exist, should be followed.
type: bool
default: yes
version_added: '2.4'
checksum:
description:
- SHA1 checksum of the file being transferred.
- Used to validate that the copy of the file was successful.
- If this is not provided, ansible will use the local calculated checksum of the src file.
type: str
version_added: '2.5'
extends_documentation_fragment:
- decrypt
- files
- validate
- action_common_attributes
- action_common_attributes.files
- action_common_attributes.flow
notes:
- The M(ansible.builtin.copy) module recursively copy facility does not scale to lots (>hundreds) of files.
seealso:
- module: ansible.builtin.assemble
- module: ansible.builtin.fetch
- module: ansible.builtin.file
- module: ansible.builtin.template
- module: ansible.posix.synchronize
- module: ansible.windows.win_copy
author:
- Ansible Core Team
- Michael DeHaan
attributes:
action:
support: full
async:
support: none
bypass_host_loop:
support: none
check_mode:
support: full
diff_mode:
support: full
platform:
platforms: posix
safe_file_operations:
support: full
vault:
support: full
version_added: '2.2'
'''
EXAMPLES = r'''
- name: Copy file with owner and permissions
ansible.builtin.copy:
src: /srv/myfiles/foo.conf
dest: /etc/foo.conf
owner: foo
group: foo
mode: '0644'
- name: Copy file with owner and permission, using symbolic representation
ansible.builtin.copy:
src: /srv/myfiles/foo.conf
dest: /etc/foo.conf
owner: foo
group: foo
mode: u=rw,g=r,o=r
- name: Another symbolic mode example, adding some permissions and removing others
ansible.builtin.copy:
src: /srv/myfiles/foo.conf
dest: /etc/foo.conf
owner: foo
group: foo
mode: u+rw,g-wx,o-rwx
- name: Copy a new "ntp.conf" file into place, backing up the original if it differs from the copied version
ansible.builtin.copy:
src: /mine/ntp.conf
dest: /etc/ntp.conf
owner: root
group: root
mode: '0644'
backup: yes
- name: Copy a new "sudoers" file into place, after passing validation with visudo
ansible.builtin.copy:
src: /mine/sudoers
dest: /etc/sudoers
validate: /usr/sbin/visudo -csf %s
- name: Copy a "sudoers" file on the remote machine for editing
ansible.builtin.copy:
src: /etc/sudoers
dest: /etc/sudoers.edit
remote_src: yes
validate: /usr/sbin/visudo -csf %s
- name: Copy using inline content
ansible.builtin.copy:
content: '# This file was moved to /etc/other.conf'
dest: /etc/mine.conf
- name: If follow=yes, /path/to/file will be overwritten by contents of foo.conf
ansible.builtin.copy:
src: /etc/foo.conf
dest: /path/to/link # link to /path/to/file
follow: yes
- name: If follow=no, /path/to/link will become a file and be overwritten by contents of foo.conf
ansible.builtin.copy:
src: /etc/foo.conf
dest: /path/to/link # link to /path/to/file
follow: no
'''
RETURN = r'''
dest:
description: Destination file/path.
returned: success
type: str
sample: /path/to/file.txt
src:
description: Source file used for the copy on the target machine.
returned: changed
type: str
sample: /home/httpd/.ansible/tmp/ansible-tmp-1423796390.97-147729857856000/source
md5sum:
description: MD5 checksum of the file after running copy.
returned: when supported
type: str
sample: 2a5aeecc61dc98c4d780b14b330e3282
checksum:
description: SHA1 checksum of the file after running copy.
returned: success
type: str
sample: 6e642bb8dd5c2e027bf21dd923337cbb4214f827
backup_file:
description: Name of backup file created.
returned: changed and if backup=yes
type: str
sample: /path/to/file.txt.2015-02-12@22:09~
gid:
description: Group id of the file, after execution.
returned: success
type: int
sample: 100
group:
description: Group of the file, after execution.
returned: success
type: str
sample: httpd
owner:
description: Owner of the file, after execution.
returned: success
type: str
sample: httpd
uid:
description: Owner id of the file, after execution.
returned: success
type: int
sample: 100
mode:
description: Permissions of the target, after execution.
returned: success
type: str
sample: "0644"
size:
description: Size of the target, after execution.
returned: success
type: int
sample: 1220
state:
description: State of the target, after execution.
returned: success
type: str
sample: file
'''
import errno
import filecmp
import grp
import os
import os.path
import platform
import pwd
import shutil
import stat
import tempfile
import traceback
from ansible.module_utils.common.text.converters import to_bytes, to_native
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.six import PY3
# The AnsibleModule object
module = None
class AnsibleModuleError(Exception):
def __init__(self, results):
self.results = results
# Once we get run_command moved into common, we can move this into a common/files module. We can't
# until then because of the module.run_command() method. We may need to move it into
# basic::AnsibleModule() until then but if so, make it a private function so that we don't have to
# keep it for backwards compatibility later.
def clear_facls(path):
setfacl = get_bin_path('setfacl')
# FIXME "setfacl -b" is available on Linux and FreeBSD. There is "setfacl -D e" on z/OS. Others?
acl_command = [setfacl, '-b', path]
b_acl_command = [to_bytes(x) for x in acl_command]
locale = get_best_parsable_locale(module)
rc, out, err = module.run_command(b_acl_command, environ_update=dict(LANG=locale, LC_ALL=locale, LC_MESSAGES=locale))
if rc != 0:
raise RuntimeError('Error running "{0}": stdout: "{1}"; stderr: "{2}"'.format(' '.join(b_acl_command), out, err))
def split_pre_existing_dir(dirname):
'''
Return the first pre-existing directory and a list of the new directories that will be created.
'''
head, tail = os.path.split(dirname)
b_head = to_bytes(head, errors='surrogate_or_strict')
if head == '':
return ('.', [tail])
if not os.path.exists(b_head):
if head == '/':
raise AnsibleModuleError(results={'msg': "The '/' directory doesn't exist on this machine."})
(pre_existing_dir, new_directory_list) = split_pre_existing_dir(head)
else:
return (head, [tail])
new_directory_list.append(tail)
return (pre_existing_dir, new_directory_list)
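# Illustrative example (not part of the original file), assuming only /srv
# already exists on the target:
#   split_pre_existing_dir('/srv/www/app/logs')
#   -> ('/srv', ['www', 'app', 'logs'])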
def adjust_recursive_directory_permissions(pre_existing_dir, new_directory_list, module, directory_args, changed):
'''
Walk the new directories list and make sure that permissions are as we would expect
'''
if new_directory_list:
working_dir = os.path.join(pre_existing_dir, new_directory_list.pop(0))
directory_args['path'] = working_dir
changed = module.set_fs_attributes_if_different(directory_args, changed)
changed = adjust_recursive_directory_permissions(working_dir, new_directory_list, module, directory_args, changed)
return changed
def chown_recursive(path, module):
changed = False
owner = module.params['owner']
group = module.params['group']
if owner is not None:
if not module.check_mode:
for dirpath, dirnames, filenames in os.walk(path):
owner_changed = module.set_owner_if_different(dirpath, owner, False)
if owner_changed is True:
changed = owner_changed
for dir in [os.path.join(dirpath, d) for d in dirnames]:
owner_changed = module.set_owner_if_different(dir, owner, False)
if owner_changed is True:
changed = owner_changed
for file in [os.path.join(dirpath, f) for f in filenames]:
owner_changed = module.set_owner_if_different(file, owner, False)
if owner_changed is True:
changed = owner_changed
else:
uid = pwd.getpwnam(owner).pw_uid
for dirpath, dirnames, filenames in os.walk(path):
owner_changed = (os.stat(dirpath).st_uid != uid)
if owner_changed is True:
changed = owner_changed
for dir in [os.path.join(dirpath, d) for d in dirnames]:
owner_changed = (os.stat(dir).st_uid != uid)
if owner_changed is True:
changed = owner_changed
for file in [os.path.join(dirpath, f) for f in filenames]:
owner_changed = (os.stat(file).st_uid != uid)
if owner_changed is True:
changed = owner_changed
if group is not None:
if not module.check_mode:
for dirpath, dirnames, filenames in os.walk(path):
group_changed = module.set_group_if_different(dirpath, group, False)
if group_changed is True:
changed = group_changed
for dir in [os.path.join(dirpath, d) for d in dirnames]:
group_changed = module.set_group_if_different(dir, group, False)
if group_changed is True:
changed = group_changed
for file in [os.path.join(dirpath, f) for f in filenames]:
group_changed = module.set_group_if_different(file, group, False)
if group_changed is True:
changed = group_changed
else:
gid = grp.getgrnam(group).gr_gid
for dirpath, dirnames, filenames in os.walk(path):
group_changed = (os.stat(dirpath).st_gid != gid)
if group_changed is True:
changed = group_changed
for dir in [os.path.join(dirpath, d) for d in dirnames]:
group_changed = (os.stat(dir).st_gid != gid)
if group_changed is True:
changed = group_changed
for file in [os.path.join(dirpath, f) for f in filenames]:
group_changed = (os.stat(file).st_gid != gid)
if group_changed is True:
changed = group_changed
return changed
def copy_diff_files(src, dest, module):
"""Copy files that are different between `src` directory and `dest` directory."""
changed = False
owner = module.params['owner']
group = module.params['group']
local_follow = module.params['local_follow']
diff_files = filecmp.dircmp(src, dest).diff_files
if len(diff_files):
changed = True
if not module.check_mode:
for item in diff_files:
src_item_path = os.path.join(src, item)
dest_item_path = os.path.join(dest, item)
b_src_item_path = to_bytes(src_item_path, errors='surrogate_or_strict')
b_dest_item_path = to_bytes(dest_item_path, errors='surrogate_or_strict')
if os.path.islink(b_src_item_path) and local_follow is False:
linkto = os.readlink(b_src_item_path)
os.symlink(linkto, b_dest_item_path)
else:
shutil.copyfile(b_src_item_path, b_dest_item_path)
shutil.copymode(b_src_item_path, b_dest_item_path)
if owner is not None:
module.set_owner_if_different(b_dest_item_path, owner, False)
if group is not None:
module.set_group_if_different(b_dest_item_path, group, False)
changed = True
return changed
def copy_left_only(src, dest, module):
"""Copy files that exist in `src` directory only to the `dest` directory."""
changed = False
owner = module.params['owner']
group = module.params['group']
local_follow = module.params['local_follow']
left_only = filecmp.dircmp(src, dest).left_only
if len(left_only):
changed = True
if not module.check_mode:
for item in left_only:
src_item_path = os.path.join(src, item)
dest_item_path = os.path.join(dest, item)
b_src_item_path = to_bytes(src_item_path, errors='surrogate_or_strict')
b_dest_item_path = to_bytes(dest_item_path, errors='surrogate_or_strict')
if os.path.islink(b_src_item_path) and os.path.isdir(b_src_item_path) and local_follow is True:
shutil.copytree(b_src_item_path, b_dest_item_path, symlinks=not local_follow)
chown_recursive(b_dest_item_path, module)
if os.path.islink(b_src_item_path) and os.path.isdir(b_src_item_path) and local_follow is False:
linkto = os.readlink(b_src_item_path)
os.symlink(linkto, b_dest_item_path)
if os.path.islink(b_src_item_path) and os.path.isfile(b_src_item_path) and local_follow is True:
shutil.copyfile(b_src_item_path, b_dest_item_path)
if owner is not None:
module.set_owner_if_different(b_dest_item_path, owner, False)
if group is not None:
module.set_group_if_different(b_dest_item_path, group, False)
if os.path.islink(b_src_item_path) and os.path.isfile(b_src_item_path) and local_follow is False:
linkto = os.readlink(b_src_item_path)
os.symlink(linkto, b_dest_item_path)
if not os.path.islink(b_src_item_path) and os.path.isfile(b_src_item_path):
shutil.copyfile(b_src_item_path, b_dest_item_path)
shutil.copymode(b_src_item_path, b_dest_item_path)
if owner is not None:
module.set_owner_if_different(b_dest_item_path, owner, False)
if group is not None:
module.set_group_if_different(b_dest_item_path, group, False)
if not os.path.islink(b_src_item_path) and os.path.isdir(b_src_item_path):
shutil.copytree(b_src_item_path, b_dest_item_path, symlinks=not local_follow)
chown_recursive(b_dest_item_path, module)
changed = True
return changed
def copy_common_dirs(src, dest, module):
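    """Copy differing and new files in directories common to `src` and `dest`, recursing into subdirectories."""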
changed = False
common_dirs = filecmp.dircmp(src, dest).common_dirs
for item in common_dirs:
src_item_path = os.path.join(src, item)
dest_item_path = os.path.join(dest, item)
b_src_item_path = to_bytes(src_item_path, errors='surrogate_or_strict')
b_dest_item_path = to_bytes(dest_item_path, errors='surrogate_or_strict')
diff_files_changed = copy_diff_files(b_src_item_path, b_dest_item_path, module)
left_only_changed = copy_left_only(b_src_item_path, b_dest_item_path, module)
if diff_files_changed or left_only_changed:
changed = True
# recurse into subdirectory
changed = copy_common_dirs(os.path.join(src, item), os.path.join(dest, item), module) or changed
return changed
def main():
global module
module = AnsibleModule(
# not checking because of daisy chain to file module
argument_spec=dict(
src=dict(type='path'),
_original_basename=dict(type='str'), # used to handle 'dest is a directory' via template, a slight hack
content=dict(type='str', no_log=True),
dest=dict(type='path', required=True),
backup=dict(type='bool', default=False),
force=dict(type='bool', default=True),
validate=dict(type='str'),
directory_mode=dict(type='raw'),
remote_src=dict(type='bool'),
local_follow=dict(type='bool'),
checksum=dict(type='str'),
follow=dict(type='bool', default=False),
),
add_file_common_args=True,
supports_check_mode=True,
)
src = module.params['src']
b_src = to_bytes(src, errors='surrogate_or_strict')
dest = module.params['dest']
# Make sure we always have a directory component for later processing
if os.path.sep not in dest:
dest = '.{0}{1}'.format(os.path.sep, dest)
b_dest = to_bytes(dest, errors='surrogate_or_strict')
backup = module.params['backup']
force = module.params['force']
_original_basename = module.params.get('_original_basename', None)
validate = module.params.get('validate', None)
follow = module.params['follow']
local_follow = module.params['local_follow']
mode = module.params['mode']
owner = module.params['owner']
group = module.params['group']
remote_src = module.params['remote_src']
checksum = module.params['checksum']
if not os.path.exists(b_src):
module.fail_json(msg="Source %s not found" % (src))
if not os.access(b_src, os.R_OK):
module.fail_json(msg="Source %s not readable" % (src))
# Preserve is usually handled in the action plugin but mode + remote_src has to be done on the
# remote host
if module.params['mode'] == 'preserve':
module.params['mode'] = '0%03o' % stat.S_IMODE(os.stat(b_src).st_mode)
mode = module.params['mode']
changed = False
checksum_dest = None
checksum_src = None
md5sum_src = None
if os.path.isfile(src):
try:
checksum_src = module.sha1(src)
except (OSError, IOError) as e:
module.warn("Unable to calculate src checksum, assuming change: %s" % to_native(e))
try:
# Backwards compat only. This will be None in FIPS mode
md5sum_src = module.md5(src)
except ValueError:
pass
elif remote_src and not os.path.isdir(src):
module.fail_json("Cannot copy invalid source '%s': not a file" % to_native(src))
if checksum and checksum_src != checksum:
module.fail_json(
msg='Copied file does not match the expected checksum. Transfer failed.',
checksum=checksum_src,
expected_checksum=checksum
)
# Special handling for recursive copy - create intermediate dirs
if dest.endswith(os.sep):
if _original_basename:
dest = os.path.join(dest, _original_basename)
b_dest = to_bytes(dest, errors='surrogate_or_strict')
dirname = os.path.dirname(dest)
b_dirname = to_bytes(dirname, errors='surrogate_or_strict')
if not os.path.exists(b_dirname):
try:
(pre_existing_dir, new_directory_list) = split_pre_existing_dir(dirname)
except AnsibleModuleError as e:
                e.results['msg'] += ' Could not copy to {0}'.format(dest)
module.fail_json(**e.results)
if module.check_mode:
module.exit_json(msg='dest directory %s would be created' % dirname, changed=True, src=src)
os.makedirs(b_dirname)
changed = True
directory_args = module.load_file_common_arguments(module.params)
directory_mode = module.params["directory_mode"]
if directory_mode is not None:
directory_args['mode'] = directory_mode
else:
directory_args['mode'] = None
adjust_recursive_directory_permissions(pre_existing_dir, new_directory_list, module, directory_args, changed)
if os.path.isdir(b_dest):
basename = os.path.basename(src)
if _original_basename:
basename = _original_basename
dest = os.path.join(dest, basename)
b_dest = to_bytes(dest, errors='surrogate_or_strict')
if os.path.exists(b_dest):
if os.path.islink(b_dest) and follow:
b_dest = os.path.realpath(b_dest)
dest = to_native(b_dest, errors='surrogate_or_strict')
if not force:
module.exit_json(msg="file already exists", src=src, dest=dest, changed=False)
if os.access(b_dest, os.R_OK) and os.path.isfile(b_dest):
checksum_dest = module.sha1(dest)
else:
if not os.path.exists(os.path.dirname(b_dest)):
try:
# os.path.exists() can return false in some
# circumstances where the directory does not have
# the execute bit for the current user set, in
# which case the stat() call will raise an OSError
os.stat(os.path.dirname(b_dest))
except OSError as e:
if "permission denied" in to_native(e).lower():
module.fail_json(msg="Destination directory %s is not accessible" % (os.path.dirname(dest)))
module.fail_json(msg="Destination directory %s does not exist" % (os.path.dirname(dest)))
if not os.access(os.path.dirname(b_dest), os.W_OK) and not module.params['unsafe_writes']:
module.fail_json(msg="Destination %s not writable" % (os.path.dirname(dest)))
backup_file = None
if checksum_src != checksum_dest or os.path.islink(b_dest):
if not module.check_mode:
try:
if backup:
if os.path.exists(b_dest):
backup_file = module.backup_local(dest)
# allow for conversion from symlink.
if os.path.islink(b_dest):
os.unlink(b_dest)
open(b_dest, 'w').close()
if validate:
# if we have a mode, make sure we set it on the temporary
# file source as some validations may require it
if mode is not None:
module.set_mode_if_different(src, mode, False)
if owner is not None:
module.set_owner_if_different(src, owner, False)
if group is not None:
module.set_group_if_different(src, group, False)
if "%s" not in validate:
module.fail_json(msg="validate must contain %%s: %s" % (validate))
(rc, out, err) = module.run_command(validate % src)
if rc != 0:
module.fail_json(msg="failed to validate", exit_status=rc, stdout=out, stderr=err)
b_mysrc = b_src
if remote_src and os.path.isfile(b_src):
dummy, b_mysrc = tempfile.mkstemp(dir=os.path.dirname(b_dest))
shutil.copyfile(b_src, b_mysrc)
try:
shutil.copystat(b_src, b_mysrc)
except OSError as err:
if err.errno == errno.ENOSYS and mode == "preserve":
module.warn("Unable to copy stats {0}".format(to_native(b_src)))
else:
raise
# might be needed below
if PY3 and hasattr(os, 'listxattr'):
try:
src_has_acls = 'system.posix_acl_access' in os.listxattr(src)
except Exception as e:
# assume unwanted ACLs by default
src_has_acls = True
# at this point we should always have tmp file
module.atomic_move(b_mysrc, dest, unsafe_writes=module.params['unsafe_writes'])
if PY3 and hasattr(os, 'listxattr') and platform.system() == 'Linux' and not remote_src:
# atomic_move used above to copy src into dest might, in some cases,
# use shutil.copy2 which in turn uses shutil.copystat.
# Since Python 3.3, shutil.copystat copies file extended attributes:
# https://docs.python.org/3/library/shutil.html#shutil.copystat
# os.listxattr (along with others) was added to handle the operation.
# This means that on Python 3 we are copying the extended attributes which includes
# the ACLs on some systems - further limited to Linux as the documentation above claims
# that the extended attributes are copied only on Linux. Also, os.listxattr is only
# available on Linux.
# If not remote_src, then the file was copied from the controller. In that
# case, any filesystem ACLs are artifacts of the copy rather than preservation
# of existing attributes. Get rid of them:
if src_has_acls:
                        # FIXME If dest has any default ACLs, they are not applied to src now because
# they were overridden by copystat. Should/can we do anything about this?
# 'system.posix_acl_default' in os.listxattr(os.path.dirname(b_dest))
try:
clear_facls(dest)
except ValueError as e:
if 'setfacl' in to_native(e):
# No setfacl so we're okay. The controller couldn't have set a facl
# without the setfacl command
pass
else:
raise
except RuntimeError as e:
# setfacl failed.
if 'Operation not supported' in to_native(e):
# The file system does not support ACLs.
pass
else:
raise
except (IOError, OSError):
module.fail_json(msg="failed to copy: %s to %s" % (src, dest), traceback=traceback.format_exc())
changed = True
# If neither have checksums, both src and dest are directories.
if checksum_src is None and checksum_dest is None:
if remote_src and os.path.isdir(module.params['src']):
b_src = to_bytes(module.params['src'], errors='surrogate_or_strict')
b_dest = to_bytes(module.params['dest'], errors='surrogate_or_strict')
if src.endswith(os.path.sep) and os.path.isdir(module.params['dest']):
diff_files_changed = copy_diff_files(b_src, b_dest, module)
left_only_changed = copy_left_only(b_src, b_dest, module)
common_dirs_changed = copy_common_dirs(b_src, b_dest, module)
owner_group_changed = chown_recursive(b_dest, module)
if diff_files_changed or left_only_changed or common_dirs_changed or owner_group_changed:
changed = True
if src.endswith(os.path.sep) and not os.path.exists(module.params['dest']):
b_basename = to_bytes(os.path.basename(src), errors='surrogate_or_strict')
b_dest = to_bytes(os.path.join(b_dest, b_basename), errors='surrogate_or_strict')
b_src = to_bytes(os.path.join(module.params['src'], ""), errors='surrogate_or_strict')
if not module.check_mode:
shutil.copytree(b_src, b_dest, symlinks=not local_follow)
chown_recursive(dest, module)
changed = True
if not src.endswith(os.path.sep) and os.path.isdir(module.params['dest']):
b_basename = to_bytes(os.path.basename(src), errors='surrogate_or_strict')
b_dest = to_bytes(os.path.join(b_dest, b_basename), errors='surrogate_or_strict')
b_src = to_bytes(os.path.join(module.params['src'], ""), errors='surrogate_or_strict')
if not module.check_mode and not os.path.exists(b_dest):
shutil.copytree(b_src, b_dest, symlinks=not local_follow)
changed = True
chown_recursive(dest, module)
if module.check_mode and not os.path.exists(b_dest):
changed = True
if os.path.exists(b_dest):
diff_files_changed = copy_diff_files(b_src, b_dest, module)
left_only_changed = copy_left_only(b_src, b_dest, module)
common_dirs_changed = copy_common_dirs(b_src, b_dest, module)
owner_group_changed = chown_recursive(b_dest, module)
if diff_files_changed or left_only_changed or common_dirs_changed or owner_group_changed:
changed = True
if not src.endswith(os.path.sep) and not os.path.exists(module.params['dest']):
b_basename = to_bytes(os.path.basename(module.params['src']), errors='surrogate_or_strict')
b_dest = to_bytes(os.path.join(b_dest, b_basename), errors='surrogate_or_strict')
if not module.check_mode and not os.path.exists(b_dest):
os.makedirs(b_dest)
changed = True
b_src = to_bytes(os.path.join(module.params['src'], ""), errors='surrogate_or_strict')
diff_files_changed = copy_diff_files(b_src, b_dest, module)
left_only_changed = copy_left_only(b_src, b_dest, module)
common_dirs_changed = copy_common_dirs(b_src, b_dest, module)
owner_group_changed = chown_recursive(b_dest, module)
if module.check_mode and not os.path.exists(b_dest):
changed = True
res_args = dict(
dest=dest, src=src, md5sum=md5sum_src, checksum=checksum_src, changed=changed
)
if backup_file:
res_args['backup_file'] = backup_file
file_args = module.load_file_common_arguments(module.params, path=dest)
res_args['changed'] = module.set_fs_attributes_if_different(file_args, res_args['changed'])
module.exit_json(**res_args)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,613 |
Possible multiprocessing semaphore leak on now-unused `_LOCK`
|
### Summary
I'm in the middle of writing some Python commands around a bunch of playbooks I have. One of the commands interacts with YAML files that potentially include Ansible Vault encrypted strings and generates passwords, so I'm taking a shortcut and importing various classes and functions from the `ansible` package for use in my own code.
One command imports `ansible.utils.encrypt:random_password` and then eventually calls `os.execlp`. When the exec call happens, I get a warning from Python:
```
.../3.11.4/lib/python3.11/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
```
I managed to track the culprit down to this object: https://github.com/ansible/ansible/blob/devel/lib/ansible/utils/encrypt.py#L46
which, after a quick search, no longer appears to be used anywhere.
I've updated my code with the following workaround to get rid of the issue. Ideally, this lock would either be removed or relocated out of the global scope.
```
import ansible.utils.encrypt
del ansible.utils.encrypt._LOCK
```
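For reference, here is a minimal sketch of the "relocate out of the global scope" option — a lazily created lock behind a hypothetical `_get_lock()` helper. This is purely illustrative; `_get_lock` does not exist in `ansible.utils.encrypt`, and the linked changelog fragment suggests the unused lock was simply removed instead:
```python
import multiprocessing
import threading

_LOCK = None  # allocated on first use instead of at import time
_LOCK_GUARD = threading.Lock()


def _get_lock():
    """Return the shared multiprocessing lock, creating it lazily.

    Importing the module no longer allocates a semaphore, so a process
    that only imports it and then calls os.exec* has nothing to leak.
    """
    global _LOCK
    with _LOCK_GUARD:
        if _LOCK is None:
            _LOCK = multiprocessing.Lock()
    return _LOCK
```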
### Issue Type
Bug Report
### Component Name
ansible utils
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.3]
config file = .../ansible.cfg
configured module search path = ['~/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = .../lib/python3.11/site-packages/ansible
ansible collection location = ~/.ansible/collections:/usr/share/ansible/collections
executable location = .../bin/ansible
python version = 3.11.4 (main, Aug 31 2023, 14:29:13) [Clang 14.0.3 (clang-1403.0.22.14.1)] (.../bin/python3.11)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = ./ansible.cfg
DEFAULT_FILTER_PLUGIN_PATH(./ansible.cfg) = ['./ansible/filters']
DEFAULT_ROLES_PATH(/opt/dev/superna/seed/ansible.cfg) = ['./ansible/roles']
EDITOR(env: EDITOR) = mvim -f
```
### OS / Environment
macOS 13.4.1
### Steps to Reproduce
```python
import os
import ansible.utils.encrypt
os.execlp('true', 'true')
```
### Expected Results
Expected no warnings.
### Actual Results
```console
.../lib/python3.11/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81613
|
https://github.com/ansible/ansible/pull/81614
|
786a8abee6b9e5af0ee557f4c794ea46a33e8922
|
24aac5036934f65e2cb0b0e1a30f306c6b1f24e6
| 2023-08-31T19:36:33Z |
python
| 2023-09-05T15:02:56Z |
changelogs/fragments/81613-remove-unusued-private-lock.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,613 |
Possible multiprocessing semaphore leak on now-unused `_LOCK`
|
### Summary
I'm in the middle of writing some Python commands around a bunch of playbooks I have. One of the commands interacts with YAML files that potentially include Ansible Vault encrypted strings and generates passwords, so I'm taking a shortcut and importing various classes and functions from the `ansible` package for use in my own code.
One command imports `ansible.utils.encrypt:random_password` and then eventually calls `os.execlp`. When the exec call happens, I get a warning from Python:
```
.../3.11.4/lib/python3.11/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
```
I managed to track the culprit down to this object: https://github.com/ansible/ansible/blob/devel/lib/ansible/utils/encrypt.py#L46
which, after a quick search, no longer appears to be used anywhere.
I've updated my code with the following workaround to get rid of the issue. Ideally, this lock would either be removed or relocated out of the global scope.
```
import ansible.utils.encrypt
del ansible.utils.encrypt._LOCK
```
### Issue Type
Bug Report
### Component Name
ansible utils
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.3]
config file = .../ansible.cfg
configured module search path = ['~/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = .../lib/python3.11/site-packages/ansible
ansible collection location = ~/.ansible/collections:/usr/share/ansible/collections
executable location = .../bin/ansible
python version = 3.11.4 (main, Aug 31 2023, 14:29:13) [Clang 14.0.3 (clang-1403.0.22.14.1)] (.../bin/python3.11)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = ./ansible.cfg
DEFAULT_FILTER_PLUGIN_PATH(./ansible.cfg) = ['./ansible/filters']
DEFAULT_ROLES_PATH(/opt/dev/superna/seed/ansible.cfg) = ['./ansible/roles']
EDITOR(env: EDITOR) = mvim -f
```
### OS / Environment
macOS 13.4.1
### Steps to Reproduce
```python
import os
import ansible.utils.encrypt
os.execlp('true', 'true')
```
### Expected Results
Expected no warnings.
### Actual Results
```console
.../lib/python3.11/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81613
|
https://github.com/ansible/ansible/pull/81614
|
786a8abee6b9e5af0ee557f4c794ea46a33e8922
|
24aac5036934f65e2cb0b0e1a30f306c6b1f24e6
| 2023-08-31T19:36:33Z |
python
| 2023-09-05T15:02:56Z |
lib/ansible/utils/encrypt.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import multiprocessing
import random
import re
import string
import sys
from collections import namedtuple
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleAssertionError
from ansible.module_utils.six import text_type
from ansible.module_utils.common.text.converters import to_text, to_bytes
from ansible.utils.display import Display
PASSLIB_E = CRYPT_E = None
HAS_CRYPT = PASSLIB_AVAILABLE = False
try:
import passlib
import passlib.hash
from passlib.utils.handlers import HasRawSalt, PrefixWrapper
try:
from passlib.utils.binary import bcrypt64
except ImportError:
from passlib.utils import bcrypt64
PASSLIB_AVAILABLE = True
except Exception as e:
PASSLIB_E = e
try:
import crypt
HAS_CRYPT = True
except Exception as e:
CRYPT_E = e
display = Display()
__all__ = ['do_encrypt']
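# NOTE: this module-level lock appears to be unused; because it is created at
# import time, it allocates a multiprocessing semaphore that the Python
# resource tracker can report as leaked when the importing process calls one
# of the os.exec* functions.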
_LOCK = multiprocessing.Lock()
DEFAULT_PASSWORD_LENGTH = 20
def random_password(length=DEFAULT_PASSWORD_LENGTH, chars=C.DEFAULT_PASSWORD_CHARS, seed=None):
'''Return a random password string of length containing only chars
:kwarg length: The number of characters in the new password. Defaults to 20.
:kwarg chars: The characters to choose from. The default is all ascii
letters, ascii digits, and these symbols ``.,:-_``
'''
if not isinstance(chars, text_type):
raise AnsibleAssertionError('%s (%s) is not a text_type' % (chars, type(chars)))
if seed is None:
random_generator = random.SystemRandom()
else:
random_generator = random.Random(seed)
return u''.join(random_generator.choice(chars) for dummy in range(length))
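# Example: random_password(length=12, seed='reproducible') always returns the
# same 12-character password for a given seed, while the default (seed=None)
# draws from random.SystemRandom for non-deterministic output.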
def random_salt(length=8):
"""Return a text string suitable for use as a salt for the hash functions we use to encrypt passwords.
"""
# Note passlib salt values must be pure ascii so we can't let the user
# configure this
salt_chars = string.ascii_letters + string.digits + u'./'
return random_password(length=length, chars=salt_chars)
class BaseHash(object):
algo = namedtuple('algo', ['crypt_id', 'salt_size', 'implicit_rounds', 'salt_exact', 'implicit_ident'])
algorithms = {
'md5_crypt': algo(crypt_id='1', salt_size=8, implicit_rounds=None, salt_exact=False, implicit_ident=None),
'bcrypt': algo(crypt_id='2b', salt_size=22, implicit_rounds=12, salt_exact=True, implicit_ident='2b'),
'sha256_crypt': algo(crypt_id='5', salt_size=16, implicit_rounds=535000, salt_exact=False, implicit_ident=None),
'sha512_crypt': algo(crypt_id='6', salt_size=16, implicit_rounds=656000, salt_exact=False, implicit_ident=None),
}
def __init__(self, algorithm):
self.algorithm = algorithm
class CryptHash(BaseHash):
def __init__(self, algorithm):
super(CryptHash, self).__init__(algorithm)
if not HAS_CRYPT:
raise AnsibleError("crypt.crypt cannot be used as the 'crypt' python library is not installed or is unusable.", orig_exc=CRYPT_E)
if sys.platform.startswith('darwin'):
raise AnsibleError("crypt.crypt not supported on Mac OS X/Darwin, install passlib python module")
if algorithm not in self.algorithms:
raise AnsibleError("crypt.crypt does not support '%s' algorithm" % self.algorithm)
display.deprecated(
"Encryption using the Python crypt module is deprecated. The "
"Python crypt module is deprecated and will be removed from "
"Python 3.13. Install the passlib library for continued "
"encryption functionality.",
            version='2.17'
)
self.algo_data = self.algorithms[algorithm]
def hash(self, secret, salt=None, salt_size=None, rounds=None, ident=None):
salt = self._salt(salt, salt_size)
rounds = self._rounds(rounds)
ident = self._ident(ident)
return self._hash(secret, salt, rounds, ident)
def _salt(self, salt, salt_size):
salt_size = salt_size or self.algo_data.salt_size
ret = salt or random_salt(salt_size)
if re.search(r'[^./0-9A-Za-z]', ret):
raise AnsibleError("invalid characters in salt")
if self.algo_data.salt_exact and len(ret) != self.algo_data.salt_size:
raise AnsibleError("invalid salt size")
elif not self.algo_data.salt_exact and len(ret) > self.algo_data.salt_size:
raise AnsibleError("invalid salt size")
return ret
def _rounds(self, rounds):
if self.algorithm == 'bcrypt':
# crypt requires 2 digits for rounds
return rounds or self.algo_data.implicit_rounds
elif rounds == self.algo_data.implicit_rounds:
# Passlib does not include the rounds if it is the same as implicit_rounds.
# Make crypt lib behave the same, by not explicitly specifying the rounds in that case.
return None
else:
return rounds
def _ident(self, ident):
if not ident:
return self.algo_data.crypt_id
if self.algorithm == 'bcrypt':
return ident
return None
def _hash(self, secret, salt, rounds, ident):
saltstring = ""
if ident:
saltstring = "$%s" % ident
if rounds:
if self.algorithm == 'bcrypt':
saltstring += "$%d" % rounds
else:
saltstring += "$rounds=%d" % rounds
saltstring += "$%s" % salt
# crypt.crypt throws OSError on Python >= 3.9 if it cannot parse saltstring.
try:
result = crypt.crypt(secret, saltstring)
orig_exc = None
except OSError as e:
result = None
orig_exc = e
        # None as result would be interpreted by some modules (e.g. the user module)
# as no password at all.
if not result:
raise AnsibleError(
"crypt.crypt does not support '%s' algorithm" % self.algorithm,
orig_exc=orig_exc,
)
return result
class PasslibHash(BaseHash):
def __init__(self, algorithm):
super(PasslibHash, self).__init__(algorithm)
if not PASSLIB_AVAILABLE:
raise AnsibleError("passlib must be installed and usable to hash with '%s'" % algorithm, orig_exc=PASSLIB_E)
display.vv("Using passlib to hash input with '%s'" % algorithm)
try:
self.crypt_algo = getattr(passlib.hash, algorithm)
except Exception:
raise AnsibleError("passlib does not support '%s' algorithm" % algorithm)
def hash(self, secret, salt=None, salt_size=None, rounds=None, ident=None):
salt = self._clean_salt(salt)
rounds = self._clean_rounds(rounds)
ident = self._clean_ident(ident)
return self._hash(secret, salt=salt, salt_size=salt_size, rounds=rounds, ident=ident)
def _clean_ident(self, ident):
ret = None
if not ident:
if self.algorithm in self.algorithms:
return self.algorithms.get(self.algorithm).implicit_ident
return ret
if self.algorithm == 'bcrypt':
return ident
return ret
def _clean_salt(self, salt):
if not salt:
return None
elif issubclass(self.crypt_algo.wrapped if isinstance(self.crypt_algo, PrefixWrapper) else self.crypt_algo, HasRawSalt):
ret = to_bytes(salt, encoding='ascii', errors='strict')
else:
ret = to_text(salt, encoding='ascii', errors='strict')
# Ensure the salt has the correct padding
if self.algorithm == 'bcrypt':
ret = bcrypt64.repair_unused(ret)
return ret
def _clean_rounds(self, rounds):
algo_data = self.algorithms.get(self.algorithm)
if rounds:
return rounds
elif algo_data and algo_data.implicit_rounds:
# The default rounds used by passlib depend on the passlib version.
# For consistency ensure that passlib behaves the same as crypt in case no rounds were specified.
# Thus use the crypt defaults.
return algo_data.implicit_rounds
else:
return None
def _hash(self, secret, salt, salt_size, rounds, ident):
# Not every hash algorithm supports every parameter.
# Thus create the settings dict only with set parameters.
settings = {}
if salt:
settings['salt'] = salt
if salt_size:
settings['salt_size'] = salt_size
if rounds:
settings['rounds'] = rounds
if ident:
settings['ident'] = ident
# starting with passlib 1.7 'using' and 'hash' should be used instead of 'encrypt'
try:
if hasattr(self.crypt_algo, 'hash'):
result = self.crypt_algo.using(**settings).hash(secret)
elif hasattr(self.crypt_algo, 'encrypt'):
result = self.crypt_algo.encrypt(secret, **settings)
else:
raise AnsibleError("installed passlib version %s not supported" % passlib.__version__)
except ValueError as e:
raise AnsibleError("Could not hash the secret.", orig_exc=e)
# passlib.hash should always return something or raise an exception.
# Still ensure that there is always a result.
# Otherwise an empty password might be assumed by some modules, like the user module.
if not result:
raise AnsibleError("failed to hash with algorithm '%s'" % self.algorithm)
# Hashes from passlib.hash should be represented as ascii strings of hex
# digits so this should not traceback. If it's not representable as such
# we need to traceback and then block such algorithms because it may
# impact calling code.
return to_text(result, errors='strict')
def passlib_or_crypt(secret, algorithm, salt=None, salt_size=None, rounds=None, ident=None):
if PASSLIB_AVAILABLE:
return PasslibHash(algorithm).hash(secret, salt=salt, salt_size=salt_size, rounds=rounds, ident=ident)
if HAS_CRYPT:
return CryptHash(algorithm).hash(secret, salt=salt, salt_size=salt_size, rounds=rounds, ident=ident)
raise AnsibleError("Unable to encrypt nor hash, either crypt or passlib must be installed.", orig_exc=CRYPT_E)
def do_encrypt(result, encrypt, salt_size=None, salt=None, ident=None):
return passlib_or_crypt(result, encrypt, salt_size=salt_size, salt=salt, ident=ident)
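# Example: do_encrypt('secret', 'sha512_crypt') returns a '$6$...' style hash,
# produced via passlib when it is importable and via crypt.crypt otherwise.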
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,608 |
Incorrect link for βRisks of becoming an unprivileged userβ
|
### Summary
In the file `lib/ansible/plugins/action/__init__.py` thereβs a link to the document _Understanding privilege escalation: become_. The link has an anchor that refers to the section _Risks of becoming an unprivileged user_. However, there is an error in the code that makes the URL incorrect.
The correct URL is: https://docs.ansible.com/ansible-core/2.15/playbook_guide/playbooks_privilege_escalation.html#risks-of-becoming-an-unprivileged-user
The URL that is displayed to the user is: https://docs.ansible.com/ansible-core/2.15/playbook_guide/playbooks_privilege_escalation.html#risks-of-becoming-an-unprivileged-user#risks-of-becoming-an-unprivileged-user
The error is that the anchor part of the URL gets repeated.
### Suggested solution
`become_link` is currently defined like this:
`become_link = get_versioned_doclink('playbook_guide/playbooks_privilege_escalation.html#risks-of-becoming-an-unprivileged-user')`
One option could be to instead define it like this:
`become_link = get_versioned_doclink('playbook_guide/playbooks_privilege_escalation.html')`
I believe this would work since the anchor part (currently) gets added when `become_link` is referenced.
Another solution could be to omit the anchor part when referencing `become_link`. Example:
```
display.warning(
'Using world-readable permissions for temporary files Ansible '
'needs to create when becoming an unprivileged user. This may '
'be insecure. For information on securing this, see %s'
% become_link)
```
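To make the failure mode concrete, here is a small illustrative snippet (shortened path, not the real `get_versioned_doclink` implementation) showing how the anchor ends up doubled when `become_link` already contains it and the call site appends it again:
```python
ANCHOR = '#risks-of-becoming-an-unprivileged-user'

# become_link is currently built with the anchor baked in...
become_link = 'playbooks_privilege_escalation.html' + ANCHOR

# ...and the warning/error messages append the anchor a second time:
msg = 'For information on working around this, see %s%s' % (become_link, ANCHOR)
print(msg)
# playbooks_privilege_escalation.html#risks-of-becoming-an-unprivileged-user#risks-of-becoming-an-unprivileged-user
```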
### Issue Type
Bug Report
### Component Name
lib
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.3]
config file = None
configured module search path = ['/Users/carl.winback/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/homebrew/Cellar/ansible/8.3.0/libexec/lib/python3.11/site-packages/ansible
ansible collection location = /Users/carl.winback/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/homebrew/bin/ansible
python version = 3.11.5 (main, Aug 24 2023, 15:09:45) [Clang 14.0.3 (clang-1403.0.22.14.1)] (/opt/homebrew/Cellar/ansible/8.3.0/libexec/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
EDITOR(env: EDITOR) = emacsclient
```
### OS / Environment
macOS Ventura 13.5.1
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```
- name: foo
ansible.builtin.shell: echo "hello"
become: yes
become_user: johndoe
```
### Expected Results
I expected the documentation link to look like this: https://docs.ansible.com/ansible-core/2.15/playbook_guide/playbooks_privilege_escalation.html#risks-of-becoming-an-unprivileged-user
### Actual Results
```console
fatal: [foo.example.com]: FAILED! => {"msg": "Failed to set permissions on the temporary files Ansible needs to create when becoming an unprivileged user (rc: 1, err: chmod: invalid mode: βA+user:johndoe:rx:allowβ\nTry 'chmod --help' for more information.\n}). For information on working around this, see https://docs.ansible.com/ansible-core/2.15/playbook_guide/playbooks_privilege_escalation.html#risks-of-becoming-an-unprivileged-user#risks-of-becoming-an-unprivileged-user"}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81608
|
https://github.com/ansible/ansible/pull/81623
|
24aac5036934f65e2cb0b0e1a30f306c6b1f24e6
|
48d8e067bf6c947a96750b8a61c7d6ef8cad594b
| 2023-08-31T07:16:49Z |
python
| 2023-09-05T15:45:57Z |
lib/ansible/plugins/action/__init__.py
|
# coding: utf-8
# Copyright: (c) 2012-2014, Michael DeHaan <[email protected]>
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import base64
import json
import os
import random
import re
import shlex
import stat
import tempfile
from abc import ABC, abstractmethod
from collections.abc import Sequence
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleConnectionFailure, AnsibleActionSkip, AnsibleActionFail, AnsibleAuthenticationFailure
from ansible.executor.module_common import modify_module
from ansible.executor.interpreter_discovery import discover_interpreter, InterpreterDiscoveryRequiredError
from ansible.module_utils.common.arg_spec import ArgumentSpecValidator
from ansible.module_utils.errors import UnsupportedError
from ansible.module_utils.json_utils import _filter_non_json_lines
from ansible.module_utils.six import binary_type, string_types, text_type
from ansible.module_utils.common.text.converters import to_bytes, to_native, to_text
from ansible.parsing.utils.jsonify import jsonify
from ansible.release import __version__
from ansible.utils.collection_loader import resource_from_fqcr
from ansible.utils.display import Display
from ansible.utils.unsafe_proxy import wrap_var, AnsibleUnsafeText
from ansible.vars.clean import remove_internal_keys
from ansible.utils.plugin_docs import get_versioned_doclink
display = Display()
def _validate_utf8_json(d):
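    """Recursively walk d and raise UnicodeEncodeError for any text value that cannot be encoded as strict UTF-8."""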
if isinstance(d, text_type):
# Purposefully not using to_bytes here for performance reasons
d.encode(encoding='utf-8', errors='strict')
elif isinstance(d, dict):
for o in d.items():
_validate_utf8_json(o)
elif isinstance(d, (list, tuple)):
for o in d:
_validate_utf8_json(o)
class ActionBase(ABC):
'''
This class is the base class for all action plugins, and defines
code common to all actions. The base class handles the connection
by putting/getting files and executing commands based on the current
action in use.
'''
# A set of valid arguments
_VALID_ARGS = frozenset([]) # type: frozenset[str]
# behavioral attributes
BYPASS_HOST_LOOP = False
TRANSFERS_FILES = False
_requires_connection = True
_supports_check_mode = True
_supports_async = False
def __init__(self, task, connection, play_context, loader, templar, shared_loader_obj):
self._task = task
self._connection = connection
self._play_context = play_context
self._loader = loader
self._templar = templar
self._shared_loader_obj = shared_loader_obj
self._cleanup_remote_tmp = False
# interpreter discovery state
self._discovered_interpreter_key = None
self._discovered_interpreter = False
self._discovery_deprecation_warnings = []
self._discovery_warnings = []
self._used_interpreter = None
# Backwards compat: self._display isn't really needed, just import the global display and use that.
self._display = display
@abstractmethod
def run(self, tmp=None, task_vars=None):
""" Action Plugins should implement this method to perform their
tasks. Everything else in this base class is a helper method for the
action plugin to do that.
:kwarg tmp: Deprecated parameter. This is no longer used. An action plugin that calls
another one and wants to use the same remote tmp for both should set
self._connection._shell.tmpdir rather than this parameter.
:kwarg task_vars: The variables (host vars, group vars, config vars,
etc) associated with this task.
:returns: dictionary of results from the module
Implementors of action modules may find the following variables especially useful:
* Module parameters. These are stored in self._task.args
"""
# does not default to {'changed': False, 'failed': False}, as it breaks async
result = {}
if tmp is not None:
result['warning'] = ['ActionModule.run() no longer honors the tmp parameter. Action'
' plugins should set self._connection._shell.tmpdir to share'
' the tmpdir']
del tmp
if self._task.async_val and not self._supports_async:
raise AnsibleActionFail('async is not supported for this task.')
elif self._task.check_mode and not self._supports_check_mode:
raise AnsibleActionSkip('check mode is not supported for this task.')
elif self._task.async_val and self._task.check_mode:
raise AnsibleActionFail('check mode and async cannot be used on same task.')
# Error if invalid argument is passed
if self._VALID_ARGS:
task_opts = frozenset(self._task.args.keys())
bad_opts = task_opts.difference(self._VALID_ARGS)
if bad_opts:
raise AnsibleActionFail('Invalid options for %s: %s' % (self._task.action, ','.join(list(bad_opts))))
if self._connection._shell.tmpdir is None and self._early_needs_tmp_path():
self._make_tmp_path()
return result
def validate_argument_spec(self, argument_spec=None,
mutually_exclusive=None,
required_together=None,
required_one_of=None,
required_if=None,
required_by=None,
):
"""Validate an argument spec against the task args
This will return a tuple of (ValidationResult, dict) where the dict
is the validated, coerced, and normalized task args.
        Be cautious when passing ``new_module_args`` directly to a
        module invocation, as it will contain the defaults, and not only
        the args supplied from the task. If you do this, the module
        should not define ``mutually_exclusive`` or similar.
This code is roughly copied from the ``validate_argument_spec``
action plugin for use by other action plugins.
"""
new_module_args = self._task.args.copy()
validator = ArgumentSpecValidator(
argument_spec,
mutually_exclusive=mutually_exclusive,
required_together=required_together,
required_one_of=required_one_of,
required_if=required_if,
required_by=required_by,
)
validation_result = validator.validate(new_module_args)
new_module_args.update(validation_result.validated_parameters)
try:
error = validation_result.errors[0]
except IndexError:
error = None
# Fail for validation errors, even in check mode
if error:
msg = validation_result.errors.msg
if isinstance(error, UnsupportedError):
msg = f"Unsupported parameters for ({self._load_name}) module: {msg}"
raise AnsibleActionFail(msg)
return validation_result, new_module_args
def cleanup(self, force=False):
"""Method to perform a clean up at the end of an action plugin execution
By default this is designed to clean up the shell tmpdir, and is toggled based on whether
async is in use
Action plugins may override this if they deem necessary, but should still call this method
via super
"""
if force or not self._task.async_val:
self._remove_tmp_path(self._connection._shell.tmpdir)
def get_plugin_option(self, plugin, option, default=None):
"""Helper to get an option from a plugin without having to use
the try/except dance everywhere to set a default
"""
try:
return plugin.get_option(option)
except (AttributeError, KeyError):
return default
def get_become_option(self, option, default=None):
return self.get_plugin_option(self._connection.become, option, default=default)
def get_connection_option(self, option, default=None):
return self.get_plugin_option(self._connection, option, default=default)
def get_shell_option(self, option, default=None):
return self.get_plugin_option(self._connection._shell, option, default=default)
def _remote_file_exists(self, path):
cmd = self._connection._shell.exists(path)
result = self._low_level_execute_command(cmd=cmd, sudoable=True)
if result['rc'] == 0:
return True
return False
def _configure_module(self, module_name, module_args, task_vars):
'''
Handles the loading and templating of the module code through the
modify_module() function.
'''
if self._task.delegate_to:
use_vars = task_vars.get('ansible_delegated_vars')[self._task.delegate_to]
else:
use_vars = task_vars
split_module_name = module_name.split('.')
collection_name = '.'.join(split_module_name[0:2]) if len(split_module_name) > 2 else ''
leaf_module_name = resource_from_fqcr(module_name)
# Search module path(s) for named module.
for mod_type in self._connection.module_implementation_preferences:
# Check to determine if PowerShell modules are supported, and apply
# some fixes (hacks) to module name + args.
if mod_type == '.ps1':
# FIXME: This should be temporary and moved to an exec subsystem plugin where we can define the mapping
# for each subsystem.
win_collection = 'ansible.windows'
rewrite_collection_names = ['ansible.builtin', 'ansible.legacy', '']
# async_status, win_stat, win_file, win_copy, and win_ping are not just like their
# python counterparts but they are compatible enough for our
# internal usage
# NB: we only rewrite the module if it's not being called by the user (eg, an action calling something else)
# and if it's unqualified or FQ to a builtin
if leaf_module_name in ('stat', 'file', 'copy', 'ping') and \
collection_name in rewrite_collection_names and self._task.action != module_name:
module_name = '%s.win_%s' % (win_collection, leaf_module_name)
elif leaf_module_name == 'async_status' and collection_name in rewrite_collection_names:
module_name = '%s.%s' % (win_collection, leaf_module_name)
# TODO: move this tweak down to the modules, not extensible here
# Remove extra quotes surrounding path parameters before sending to module.
if leaf_module_name in ['win_stat', 'win_file', 'win_copy', 'slurp'] and module_args and \
hasattr(self._connection._shell, '_unquote'):
for key in ('src', 'dest', 'path'):
if key in module_args:
module_args[key] = self._connection._shell._unquote(module_args[key])
result = self._shared_loader_obj.module_loader.find_plugin_with_context(module_name, mod_type, collection_list=self._task.collections)
if not result.resolved:
if result.redirect_list and len(result.redirect_list) > 1:
# take the last one in the redirect list, we may have successfully jumped through N other redirects
target_module_name = result.redirect_list[-1]
raise AnsibleError("The module {0} was redirected to {1}, which could not be loaded.".format(module_name, target_module_name))
module_path = result.plugin_resolved_path
if module_path:
break
else: # This is a for-else: http://bit.ly/1ElPkyg
raise AnsibleError("The module %s was not found in configured module paths" % (module_name))
# insert shared code and arguments into the module
final_environment = dict()
self._compute_environment_string(final_environment)
become_kwargs = {}
if self._connection.become:
become_kwargs['become'] = True
become_kwargs['become_method'] = self._connection.become.name
become_kwargs['become_user'] = self._connection.become.get_option('become_user',
playcontext=self._play_context)
become_kwargs['become_password'] = self._connection.become.get_option('become_pass',
playcontext=self._play_context)
become_kwargs['become_flags'] = self._connection.become.get_option('become_flags',
playcontext=self._play_context)
# modify_module will exit early if interpreter discovery is required; re-run after if necessary
for dummy in (1, 2):
try:
(module_data, module_style, module_shebang) = modify_module(module_name, module_path, module_args, self._templar,
task_vars=use_vars,
module_compression=C.config.get_config_value('DEFAULT_MODULE_COMPRESSION',
variables=task_vars),
async_timeout=self._task.async_val,
environment=final_environment,
remote_is_local=bool(getattr(self._connection, '_remote_is_local', False)),
**become_kwargs)
break
except InterpreterDiscoveryRequiredError as idre:
self._discovered_interpreter = AnsibleUnsafeText(discover_interpreter(
action=self,
interpreter_name=idre.interpreter_name,
discovery_mode=idre.discovery_mode,
task_vars=use_vars))
# update the local task_vars with the discovered interpreter (which might be None);
# we'll propagate back to the controller in the task result
discovered_key = 'discovered_interpreter_%s' % idre.interpreter_name
# update the local vars copy for the retry
use_vars['ansible_facts'][discovered_key] = self._discovered_interpreter
# TODO: this condition prevents 'wrong host' from being updated
# but in future we would want to be able to update 'delegated host facts'
# irrespective of task settings
if not self._task.delegate_to or self._task.delegate_facts:
# store in local task_vars facts collection for the retry and any other usages in this worker
task_vars['ansible_facts'][discovered_key] = self._discovered_interpreter
# preserve this so _execute_module can propagate back to controller as a fact
self._discovered_interpreter_key = discovered_key
else:
task_vars['ansible_delegated_vars'][self._task.delegate_to]['ansible_facts'][discovered_key] = self._discovered_interpreter
return (module_style, module_shebang, module_data, module_path)
def _compute_environment_string(self, raw_environment_out=None):
'''
Builds the environment string to be used when executing the remote task.
'''
final_environment = dict()
if self._task.environment is not None:
environments = self._task.environment
if not isinstance(environments, list):
environments = [environments]
# The order of environments matters to make sure we merge
# in the parent's values first so those in the block then
# task 'win' in precedence
for environment in environments:
if environment is None or len(environment) == 0:
continue
temp_environment = self._templar.template(environment)
if not isinstance(temp_environment, dict):
raise AnsibleError("environment must be a dictionary, received %s (%s)" % (temp_environment, type(temp_environment)))
# very deliberately using update here instead of combine_vars, as
# these environment settings should not need to merge sub-dicts
final_environment.update(temp_environment)
if len(final_environment) > 0:
final_environment = self._templar.template(final_environment)
if isinstance(raw_environment_out, dict):
raw_environment_out.clear()
raw_environment_out.update(final_environment)
return self._connection._shell.env_prefix(**final_environment)
def _early_needs_tmp_path(self):
'''
Determines if a tmp path should be created before the action is executed.
'''
return getattr(self, 'TRANSFERS_FILES', False)
def _is_pipelining_enabled(self, module_style, wrap_async=False):
'''
        Determines if we are required to, and can, do pipelining
'''
try:
is_enabled = self._connection.get_option('pipelining')
except (KeyError, AttributeError, ValueError):
is_enabled = self._play_context.pipelining
# winrm supports async pipeline
# TODO: make other class property 'has_async_pipelining' to separate cases
always_pipeline = self._connection.always_pipeline_modules
# su does not work with pipelining
# TODO: add has_pipelining class prop to become plugins
become_exception = (self._connection.become.name if self._connection.become else '') != 'su'
# any of these require a true
conditions = [
self._connection.has_pipelining, # connection class supports it
is_enabled or always_pipeline, # enabled via config or forced via connection (eg winrm)
module_style == "new", # old style modules do not support pipelining
not C.DEFAULT_KEEP_REMOTE_FILES, # user wants remote files
not wrap_async or always_pipeline, # async does not normally support pipelining unless it does (eg winrm)
become_exception,
]
return all(conditions)
def _get_admin_users(self):
'''
Returns a list of admin users that are configured for the current shell
plugin
'''
return self.get_shell_option('admin_users', ['root'])
def _get_remote_addr(self, tvars):
''' consistently get the 'remote_address' for the action plugin '''
remote_addr = tvars.get('delegated_vars', {}).get('ansible_host', tvars.get('ansible_host', tvars.get('inventory_hostname', None)))
for variation in ('remote_addr', 'host'):
try:
remote_addr = self._connection.get_option(variation)
except KeyError:
continue
break
else:
# plugin does not have, fallback to play_context
remote_addr = self._play_context.remote_addr
return remote_addr
def _get_remote_user(self):
''' consistently get the 'remote_user' for the action plugin '''
# TODO: use 'current user running ansible' as fallback when moving away from play_context
# pwd.getpwuid(os.getuid()).pw_name
remote_user = None
try:
remote_user = self._connection.get_option('remote_user')
except KeyError:
# plugin does not have remote_user option, fallback to default and/play_context
remote_user = getattr(self._connection, 'default_user', None) or self._play_context.remote_user
except AttributeError:
# plugin does not use config system, fallback to old play_context
remote_user = self._play_context.remote_user
return remote_user
def _is_become_unprivileged(self):
'''
The user is not the same as the connection user and is not part of the
shell configured admin users
'''
# if we don't use become then we know we aren't switching to a
# different unprivileged user
if not self._connection.become:
return False
# if we use become and the user is not an admin (or same user) then
# we need to return become_unprivileged as True
admin_users = self._get_admin_users()
remote_user = self._get_remote_user()
become_user = self.get_become_option('become_user')
return bool(become_user and become_user not in admin_users + [remote_user])
def _make_tmp_path(self, remote_user=None):
'''
Create and return a temporary path on a remote box.
'''
# Network connection plugins (network_cli, netconf, etc.) execute on the controller, rather than the remote host.
# As such, we want to avoid using remote_user for paths as remote_user may not line up with the local user
# This is a hack and should be solved by more intelligent handling of remote_tmp in 2.7
if getattr(self._connection, '_remote_is_local', False):
tmpdir = C.DEFAULT_LOCAL_TMP
else:
# NOTE: shell plugins should populate this setting anyways, but they dont do remote expansion, which
# we need for 'non posix' systems like cloud-init and solaris
tmpdir = self._remote_expand_user(self.get_shell_option('remote_tmp', default='~/.ansible/tmp'), sudoable=False)
become_unprivileged = self._is_become_unprivileged()
basefile = self._connection._shell._generate_temp_dir_name()
cmd = self._connection._shell.mkdtemp(basefile=basefile, system=become_unprivileged, tmpdir=tmpdir)
result = self._low_level_execute_command(cmd, sudoable=False)
# error handling on this seems a little aggressive?
if result['rc'] != 0:
if result['rc'] == 5:
output = 'Authentication failure.'
elif result['rc'] == 255 and self._connection.transport in ('ssh',):
if display.verbosity > 3:
output = u'SSH encountered an unknown error. The output was:\n%s%s' % (result['stdout'], result['stderr'])
else:
output = (u'SSH encountered an unknown error during the connection. '
'We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue')
elif u'No space left on device' in result['stderr']:
output = result['stderr']
else:
output = ('Failed to create temporary directory. '
'In some cases, you may have been able to authenticate and did not have permissions on the target directory. '
'Consider changing the remote tmp path in ansible.cfg to a path rooted in "/tmp", for more error information use -vvv. '
'Failed command was: %s, exited with result %d' % (cmd, result['rc']))
if 'stdout' in result and result['stdout'] != u'':
output = output + u", stdout output: %s" % result['stdout']
if display.verbosity > 3 and 'stderr' in result and result['stderr'] != u'':
output += u", stderr output: %s" % result['stderr']
raise AnsibleConnectionFailure(output)
else:
self._cleanup_remote_tmp = True
try:
stdout_parts = result['stdout'].strip().split('%s=' % basefile, 1)
rc = self._connection._shell.join_path(stdout_parts[-1], u'').splitlines()[-1]
except IndexError:
# stdout was empty or just space, set to / to trigger error in next if
rc = '/'
# Catch failure conditions, files should never be
# written to locations in /.
if rc == '/':
raise AnsibleError('failed to resolve remote temporary directory from %s: `%s` returned empty string' % (basefile, cmd))
self._connection._shell.tmpdir = rc
return rc
def _should_remove_tmp_path(self, tmp_path):
'''Determine if temporary path should be deleted or kept by user request/config'''
return tmp_path and self._cleanup_remote_tmp and not C.DEFAULT_KEEP_REMOTE_FILES and "-tmp-" in tmp_path
def _remove_tmp_path(self, tmp_path, force=False):
'''Remove a temporary path we created. '''
if tmp_path is None and self._connection._shell.tmpdir:
tmp_path = self._connection._shell.tmpdir
if force or self._should_remove_tmp_path(tmp_path):
cmd = self._connection._shell.remove(tmp_path, recurse=True)
# If we have gotten here we have a working connection configuration.
# If the connection breaks we could leave tmp directories out on the remote system.
tmp_rm_res = self._low_level_execute_command(cmd, sudoable=False)
if tmp_rm_res.get('rc', 0) != 0:
display.warning('Error deleting remote temporary files (rc: %s, stderr: %s})'
% (tmp_rm_res.get('rc'), tmp_rm_res.get('stderr', 'No error string available.')))
else:
self._connection._shell.tmpdir = None
def _transfer_file(self, local_path, remote_path):
"""
Copy a file from the controller to a remote path
:arg local_path: Path on controller to transfer
:arg remote_path: Path on the remote system to transfer into
.. warning::
            * When you use this function you likely want to use fixup_perms2() on the
remote_path to make sure that the remote file is readable when the user becomes
a non-privileged user.
* If you use fixup_perms2() on the file and copy or move the file into place, you will
need to then remove filesystem acls on the file once it has been copied into place by
the module. See how the copy module implements this for help.
"""
self._connection.put_file(local_path, remote_path)
return remote_path
def _transfer_data(self, remote_path, data):
'''
Copies the module data out to the temporary module path.
'''
if isinstance(data, dict):
data = jsonify(data)
afd, afile = tempfile.mkstemp(dir=C.DEFAULT_LOCAL_TMP)
afo = os.fdopen(afd, 'wb')
try:
data = to_bytes(data, errors='surrogate_or_strict')
afo.write(data)
except Exception as e:
raise AnsibleError("failure writing module data to temporary file for transfer: %s" % to_native(e))
afo.flush()
afo.close()
try:
self._transfer_file(afile, remote_path)
finally:
os.unlink(afile)
return remote_path
def _fixup_perms2(self, remote_paths, remote_user=None, execute=True):
"""
We need the files we upload to be readable (and sometimes executable)
by the user being sudo'd to but we want to limit other people's access
        (because the files could contain passwords or other private
        information). We achieve this in one of these ways:
* If no sudo is performed or the remote_user is sudo'ing to
themselves, we don't have to change permissions.
* If the remote_user sudo's to a privileged user (for instance, root),
we don't have to change permissions
* If the remote_user sudo's to an unprivileged user then we attempt to
grant the unprivileged user access via file system acls.
* If granting file system acls fails we try to change the owner of the
file with chown which only works in case the remote_user is
privileged or the remote systems allows chown calls by unprivileged
users (e.g. HP-UX)
* If the above fails, we next try 'chmod +a' which is a macOS way of
setting ACLs on files.
* If the above fails, we check if ansible_common_remote_group is set.
If it is, we attempt to chgrp the file to its value. This is useful
if the remote_user has a group in common with the become_user. As the
remote_user, we can chgrp the file to that group and allow the
become_user to read it.
* If (the chown fails AND ansible_common_remote_group is not set) OR
(ansible_common_remote_group is set AND the chgrp (or following chmod)
returned non-zero), we can set the file to be world readable so that
the second unprivileged user can read the file.
Since this could allow other users to get access to private
information we only do this if ansible is configured with
"allow_world_readable_tmpfiles" in the ansible.cfg. Also note that
when ansible_common_remote_group is set this final fallback is very
unlikely to ever be triggered, so long as chgrp was successful. But
just because the chgrp was successful, does not mean Ansible can
necessarily access the files (if, for example, the variable was set
to a group that remote_user is in, and can chgrp to, but does not have
in common with become_user).
"""
if remote_user is None:
remote_user = self._get_remote_user()
# Step 1: Are we on windows?
if getattr(self._connection._shell, "_IS_WINDOWS", False):
# This won't work on Powershell as-is, so we'll just completely
# skip until we have a need for it, at which point we'll have to do
# something different.
return remote_paths
# Step 2: If we're not becoming an unprivileged user, we are roughly
# done. Make the files +x if we're asked to, and return.
if not self._is_become_unprivileged():
if execute:
# Can't depend on the file being transferred with execute permissions.
# Only need user perms because no become was used here
res = self._remote_chmod(remote_paths, 'u+x')
if res['rc'] != 0:
raise AnsibleError(
'Failed to set execute bit on remote files '
'(rc: {0}, err: {1})'.format(
res['rc'],
to_native(res['stderr'])))
return remote_paths
# If we're still here, we have an unprivileged user that's different
# than the ssh user.
become_user = self.get_become_option('become_user')
# Try to use file system acls to make the files readable for sudo'd
# user
if execute:
chmod_mode = 'rx'
setfacl_mode = 'r-x'
# Apple patches their "file_cmds" chmod with ACL support
chmod_acl_mode = '{0} allow read,execute'.format(become_user)
# POSIX-draft ACL specification. Solaris, maybe others.
# See chmod(1) on something Solaris-based for syntax details.
posix_acl_mode = 'A+user:{0}:rx:allow'.format(become_user)
else:
chmod_mode = 'rX'
# TODO: this form fails silently on freebsd. We currently
# never call _fixup_perms2() with execute=False but if we
# start to we'll have to fix this.
setfacl_mode = 'r-X'
# Apple
chmod_acl_mode = '{0} allow read'.format(become_user)
# POSIX-draft
posix_acl_mode = 'A+user:{0}:r:allow'.format(become_user)
# Step 3a: Are we able to use setfacl to add user ACLs to the file?
res = self._remote_set_user_facl(
remote_paths,
become_user,
setfacl_mode)
if res['rc'] == 0:
return remote_paths
# Step 3b: Set execute if we need to. We do this before anything else
# because some of the methods below might work but not let us set +x
# as part of them.
if execute:
res = self._remote_chmod(remote_paths, 'u+x')
if res['rc'] != 0:
raise AnsibleError(
'Failed to set file mode or acl on remote temporary files '
'(rc: {0}, err: {1})'.format(
res['rc'],
to_native(res['stderr'])))
# Step 3c: File system ACLs failed above; try falling back to chown.
res = self._remote_chown(remote_paths, become_user)
if res['rc'] == 0:
return remote_paths
# Check if we are an admin/root user. If we are and got here, it means
# we failed to chown as root and something weird has happened.
if remote_user in self._get_admin_users():
raise AnsibleError(
'Failed to change ownership of the temporary files Ansible '
'(via chown or setfacl) needs to create despite connecting as a '
'privileged user. The unprivileged become user would be unable to '
'read the file.')
# Step 3d: Try macOS's special chmod + ACL
# macOS chmod's +a flag takes its own argument. As a slight hack, we
# pass that argument as the first element of remote_paths. So we end
# up running `chmod +a [that argument] [file 1] [file 2] ...`
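# Illustration (hypothetical values): with become_user 'bob' and two files,
# the resulting call is roughly:
#   chmod +a 'bob allow read,execute' /tmp/f1 /tmp/f2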
try:
res = self._remote_chmod([chmod_acl_mode] + list(remote_paths), '+a')
except AnsibleAuthenticationFailure:
# Solaris-based chmod will return 5 when it sees an invalid mode,
# and +a is invalid there. Because it returns 5, which is the same
# thing sshpass returns on auth failure, our sshpass code will
# assume that auth failed. If we don't handle that case here, none
# of the other logic below will get run. This is fairly hacky and a
# corner case, but probably one that shows up pretty often in
# Solaris-based environments (and possibly others).
pass
else:
if res['rc'] == 0:
return remote_paths
# Step 3e: Try Solaris/OpenSolaris/OpenIndiana-sans-setfacl chmod
# Similar to macOS above, Solaris 11.4 drops setfacl and takes file ACLs
# via chmod instead. OpenSolaris and illumos-based distros allow for
# using either setfacl or chmod, and compatibility depends on filesystem.
# It should be possible to debug this branch by installing OpenIndiana
# (use ZFS) and going unpriv -> unpriv.
res = self._remote_chmod(remote_paths, posix_acl_mode)
if res['rc'] == 0:
return remote_paths
# we'll need this down here
become_link = get_versioned_doclink('playbook_guide/playbooks_privilege_escalation.html#risks-of-becoming-an-unprivileged-user')
# Step 3f: Common group
# Otherwise, we're a normal user. We failed to chown the paths to the
# unprivileged user, but if we have a common group with them, we should
# be able to chown it to that.
#
# Note that we have no way of knowing if this will actually work... just
# because chgrp exits successfully does not mean that Ansible will work.
# We could check if the become user is in the group, but this would
# create an extra round trip.
#
# Also note that due to the above, this can prevent the
# world_readable_temp logic below from ever getting called. We
# leave this up to the user to rectify if they have both of these
# features enabled.
group = self.get_shell_option('common_remote_group')
if group is not None:
res = self._remote_chgrp(remote_paths, group)
if res['rc'] == 0:
# warn user that something might go weirdly here.
if self.get_shell_option('world_readable_temp'):
display.warning(
'Both common_remote_group and '
'allow_world_readable_tmpfiles are set. chgrp was '
'successful, but there is no guarantee that Ansible '
'will be able to read the files after this operation, '
'particularly if common_remote_group was set to a '
'group of which the unprivileged become user is not a '
'member. In this situation, '
'allow_world_readable_tmpfiles is a no-op. See this '
'URL for more details: %s'
'#risks-of-becoming-an-unprivileged-user' % become_link)
if execute:
group_mode = 'g+rwx'
else:
group_mode = 'g+rw'
res = self._remote_chmod(remote_paths, group_mode)
if res['rc'] == 0:
return remote_paths
# Step 4: World-readable temp directory
if self.get_shell_option('world_readable_temp'):
# chown and fs acls failed -- do things this insecure way only if
# the user opted in in the config file
display.warning(
'Using world-readable permissions for temporary files Ansible '
'needs to create when becoming an unprivileged user. This may '
'be insecure. For information on securing this, see %s'
'#risks-of-becoming-an-unprivileged-user' % become_link)
res = self._remote_chmod(remote_paths, 'a+%s' % chmod_mode)
if res['rc'] == 0:
return remote_paths
raise AnsibleError(
'Failed to set file mode on remote files '
'(rc: {0}, err: {1})'.format(
res['rc'],
to_native(res['stderr'])))
raise AnsibleError(
'Failed to set permissions on the temporary files Ansible needs '
'to create when becoming an unprivileged user '
'(rc: %s, err: %s). For information on working around this, see %s'
'#risks-of-becoming-an-unprivileged-user' % (
res['rc'],
to_native(res['stderr']), become_link))
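# --- Hedged illustration (not part of the original module) ---
# A runnable sketch that only documents the fallback order _fixup_perms2()
# walks through when becoming an unprivileged user; the helper name is
# hypothetical and it performs no remote calls:
def _example_fixup_chain(execute=True):
    # 1) setfacl user ACL; 2) optional chmod u+x; 3) chown to become_user;
    # 4) macOS 'chmod +a'; 5) POSIX-draft 'chmod A+user:...'; 6) chgrp to
    # ansible_common_remote_group; 7) opt-in world-readable chmod.
    steps = ['setfacl']
    if execute:
        steps.append('chmod u+x')
    steps += ['chown', "chmod +a (macOS)", "chmod A+user:... (Solaris)",
              'chgrp common_remote_group', 'chmod a+r (world-readable, opt-in)']
    return steps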
def _remote_chmod(self, paths, mode, sudoable=False):
'''
Issue a remote chmod command
'''
cmd = self._connection._shell.chmod(paths, mode)
res = self._low_level_execute_command(cmd, sudoable=sudoable)
return res
def _remote_chown(self, paths, user, sudoable=False):
'''
Issue a remote chown command
'''
cmd = self._connection._shell.chown(paths, user)
res = self._low_level_execute_command(cmd, sudoable=sudoable)
return res
def _remote_chgrp(self, paths, group, sudoable=False):
'''
Issue a remote chgrp command
'''
cmd = self._connection._shell.chgrp(paths, group)
res = self._low_level_execute_command(cmd, sudoable=sudoable)
return res
def _remote_set_user_facl(self, paths, user, mode, sudoable=False):
'''
Issue a remote call to setfacl
'''
cmd = self._connection._shell.set_user_facl(paths, user, mode)
res = self._low_level_execute_command(cmd, sudoable=sudoable)
return res
def _execute_remote_stat(self, path, all_vars, follow, tmp=None, checksum=True):
'''
Get information from remote file.
'''
if tmp is not None:
display.warning('_execute_remote_stat no longer honors the tmp parameter. Action'
' plugins should set self._connection._shell.tmpdir to share'
' the tmpdir')
del tmp # No longer used
module_args = dict(
path=path,
follow=follow,
get_checksum=checksum,
checksum_algorithm='sha1',
)
mystat = self._execute_module(module_name='ansible.legacy.stat', module_args=module_args, task_vars=all_vars,
wrap_async=False)
if mystat.get('failed'):
msg = mystat.get('module_stderr')
if not msg:
msg = mystat.get('module_stdout')
if not msg:
msg = mystat.get('msg')
raise AnsibleError('Failed to get information on remote file (%s): %s' % (path, msg))
if not mystat['stat']['exists']:
# empty might be matched, 1 should never match, also backwards compatible
mystat['stat']['checksum'] = '1'
# happens sometimes when it is a dir and not on bsd
if 'checksum' not in mystat['stat']:
mystat['stat']['checksum'] = ''
elif not isinstance(mystat['stat']['checksum'], string_types):
raise AnsibleError("Invalid checksum returned by stat: expected a string type but got %s" % type(mystat['stat']['checksum']))
return mystat['stat']
def _remote_expand_user(self, path, sudoable=True, pathsep=None):
''' takes a remote path and performs tilde/$HOME expansion on the remote host '''
# We only expand ~/path and ~username/path
if not path.startswith('~'):
return path
# Per Jborean, we don't have to worry about Windows as we don't have a notion of a user's
# home dir there.
split_path = path.split(os.path.sep, 1)
expand_path = split_path[0]
if expand_path == '~':
# Network connection plugins (network_cli, netconf, etc.) execute on the controller, rather than the remote host.
# As such, we want to avoid using remote_user for paths as remote_user may not line up with the local user
# This is a hack and should be solved by more intelligent handling of remote_tmp in 2.7
become_user = self.get_become_option('become_user')
if getattr(self._connection, '_remote_is_local', False):
pass
elif sudoable and self._connection.become and become_user:
expand_path = '~%s' % become_user
else:
# use remote user instead, if none set default to current user
expand_path = '~%s' % (self._get_remote_user() or '')
# use shell to construct appropriate command and execute
cmd = self._connection._shell.expand_user(expand_path)
data = self._low_level_execute_command(cmd, sudoable=False)
try:
initial_fragment = data['stdout'].strip().splitlines()[-1]
except IndexError:
initial_fragment = None
if not initial_fragment:
# Something went wrong trying to expand the path remotely. Try using pwd, if not, return
# the original string
cmd = self._connection._shell.pwd()
pwd = self._low_level_execute_command(cmd, sudoable=False).get('stdout', '').strip()
if pwd:
expanded = pwd
else:
expanded = path
elif len(split_path) > 1:
expanded = self._connection._shell.join_path(initial_fragment, *split_path[1:])
else:
expanded = initial_fragment
if '..' in os.path.dirname(expanded).split('/'):
raise AnsibleError("'%s' returned an invalid relative home directory path containing '..'" % self._get_remote_addr({}))
return expanded
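# --- Hedged illustration (not part of the original module) ---
# How the tilde prefix is rewritten before the remote shell expands it; the
# names below are assumptions for illustration only:
#   '~/x.cfg'      -> expand '~bob' when sudoable and becoming user 'bob'
#   '~alice/x.cfg' -> expand '~alice' as given
#   '/etc/x.cfg'   -> returned unchanged (no leading '~')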
def _strip_success_message(self, data):
'''
Removes the BECOME-SUCCESS message from the data.
'''
if data.strip().startswith('BECOME-SUCCESS-'):
data = re.sub(r'^((\r)?\n)?BECOME-SUCCESS.*(\r)?\n', '', data)
return data
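# --- Hedged illustration (not part of the original module) ---
# Example of the stripping above, with a made-up success marker:
#   _strip_success_message('BECOME-SUCCESS-abc123\nreal output\n')
#   -> 'real output\n'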
def _update_module_args(self, module_name, module_args, task_vars):
# set check mode in the module arguments, if required
if self._task.check_mode:
if not self._supports_check_mode:
raise AnsibleError("check mode is not supported for this operation")
module_args['_ansible_check_mode'] = True
else:
module_args['_ansible_check_mode'] = False
# set no log in the module arguments, if required
no_target_syslog = C.config.get_config_value('DEFAULT_NO_TARGET_SYSLOG', variables=task_vars)
module_args['_ansible_no_log'] = self._task.no_log or no_target_syslog
# set debug in the module arguments, if required
module_args['_ansible_debug'] = C.DEFAULT_DEBUG
# let module know we are in diff mode
module_args['_ansible_diff'] = self._task.diff
# let module know our verbosity
module_args['_ansible_verbosity'] = display.verbosity
# give the module information about the ansible version
module_args['_ansible_version'] = __version__
# give the module information about its name
module_args['_ansible_module_name'] = module_name
# set the syslog facility to be used in the module
module_args['_ansible_syslog_facility'] = task_vars.get('ansible_syslog_facility', C.DEFAULT_SYSLOG_FACILITY)
# let module know about filesystems that selinux treats specially
module_args['_ansible_selinux_special_fs'] = C.DEFAULT_SELINUX_SPECIAL_FS
# what to do when parameter values are converted to strings
module_args['_ansible_string_conversion_action'] = C.STRING_CONVERSION_ACTION
# give the module the socket for persistent connections
module_args['_ansible_socket'] = getattr(self._connection, 'socket_path')
if not module_args['_ansible_socket']:
module_args['_ansible_socket'] = task_vars.get('ansible_socket')
# make sure all commands use the designated shell executable
module_args['_ansible_shell_executable'] = self._play_context.executable
# make sure modules are aware if they need to keep the remote files
module_args['_ansible_keep_remote_files'] = C.DEFAULT_KEEP_REMOTE_FILES
# make sure all commands use the designated temporary directory if created
if self._is_become_unprivileged(): # force fallback on remote_tmp as user cannot normally write to dir
module_args['_ansible_tmpdir'] = None
else:
module_args['_ansible_tmpdir'] = self._connection._shell.tmpdir
# make sure the remote_tmp value is sent through in case modules needs to create their own
module_args['_ansible_remote_tmp'] = self.get_shell_option('remote_tmp', default='~/.ansible/tmp')
def _execute_module(self, module_name=None, module_args=None, tmp=None, task_vars=None, persist_files=False, delete_remote_tmp=None, wrap_async=False):
'''
Transfer and run a module along with its arguments.
'''
if tmp is not None:
display.warning('_execute_module no longer honors the tmp parameter. Action plugins'
' should set self._connection._shell.tmpdir to share the tmpdir')
del tmp # No longer used
if delete_remote_tmp is not None:
display.warning('_execute_module no longer honors the delete_remote_tmp parameter.'
' Action plugins should check self._connection._shell.tmpdir to'
' see if a tmpdir existed before they were called to determine'
' if they are responsible for removing it.')
del delete_remote_tmp # No longer used
tmpdir = self._connection._shell.tmpdir
# We set the module_style to new here so the remote_tmp is created
# before the module args are built if remote_tmp is needed (async).
# If the module_style turns out to not be new and we didn't create the
# remote tmp here, it will still be created. This must be done before
# calling self._update_module_args() so the module wrapper has the
# correct remote_tmp value set
if not self._is_pipelining_enabled("new", wrap_async) and tmpdir is None:
self._make_tmp_path()
tmpdir = self._connection._shell.tmpdir
if task_vars is None:
task_vars = dict()
# if a module name was not specified for this execution, use the action from the task
if module_name is None:
module_name = self._task.action
if module_args is None:
module_args = self._task.args
self._update_module_args(module_name, module_args, task_vars)
remove_async_dir = None
if wrap_async or self._task.async_val:
async_dir = self.get_shell_option('async_dir', default="~/.ansible_async")
remove_async_dir = len(self._task.environment)
self._task.environment.append({"ANSIBLE_ASYNC_DIR": async_dir})
# FUTURE: refactor this along with module build process to better encapsulate "smart wrapper" functionality
(module_style, shebang, module_data, module_path) = self._configure_module(module_name=module_name, module_args=module_args, task_vars=task_vars)
display.vvv("Using module file %s" % module_path)
if not shebang and module_style != 'binary':
raise AnsibleError("module (%s) is missing interpreter line" % module_name)
self._used_interpreter = shebang
remote_module_path = None
if not self._is_pipelining_enabled(module_style, wrap_async):
# we might need remote tmp dir
if tmpdir is None:
self._make_tmp_path()
tmpdir = self._connection._shell.tmpdir
remote_module_filename = self._connection._shell.get_remote_filename(module_path)
remote_module_path = self._connection._shell.join_path(tmpdir, 'AnsiballZ_%s' % remote_module_filename)
args_file_path = None
if module_style in ('old', 'non_native_want_json', 'binary'):
# we'll also need a tmp file to hold our module arguments
args_file_path = self._connection._shell.join_path(tmpdir, 'args')
if remote_module_path or module_style != 'new':
display.debug("transferring module to remote %s" % remote_module_path)
if module_style == 'binary':
self._transfer_file(module_path, remote_module_path)
else:
self._transfer_data(remote_module_path, module_data)
if module_style == 'old':
# we need to dump the module args to a k=v string in a file on
# the remote system, which can be read and parsed by the module
args_data = ""
for k, v in module_args.items():
args_data += '%s=%s ' % (k, shlex.quote(text_type(v)))
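# e.g. (hypothetical args) {'path': '/tmp/x', 'state': 'touch'} serializes
# to the shell-quoted key=value string: "path=/tmp/x state=touch "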
self._transfer_data(args_file_path, args_data)
elif module_style in ('non_native_want_json', 'binary'):
self._transfer_data(args_file_path, json.dumps(module_args))
display.debug("done transferring module to remote")
environment_string = self._compute_environment_string()
# remove the ANSIBLE_ASYNC_DIR env entry if we added a temporary one for
# the async_wrapper task.
if remove_async_dir is not None:
del self._task.environment[remove_async_dir]
remote_files = []
if tmpdir and remote_module_path:
remote_files = [tmpdir, remote_module_path]
if args_file_path:
remote_files.append(args_file_path)
sudoable = True
in_data = None
cmd = ""
if wrap_async and not self._connection.always_pipeline_modules:
# configure, upload, and chmod the async_wrapper module
(async_module_style, shebang, async_module_data, async_module_path) = self._configure_module(
module_name='ansible.legacy.async_wrapper', module_args=dict(), task_vars=task_vars)
async_module_remote_filename = self._connection._shell.get_remote_filename(async_module_path)
remote_async_module_path = self._connection._shell.join_path(tmpdir, async_module_remote_filename)
self._transfer_data(remote_async_module_path, async_module_data)
remote_files.append(remote_async_module_path)
async_limit = self._task.async_val
async_jid = f'j{random.randint(0, 999999999999)}'
# call the interpreter for async_wrapper directly
# this permits use of a script for an interpreter on non-Linux platforms
interpreter = shebang.replace('#!', '').strip()
async_cmd = [interpreter, remote_async_module_path, async_jid, async_limit, remote_module_path]
if environment_string:
async_cmd.insert(0, environment_string)
if args_file_path:
async_cmd.append(args_file_path)
else:
# maintain a fixed number of positional parameters for async_wrapper
async_cmd.append('_')
if not self._should_remove_tmp_path(tmpdir):
async_cmd.append("-preserve_tmp")
cmd = " ".join(to_text(x) for x in async_cmd)
else:
if self._is_pipelining_enabled(module_style):
in_data = module_data
display.vvv("Pipelining is enabled.")
else:
cmd = remote_module_path
cmd = self._connection._shell.build_module_command(environment_string, shebang, cmd, arg_path=args_file_path).strip()
# Fix permissions of the tmpdir path and tmpdir files. This should be called after all
# files have been transferred.
if remote_files:
# remove none/empty
remote_files = [x for x in remote_files if x]
self._fixup_perms2(remote_files, self._get_remote_user())
# actually execute
res = self._low_level_execute_command(cmd, sudoable=sudoable, in_data=in_data)
# parse the main result
data = self._parse_returned_data(res)
# NOTE: INTERNAL KEYS ONLY ACCESSIBLE HERE
# get internal info before cleaning
if data.pop("_ansible_suppress_tmpdir_delete", False):
self._cleanup_remote_tmp = False
# NOTE: yum returns results .. but that made it 'compatible' with squashing, so we allow mappings, for now
if 'results' in data and (not isinstance(data['results'], Sequence) or isinstance(data['results'], string_types)):
data['ansible_module_results'] = data['results']
del data['results']
display.warning("Found internal 'results' key in module return, renamed to 'ansible_module_results'.")
# remove internal keys
remove_internal_keys(data)
if wrap_async:
# async_wrapper will clean up its tmpdir on its own so we want the controller side to
# forget about it now
self._connection._shell.tmpdir = None
# FIXME: for backwards compat, figure out if still makes sense
data['changed'] = True
# pre-split stdout/stderr into lines if needed
if 'stdout' in data and 'stdout_lines' not in data:
# if the value is 'False', a default won't catch it.
txt = data.get('stdout', None) or u''
data['stdout_lines'] = txt.splitlines()
if 'stderr' in data and 'stderr_lines' not in data:
# if the value is 'False', a default won't catch it.
txt = data.get('stderr', None) or u''
data['stderr_lines'] = txt.splitlines()
# propagate interpreter discovery results back to the controller
if self._discovered_interpreter_key:
if data.get('ansible_facts') is None:
data['ansible_facts'] = {}
data['ansible_facts'][self._discovered_interpreter_key] = self._discovered_interpreter
if self._discovery_warnings:
if data.get('warnings') is None:
data['warnings'] = []
data['warnings'].extend(self._discovery_warnings)
if self._discovery_deprecation_warnings:
if data.get('deprecations') is None:
data['deprecations'] = []
data['deprecations'].extend(self._discovery_deprecation_warnings)
# mark the entire module results untrusted as a template right here, since the current action could
# possibly template one of these values.
data = wrap_var(data)
display.debug("done with _execute_module (%s, %s)" % (module_name, module_args))
return data
def _parse_returned_data(self, res):
try:
filtered_output, warnings = _filter_non_json_lines(res.get('stdout', u''), objects_only=True)
for w in warnings:
display.warning(w)
data = json.loads(filtered_output)
if C.MODULE_STRICT_UTF8_RESPONSE and not data.pop('_ansible_trusted_utf8', None):
try:
_validate_utf8_json(data)
except UnicodeEncodeError:
# When removing this, also remove the loop and latin-1 from ansible.module_utils.common.text.converters.jsonify
display.deprecated(
f'Module "{self._task.resolved_action or self._task.action}" returned non UTF-8 data in '
'the JSON response. This will become an error in the future',
version='2.18',
)
data['_ansible_parsed'] = True
except ValueError:
# not valid json, lets try to capture error
data = dict(failed=True, _ansible_parsed=False)
data['module_stdout'] = res.get('stdout', u'')
if 'stderr' in res:
data['module_stderr'] = res['stderr']
if res['stderr'].startswith(u'Traceback'):
data['exception'] = res['stderr']
# in some cases a traceback will arrive on stdout instead of stderr, such as when using ssh with -tt
if 'exception' not in data and data['module_stdout'].startswith(u'Traceback'):
data['exception'] = data['module_stdout']
# The default
data['msg'] = "MODULE FAILURE"
# try to figure out if we are missing interpreter
if self._used_interpreter is not None:
interpreter = re.escape(self._used_interpreter.lstrip('!#'))
match = re.compile('%s: (?:No such file or directory|not found)' % interpreter)
if match.search(data['module_stderr']) or match.search(data['module_stdout']):
data['msg'] = "The module failed to execute correctly, you probably need to set the interpreter."
# always append hint
data['msg'] += '\nSee stdout/stderr for the exact error'
if 'rc' in res:
data['rc'] = res['rc']
return data
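# --- Hedged illustration (not part of the original module) ---
# Sketch of the happy and failure paths above, with made-up payloads:
#   _parse_returned_data({'stdout': '{"changed": true}', 'rc': 0})
#   -> {'changed': True, '_ansible_parsed': True, ...}
#   _parse_returned_data({'stdout': '', 'stderr': 'Traceback ...', 'rc': 1})
#   -> {'failed': True, '_ansible_parsed': False, 'msg': 'MODULE FAILURE\n...'}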
# FIXME: move to connection base
def _low_level_execute_command(self, cmd, sudoable=True, in_data=None, executable=None, encoding_errors='surrogate_then_replace', chdir=None):
'''
This is the function which executes the low level shell command, which
may be commands to create/remove directories for temporary files, or to
run the module code or python directly when pipelining.
:kwarg encoding_errors: If the value returned by the command isn't
utf-8 then we have to figure out how to transform it to unicode.
If the value is just going to be displayed to the user (or
discarded) then the default of 'surrogate_then_replace' is fine. If the data is
used as a key or is going to be written back out to a file
verbatim, then this won't work. May have to use some sort of
replacement strategy (python3 could use surrogateescape)
:kwarg chdir: cd into this directory before executing the command.
'''
display.debug("_low_level_execute_command(): starting")
# if not cmd:
# # this can happen with powershell modules when there is no analog to a Windows command (like chmod)
# display.debug("_low_level_execute_command(): no command, exiting")
# return dict(stdout='', stderr='', rc=254)
if chdir:
display.debug("_low_level_execute_command(): changing cwd to %s for this command" % chdir)
cmd = self._connection._shell.append_command('cd %s' % chdir, cmd)
# https://github.com/ansible/ansible/issues/68054
if executable:
self._connection._shell.executable = executable
ruser = self._get_remote_user()
buser = self.get_become_option('become_user')
if (sudoable and self._connection.become and # if sudoable and have become
resource_from_fqcr(self._connection.transport) != 'network_cli' and # if not using network_cli
(C.BECOME_ALLOW_SAME_USER or (buser != ruser or not any((ruser, buser))))): # if we allow same user PE or users are different and either is set
display.debug("_low_level_execute_command(): using become for this command")
cmd = self._connection.become.build_become_command(cmd, self._connection._shell)
if self._connection.allow_executable:
if executable is None:
executable = self._play_context.executable
# mitigation for SSH race which can drop stdout (https://github.com/ansible/ansible/issues/13876)
# only applied for the default executable to avoid interfering with the raw action
cmd = self._connection._shell.append_command(cmd, 'sleep 0')
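# Illustration (hypothetical command): 'echo hi' ends up roughly as
#   /bin/sh -c 'echo hi; sleep 0'
# so a dropped SSH stdout channel still gets flushed.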
if executable:
cmd = executable + ' -c ' + shlex.quote(cmd)
display.debug("_low_level_execute_command(): executing: %s" % (cmd,))
# Change directory to basedir of task for command execution when connection is local
if self._connection.transport == 'local':
self._connection.cwd = to_bytes(self._loader.get_basedir(), errors='surrogate_or_strict')
rc, stdout, stderr = self._connection.exec_command(cmd, in_data=in_data, sudoable=sudoable)
# stdout and stderr may be either a file-like or a bytes object.
# Convert either one to a text type
if isinstance(stdout, binary_type):
out = to_text(stdout, errors=encoding_errors)
elif not isinstance(stdout, text_type):
out = to_text(b''.join(stdout.readlines()), errors=encoding_errors)
else:
out = stdout
if isinstance(stderr, binary_type):
err = to_text(stderr, errors=encoding_errors)
elif not isinstance(stderr, text_type):
err = to_text(b''.join(stderr.readlines()), errors=encoding_errors)
else:
err = stderr
if rc is None:
rc = 0
# be sure to remove the BECOME-SUCCESS message now
out = self._strip_success_message(out)
display.debug(u"_low_level_execute_command() done: rc=%d, stdout=%s, stderr=%s" % (rc, out, err))
return dict(rc=rc, stdout=out, stdout_lines=out.splitlines(), stderr=err, stderr_lines=err.splitlines())
def _get_diff_data(self, destination, source, task_vars, source_file=True):
# Note: Since we do not diff the source and destination before we transform from bytes into
# text the diff between source and destination may not be accurate. To fix this, we'd need
# to move the diffing from the callback plugins into here.
#
# Example of data which would cause trouble is src_content == b'\xff' and dest_content ==
# b'\xfe'. Neither of those are valid utf-8 so both get turned into the replacement
# character: diff['before'] = u'οΏ½' ; diff['after'] = u'οΏ½' When the callback plugin later
# diffs before and after it shows an empty diff.
diff = {}
display.debug("Going to peek to see if file has changed permissions")
peek_result = self._execute_module(
module_name='ansible.legacy.file', module_args=dict(path=destination, _diff_peek=True),
task_vars=task_vars, persist_files=True)
if peek_result.get('failed', False):
display.warning(u"Failed to get diff between '%s' and '%s': %s" % (os.path.basename(source), destination, to_text(peek_result.get(u'msg', u''))))
return diff
if peek_result.get('rc', 0) == 0:
if peek_result.get('state') in (None, 'absent'):
diff['before'] = u''
elif peek_result.get('appears_binary'):
diff['dst_binary'] = 1
elif peek_result.get('size') and C.MAX_FILE_SIZE_FOR_DIFF > 0 and peek_result['size'] > C.MAX_FILE_SIZE_FOR_DIFF:
diff['dst_larger'] = C.MAX_FILE_SIZE_FOR_DIFF
else:
display.debug(u"Slurping the file %s" % source)
dest_result = self._execute_module(
module_name='ansible.legacy.slurp', module_args=dict(path=destination),
task_vars=task_vars, persist_files=True)
if 'content' in dest_result:
dest_contents = dest_result['content']
if dest_result['encoding'] == u'base64':
dest_contents = base64.b64decode(dest_contents)
else:
raise AnsibleError("unknown encoding in content option, failed: %s" % to_native(dest_result))
diff['before_header'] = destination
diff['before'] = to_text(dest_contents)
if source_file:
st = os.stat(source)
if C.MAX_FILE_SIZE_FOR_DIFF > 0 and st[stat.ST_SIZE] > C.MAX_FILE_SIZE_FOR_DIFF:
diff['src_larger'] = C.MAX_FILE_SIZE_FOR_DIFF
else:
display.debug("Reading local copy of the file %s" % source)
try:
with open(source, 'rb') as src:
src_contents = src.read()
except Exception as e:
raise AnsibleError("Unexpected error while reading source (%s) for diff: %s " % (source, to_native(e)))
if b"\x00" in src_contents:
diff['src_binary'] = 1
else:
diff['after_header'] = source
diff['after'] = to_text(src_contents)
else:
display.debug(u"source of file passed in")
diff['after_header'] = u'dynamically generated'
diff['after'] = source
if self._task.no_log:
if 'before' in diff:
diff["before"] = u""
if 'after' in diff:
diff["after"] = u" [[ Diff output has been hidden because 'no_log: true' was specified for this result ]]\n"
return diff
def _find_needle(self, dirname, needle):
'''
find a needle in a haystack of paths, optionally using 'dirname' as a subdir.
This will build the ordered list of paths to search and pass them to dwim
to get back the first existing file found.
'''
# dwim already deals with playbook basedirs
path_stack = self._task.get_search_path()
# if missing it will return a file not found exception
return self._loader.path_dwim_relative_stack(path_stack, dirname, needle)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,841 |
Add type hints to `ansible.utils.display::Display`
|
### Summary
Add type hints to `ansible.utils.display::Display`
### Issue Type
Feature Idea
### Component Name
lib/ansible/utils/display.py
|
https://github.com/ansible/ansible/issues/80841
|
https://github.com/ansible/ansible/pull/81400
|
48d8e067bf6c947a96750b8a61c7d6ef8cad594b
|
4d409888762ca9ca0ae2d67153be5f21a77f5149
| 2023-05-18T16:31:55Z |
python
| 2023-09-05T19:08:13Z |
changelogs/fragments/80841-display-type-annotation.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,841 |
Add type hints to `ansible.utils.display::Display`
|
### Summary
Add type hints to `ansible.utils.display::Display`
### Issue Type
Feature Idea
### Component Name
lib/ansible/utils/display.py
|
https://github.com/ansible/ansible/issues/80841
|
https://github.com/ansible/ansible/pull/81400
|
48d8e067bf6c947a96750b8a61c7d6ef8cad594b
|
4d409888762ca9ca0ae2d67153be5f21a77f5149
| 2023-05-18T16:31:55Z |
python
| 2023-09-05T19:08:13Z |
lib/ansible/executor/playbook_executor.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
from ansible import constants as C
from ansible import context
from ansible.executor.task_queue_manager import TaskQueueManager, AnsibleEndPlay
from ansible.module_utils.common.text.converters import to_text
from ansible.module_utils.parsing.convert_bool import boolean
from ansible.plugins.loader import become_loader, connection_loader, shell_loader
from ansible.playbook import Playbook
from ansible.template import Templar
from ansible.utils.helpers import pct_to_int
from ansible.utils.collection_loader import AnsibleCollectionConfig
from ansible.utils.collection_loader._collection_finder import _get_collection_name_from_path, _get_collection_playbook_path
from ansible.utils.path import makedirs_safe
from ansible.utils.ssh_functions import set_default_transport
from ansible.utils.display import Display
display = Display()
class PlaybookExecutor:
'''
This is the primary class for executing playbooks, and thus the
basis for bin/ansible-playbook operation.
'''
def __init__(self, playbooks, inventory, variable_manager, loader, passwords):
self._playbooks = playbooks
self._inventory = inventory
self._variable_manager = variable_manager
self._loader = loader
self.passwords = passwords
self._unreachable_hosts = dict()
if context.CLIARGS.get('listhosts') or context.CLIARGS.get('listtasks') or \
context.CLIARGS.get('listtags') or context.CLIARGS.get('syntax'):
self._tqm = None
else:
self._tqm = TaskQueueManager(
inventory=inventory,
variable_manager=variable_manager,
loader=loader,
passwords=self.passwords,
forks=context.CLIARGS.get('forks'),
)
# Note: We run this here to cache whether the default ansible ssh
# executable supports control persist. Sometime in the future we may
# need to enhance this to check that ansible_ssh_executable specified
# in inventory is also cached. We can't do this caching at the point
# where it is used (in task_executor) because that is post-fork and
# therefore would be discarded after every task.
set_default_transport()
def run(self):
'''
Run the given playbook, based on the settings in the play which
may limit the runs to serialized groups, etc.
'''
result = 0
entrylist = []
entry = {}
try:
# preload become/connection/shell to set config defs cached
list(connection_loader.all(class_only=True))
list(shell_loader.all(class_only=True))
list(become_loader.all(class_only=True))
for playbook in self._playbooks:
# deal with FQCN
resource = _get_collection_playbook_path(playbook)
if resource is not None:
playbook_path = resource[1]
playbook_collection = resource[2]
else:
playbook_path = playbook
# not fqcn, but might still be collection playbook
playbook_collection = _get_collection_name_from_path(playbook)
if playbook_collection:
display.v("running playbook inside collection {0}".format(playbook_collection))
AnsibleCollectionConfig.default_collection = playbook_collection
else:
AnsibleCollectionConfig.default_collection = None
pb = Playbook.load(playbook_path, variable_manager=self._variable_manager, loader=self._loader)
# FIXME: move out of inventory self._inventory.set_playbook_basedir(os.path.realpath(os.path.dirname(playbook_path)))
if self._tqm is None: # we are doing a listing
entry = {'playbook': playbook_path}
entry['plays'] = []
else:
# make sure the tqm has callbacks loaded
self._tqm.load_callbacks()
self._tqm.send_callback('v2_playbook_on_start', pb)
i = 1
plays = pb.get_plays()
display.vv(u'%d plays in %s' % (len(plays), to_text(playbook_path)))
for play in plays:
if play._included_path is not None:
self._loader.set_basedir(play._included_path)
else:
self._loader.set_basedir(pb._basedir)
# clear any filters which may have been applied to the inventory
self._inventory.remove_restriction()
# Allow variables to be used in vars_prompt fields.
all_vars = self._variable_manager.get_vars(play=play)
templar = Templar(loader=self._loader, variables=all_vars)
setattr(play, 'vars_prompt', templar.template(play.vars_prompt))
# FIXME: this should be a play 'sub object' like loop_control
if play.vars_prompt:
for var in play.vars_prompt:
vname = var['name']
prompt = var.get("prompt", vname)
default = var.get("default", None)
private = boolean(var.get("private", True))
confirm = boolean(var.get("confirm", False))
encrypt = var.get("encrypt", None)
salt_size = var.get("salt_size", None)
salt = var.get("salt", None)
unsafe = var.get("unsafe", None)
if vname not in self._variable_manager.extra_vars:
if self._tqm:
self._tqm.send_callback('v2_playbook_on_vars_prompt', vname, private, prompt, encrypt, confirm, salt_size, salt,
default, unsafe)
play.vars[vname] = display.do_var_prompt(vname, private, prompt, encrypt, confirm, salt_size, salt, default, unsafe)
else: # we are either in --list-<option> or syntax check
play.vars[vname] = default
# Post validate so any play level variables are templated
all_vars = self._variable_manager.get_vars(play=play)
templar = Templar(loader=self._loader, variables=all_vars)
play.post_validate(templar)
if context.CLIARGS['syntax']:
continue
if self._tqm is None:
# we are just doing a listing
entry['plays'].append(play)
else:
self._tqm._unreachable_hosts.update(self._unreachable_hosts)
previously_failed = len(self._tqm._failed_hosts)
previously_unreachable = len(self._tqm._unreachable_hosts)
break_play = False
# we are actually running plays
batches = self._get_serialized_batches(play)
if len(batches) == 0:
self._tqm.send_callback('v2_playbook_on_play_start', play)
self._tqm.send_callback('v2_playbook_on_no_hosts_matched')
for batch in batches:
# restrict the inventory to the hosts in the serialized batch
self._inventory.restrict_to_hosts(batch)
# and run it...
try:
result = self._tqm.run(play=play)
except AnsibleEndPlay as e:
result = e.result
break
# break the play if the result equals the special return code
if result & self._tqm.RUN_FAILED_BREAK_PLAY != 0:
result = self._tqm.RUN_FAILED_HOSTS
break_play = True
# check the number of failures here, to see if they're above the maximum
# failure percentage allowed, or if any errors are fatal. If either of those
# conditions are met, we break out, otherwise we only break out if the entire
# batch failed
failed_hosts_count = len(self._tqm._failed_hosts) + len(self._tqm._unreachable_hosts) - \
(previously_failed + previously_unreachable)
if len(batch) == failed_hosts_count:
break_play = True
break
# update the previous counts so they don't accumulate incorrectly
# over multiple serial batches
previously_failed += len(self._tqm._failed_hosts) - previously_failed
previously_unreachable += len(self._tqm._unreachable_hosts) - previously_unreachable
# save the unreachable hosts from this batch
self._unreachable_hosts.update(self._tqm._unreachable_hosts)
if break_play:
break
i = i + 1 # per play
if entry:
entrylist.append(entry) # per playbook
# send the stats callback for this playbook
if self._tqm is not None:
if C.RETRY_FILES_ENABLED:
retries = set(self._tqm._failed_hosts.keys())
retries.update(self._tqm._unreachable_hosts.keys())
retries = sorted(retries)
if len(retries) > 0:
if C.RETRY_FILES_SAVE_PATH:
basedir = C.RETRY_FILES_SAVE_PATH
elif playbook_path:
basedir = os.path.dirname(os.path.abspath(playbook_path))
else:
basedir = '~/'
(retry_name, ext) = os.path.splitext(os.path.basename(playbook_path))
filename = os.path.join(basedir, "%s.retry" % retry_name)
if self._generate_retry_inventory(filename, retries):
display.display("\tto retry, use: --limit @%s\n" % filename)
self._tqm.send_callback('v2_playbook_on_stats', self._tqm._stats)
# if the last result wasn't zero, break out of the playbook file name loop
if result != 0:
break
if entrylist:
return entrylist
finally:
if self._tqm is not None:
self._tqm.cleanup()
if self._loader:
self._loader.cleanup_all_tmp_files()
if context.CLIARGS['syntax']:
display.display("No issues encountered")
return result
if context.CLIARGS['start_at_task'] and not self._tqm._start_at_done:
display.error(
"No matching task \"%s\" found."
" Note: --start-at-task can only follow static includes."
% context.CLIARGS['start_at_task']
)
return result
def _get_serialized_batches(self, play):
'''
Returns a list of hosts, subdivided into batches based on
the serial size specified in the play.
'''
# make sure we have a unique list of hosts
all_hosts = self._inventory.get_hosts(play.hosts, order=play.order)
all_hosts_len = len(all_hosts)
# the serial value can be listed as a scalar or a list of
# scalars, so we make sure it's a list here
serial_batch_list = play.serial
if len(serial_batch_list) == 0:
serial_batch_list = [-1]
cur_item = 0
serialized_batches = []
while len(all_hosts) > 0:
# get the serial value from current item in the list
serial = pct_to_int(serial_batch_list[cur_item], all_hosts_len)
# if the serial count was not specified or is invalid, default to
# a list of all hosts, otherwise grab a chunk of the hosts equal
# to the current serial item size
if serial <= 0:
serialized_batches.append(all_hosts)
break
else:
play_hosts = []
for x in range(serial):
if len(all_hosts) > 0:
play_hosts.append(all_hosts.pop(0))
serialized_batches.append(play_hosts)
# increment the current batch list item number, and if we've hit
# the end keep using the last element until we've consumed all of
# the hosts in the inventory
cur_item += 1
if cur_item > len(serial_batch_list) - 1:
cur_item = len(serial_batch_list) - 1
return serialized_batches
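# --- Hedged illustration (not part of the original class) ---
# A standalone sketch of the batching rule above: sizes are consumed in
# order and the last entry is reused until the hosts run out. Unlike the
# real method, this hypothetical helper takes absolute counts only (the
# method also accepts percentages via pct_to_int()).
def _example_batches(hosts, sizes):
    hosts, batches, i = list(hosts), [], 0
    while hosts:
        n = sizes[min(i, len(sizes) - 1)]
        if n <= 0:  # unspecified/invalid serial: one batch with everything
            batches.append(hosts)
            break
        batches.append(hosts[:n])
        hosts = hosts[n:]
        i += 1
    return batches
# _example_batches(['h1', 'h2', 'h3', 'h4', 'h5'], [1, 2])
# -> [['h1'], ['h2', 'h3'], ['h4', 'h5']]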
def _generate_retry_inventory(self, retry_path, replay_hosts):
'''
Called when a playbook run fails. It generates an inventory which allows
re-running on ONLY the failed hosts. This may duplicate some variable
information in group_vars/host_vars but that is ok, and expected.
'''
try:
makedirs_safe(os.path.dirname(retry_path))
with open(retry_path, 'w') as fd:
for x in replay_hosts:
fd.write("%s\n" % x)
except Exception as e:
display.warning("Could not create retry file '%s'.\n\t%s" % (retry_path, to_text(e)))
return False
return True
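# --- Hedged illustration (not part of the original class) ---
# For replay_hosts=['web1', 'db1'] (hypothetical) the retry file simply
# contains one hostname per line:
#   web1
#   db1
# and can be fed back in via: ansible-playbook site.yml --limit @site.retry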
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,841 |
Add type hints to `ansible.utils.display::Display`
|
### Summary
Add type hints to `ansible.utils.display::Display`
### Issue Type
Feature Idea
### Component Name
lib/ansible/utils/display.py
|
https://github.com/ansible/ansible/issues/80841
|
https://github.com/ansible/ansible/pull/81400
|
48d8e067bf6c947a96750b8a61c7d6ef8cad594b
|
4d409888762ca9ca0ae2d67153be5f21a77f5149
| 2023-05-18T16:31:55Z |
python
| 2023-09-05T19:08:13Z |
lib/ansible/playbook/play_context.py
|
# -*- coding: utf-8 -*-
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from ansible import constants as C
from ansible import context
from ansible.playbook.attribute import FieldAttribute
from ansible.playbook.base import Base
from ansible.utils.display import Display
display = Display()
__all__ = ['PlayContext']
TASK_ATTRIBUTE_OVERRIDES = (
'become',
'become_user',
'become_pass',
'become_method',
'become_flags',
'connection',
'docker_extra_args', # TODO: remove
'delegate_to',
'no_log',
'remote_user',
)
RESET_VARS = (
'ansible_connection',
'ansible_user',
'ansible_host',
'ansible_port',
# TODO: ???
'ansible_docker_extra_args',
'ansible_ssh_host',
'ansible_ssh_pass',
'ansible_ssh_port',
'ansible_ssh_user',
'ansible_ssh_private_key_file',
'ansible_ssh_pipelining',
'ansible_ssh_executable',
)
class PlayContext(Base):
'''
This class is used to consolidate the connection information for
hosts in a play and child tasks, where the task may override some
connection/authentication information.
'''
# base
module_compression = FieldAttribute(isa='string', default=C.DEFAULT_MODULE_COMPRESSION)
shell = FieldAttribute(isa='string')
executable = FieldAttribute(isa='string', default=C.DEFAULT_EXECUTABLE)
# connection fields, some are inherited from Base:
# (connection, port, remote_user, environment, no_log)
remote_addr = FieldAttribute(isa='string')
password = FieldAttribute(isa='string')
timeout = FieldAttribute(isa='int', default=C.DEFAULT_TIMEOUT)
connection_user = FieldAttribute(isa='string')
private_key_file = FieldAttribute(isa='string', default=C.DEFAULT_PRIVATE_KEY_FILE)
pipelining = FieldAttribute(isa='bool', default=C.ANSIBLE_PIPELINING)
# networking modules
network_os = FieldAttribute(isa='string')
# docker FIXME: remove these
docker_extra_args = FieldAttribute(isa='string')
# ???
connection_lockfd = FieldAttribute(isa='int')
# privilege escalation fields
become = FieldAttribute(isa='bool')
become_method = FieldAttribute(isa='string')
become_user = FieldAttribute(isa='string')
become_pass = FieldAttribute(isa='string')
become_exe = FieldAttribute(isa='string', default=C.DEFAULT_BECOME_EXE)
become_flags = FieldAttribute(isa='string', default=C.DEFAULT_BECOME_FLAGS)
prompt = FieldAttribute(isa='string')
# general flags
only_tags = FieldAttribute(isa='set', default=set)
skip_tags = FieldAttribute(isa='set', default=set)
start_at_task = FieldAttribute(isa='string')
step = FieldAttribute(isa='bool', default=False)
# "PlayContext.force_handlers should not be used, the calling code should be using play itself instead"
force_handlers = FieldAttribute(isa='bool', default=False)
@property
def verbosity(self):
display.deprecated(
"PlayContext.verbosity is deprecated, use ansible.utils.display.Display.verbosity instead.",
version=2.18
)
return self._internal_verbosity
@verbosity.setter
def verbosity(self, value):
display.deprecated(
"PlayContext.verbosity is deprecated, use ansible.utils.display.Display.verbosity instead.",
version=2.18
)
self._internal_verbosity = value
def __init__(self, play=None, passwords=None, connection_lockfd=None):
# Note: play is really not optional. The only time it could be omitted is when we create
# a PlayContext just so we can invoke its deserialize method to load it from a serialized
# data source.
super(PlayContext, self).__init__()
if passwords is None:
passwords = {}
self.password = passwords.get('conn_pass', '')
self.become_pass = passwords.get('become_pass', '')
self._become_plugin = None
self.prompt = ''
self.success_key = ''
# a file descriptor to be used during locking operations
self.connection_lockfd = connection_lockfd
# set options before play to allow play to override them
if context.CLIARGS:
self.set_attributes_from_cli()
else:
self._internal_verbosity = 0
if play:
self.set_attributes_from_play(play)
def set_attributes_from_plugin(self, plugin):
# generic options derived from the connection plugin; temporary for backwards compat, as in the end we should not set play_context properties
# get options for plugins
options = C.config.get_configuration_definitions(plugin.plugin_type, plugin._load_name)
for option in options:
if option:
flag = options[option].get('name')
if flag:
setattr(self, flag, plugin.get_option(flag))
def set_attributes_from_play(self, play):
self.force_handlers = play.force_handlers
def set_attributes_from_cli(self):
'''
Configures this connection information instance with data from
options specified by the user on the command line. These have a
lower precedence than those set on the play or host.
'''
if context.CLIARGS.get('timeout', False):
self.timeout = int(context.CLIARGS['timeout'])
# From the command line. These should probably be used directly by plugins instead
# For now, they are likely to be moved to FieldAttribute defaults
self.private_key_file = context.CLIARGS.get('private_key_file') # Else default
self._internal_verbosity = context.CLIARGS.get('verbosity') # Else default
# Not every cli that uses PlayContext has these command line args so have a default
self.start_at_task = context.CLIARGS.get('start_at_task', None) # Else default
def set_task_and_variable_override(self, task, variables, templar):
'''
Sets attributes from the task if they are set, which will override
those from the play.
:arg task: the task object with the parameters that were set on it
:arg variables: variables from inventory
:arg templar: templar instance if templating variables is needed
'''
new_info = self.copy()
# loop through a subset of attributes on the task object and set
# connection fields based on their values
for attr in TASK_ATTRIBUTE_OVERRIDES:
if (attr_val := getattr(task, attr, None)) is not None:
setattr(new_info, attr, attr_val)
# next, use the MAGIC_VARIABLE_MAPPING dictionary to update this
# connection info object with 'magic' variables from the variable list.
# If the value 'ansible_delegated_vars' is in the variables, it means
# we have a delegated-to host, so we check there first before looking
# at the variables in general
if task.delegate_to is not None:
# In the case of a loop, the delegated_to host may have been
# templated based on the loop variable, so we try and locate
# the host name in the delegated variable dictionary here
delegated_host_name = templar.template(task.delegate_to)
delegated_vars = variables.get('ansible_delegated_vars', dict()).get(delegated_host_name, dict())
delegated_transport = C.DEFAULT_TRANSPORT
for transport_var in C.MAGIC_VARIABLE_MAPPING.get('connection'):
if transport_var in delegated_vars:
delegated_transport = delegated_vars[transport_var]
break
# make sure this delegated_to host has something set for its remote
# address, otherwise we default to connecting to it by name. This
# may happen when users put an IP entry into their inventory, or if
# they rely on DNS for a non-inventory hostname
for address_var in ('ansible_%s_host' % delegated_transport,) + C.MAGIC_VARIABLE_MAPPING.get('remote_addr'):
if address_var in delegated_vars:
break
else:
display.debug("no remote address found for delegated host %s\nusing its name, so success depends on DNS resolution" % delegated_host_name)
delegated_vars['ansible_host'] = delegated_host_name
# reset the port back to the default if none was specified, to prevent
# the delegated host from inheriting the original host's setting
for port_var in ('ansible_%s_port' % delegated_transport,) + C.MAGIC_VARIABLE_MAPPING.get('port'):
if port_var in delegated_vars:
break
else:
if delegated_transport == 'winrm':
delegated_vars['ansible_port'] = 5986
else:
delegated_vars['ansible_port'] = C.DEFAULT_REMOTE_PORT
# and likewise for the remote user
for user_var in ('ansible_%s_user' % delegated_transport,) + C.MAGIC_VARIABLE_MAPPING.get('remote_user'):
if user_var in delegated_vars and delegated_vars[user_var]:
break
else:
delegated_vars['ansible_user'] = task.remote_user or self.remote_user
else:
delegated_vars = dict()
# setup shell
for exe_var in C.MAGIC_VARIABLE_MAPPING.get('executable'):
if exe_var in variables:
setattr(new_info, 'executable', variables.get(exe_var))
attrs_considered = []
for (attr, variable_names) in C.MAGIC_VARIABLE_MAPPING.items():
for variable_name in variable_names:
if attr in attrs_considered:
continue
# if this is a delegation task, ONLY use the delegated host's vars; avoid the vars of the host it was delegated FOR
if task.delegate_to is not None:
if isinstance(delegated_vars, dict) and variable_name in delegated_vars:
setattr(new_info, attr, delegated_vars[variable_name])
attrs_considered.append(attr)
elif variable_name in variables:
setattr(new_info, attr, variables[variable_name])
attrs_considered.append(attr)
# no else, as no other vars should be considered
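# Illustration (assumed mapping shape): MAGIC_VARIABLE_MAPPING pairs an
# attribute with its candidate variable names, e.g. roughly
#   'remote_user': ('ansible_user', 'ansible_ssh_user', ...)
# and the first candidate found in the (delegated) vars wins per attribute.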
# become legacy updates -- from inventory file (inventory overrides
# commandline)
for become_pass_name in C.MAGIC_VARIABLE_MAPPING.get('become_pass'):
if become_pass_name in variables:
break
# make sure we get port defaults if needed
if new_info.port is None and C.DEFAULT_REMOTE_PORT is not None:
new_info.port = int(C.DEFAULT_REMOTE_PORT)
# special overrides for the connection setting
if len(delegated_vars) > 0:
# in the event that we were using local before make sure to reset the
# connection type to the default transport for the delegated-to host,
# if not otherwise specified
for connection_type in C.MAGIC_VARIABLE_MAPPING.get('connection'):
if connection_type in delegated_vars:
break
else:
remote_addr_local = new_info.remote_addr in C.LOCALHOST
inv_hostname_local = delegated_vars.get('inventory_hostname') in C.LOCALHOST
if remote_addr_local and inv_hostname_local:
setattr(new_info, 'connection', 'local')
elif getattr(new_info, 'connection', None) == 'local' and (not remote_addr_local or not inv_hostname_local):
setattr(new_info, 'connection', C.DEFAULT_TRANSPORT)
# we store the original in 'connection_user' for use by network/other modules that fall back to it as the login user
# connection_user is to be deprecated once connection=local is removed, as local resets remote_user
if new_info.connection == 'local':
if not new_info.connection_user:
new_info.connection_user = new_info.remote_user
# for the case in which a connection plugin still uses pc.remote_addr and, in its own options,
# specifies 'default: inventory_hostname' but it was never added to vars:
if new_info.remote_addr == 'inventory_hostname':
new_info.remote_addr = variables.get('inventory_hostname')
display.warning('The "%s" connection plugin has an improperly configured remote target value, '
'forcing "inventory_hostname" templated value instead of the string' % new_info.connection)
# set no_log to default if it was not previously set
if new_info.no_log is None:
new_info.no_log = C.DEFAULT_NO_LOG
if task.check_mode is not None:
new_info.check_mode = task.check_mode
if task.diff is not None:
new_info.diff = task.diff
return new_info
def set_become_plugin(self, plugin):
self._become_plugin = plugin
def update_vars(self, variables):
'''
Adds 'magic' connection-related variables to the variable dictionary provided,
in case users need to access them from the play. This is a legacy from runner.
'''
for prop, var_list in C.MAGIC_VARIABLE_MAPPING.items():
try:
if 'become' in prop:
continue
var_val = getattr(self, prop)
for var_opt in var_list:
if var_opt not in variables and var_val is not None:
variables[var_opt] = var_val
except AttributeError:
continue
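# --- Hedged illustration (not part of the original class) ---
# After update_vars(variables), the dictionary gains entries such as
# ansible_host/ansible_user mirroring this PlayContext (values hypothetical),
# but only for names the user has not already set; become_* vars are skipped.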
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,841 |
Add type hints to `ansible.utils.display::Display`
|
### Summary
Add type hints to `ansible.utils.display::Display`
### Issue Type
Feature Idea
### Component Name
lib/ansible/utils/display.py
|
https://github.com/ansible/ansible/issues/80841
|
https://github.com/ansible/ansible/pull/81400
|
48d8e067bf6c947a96750b8a61c7d6ef8cad594b
|
4d409888762ca9ca0ae2d67153be5f21a77f5149
| 2023-05-18T16:31:55Z |
python
| 2023-09-05T19:08:13Z |
lib/ansible/plugins/connection/ssh.py
|
# Copyright (c) 2012, Michael DeHaan <[email protected]>
# Copyright 2015 Abhijit Menon-Sen <[email protected]>
# Copyright 2017 Toshio Kuratomi <[email protected]>
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (annotations, absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
name: ssh
short_description: connect via SSH client binary
description:
- This connection plugin allows Ansible to communicate to the target machines through normal SSH command line.
- Ansible does not expose a channel to allow communication between the user and the SSH process to accept
a password manually to decrypt an SSH key when using this connection plugin (which is the default). The
use of C(ssh-agent) is highly recommended.
author: ansible (@core)
extends_documentation_fragment:
- connection_pipelining
version_added: historical
notes:
- Many options default to V(None) here but that only means we do not override the SSH tool's defaults and/or configuration.
For example, if you specify the port in this plugin it will override any C(Port) entry in your C(.ssh/config).
- The ssh CLI tool uses return code 255 as a 'connection error', this can conflict with commands/tools that
also return 255 as an error code and will look like an 'unreachable' condition or 'connection error' to this plugin.
options:
host:
description: Hostname/IP to connect to.
default: inventory_hostname
type: string
vars:
- name: inventory_hostname
- name: ansible_host
- name: ansible_ssh_host
- name: delegated_vars['ansible_host']
- name: delegated_vars['ansible_ssh_host']
host_key_checking:
description: Determines if SSH should check host keys.
default: True
type: boolean
ini:
- section: defaults
key: 'host_key_checking'
- section: ssh_connection
key: 'host_key_checking'
version_added: '2.5'
env:
- name: ANSIBLE_HOST_KEY_CHECKING
- name: ANSIBLE_SSH_HOST_KEY_CHECKING
version_added: '2.5'
vars:
- name: ansible_host_key_checking
version_added: '2.5'
- name: ansible_ssh_host_key_checking
version_added: '2.5'
password:
description: Authentication password for the O(remote_user). Can be supplied as CLI option.
type: string
vars:
- name: ansible_password
- name: ansible_ssh_pass
- name: ansible_ssh_password
sshpass_prompt:
description:
- Password prompt that sshpass should search for. Supported by sshpass 1.06 and up.
- Defaults to C(Enter PIN for) when pkcs11_provider is set.
default: ''
type: string
ini:
- section: 'ssh_connection'
key: 'sshpass_prompt'
env:
- name: ANSIBLE_SSHPASS_PROMPT
vars:
- name: ansible_sshpass_prompt
version_added: '2.10'
ssh_args:
description: Arguments to pass to all SSH CLI tools.
default: '-C -o ControlMaster=auto -o ControlPersist=60s'
type: string
ini:
- section: 'ssh_connection'
key: 'ssh_args'
env:
- name: ANSIBLE_SSH_ARGS
vars:
- name: ansible_ssh_args
version_added: '2.7'
ssh_common_args:
description: Common extra args for all SSH CLI tools.
type: string
ini:
- section: 'ssh_connection'
key: 'ssh_common_args'
version_added: '2.7'
env:
- name: ANSIBLE_SSH_COMMON_ARGS
version_added: '2.7'
vars:
- name: ansible_ssh_common_args
cli:
- name: ssh_common_args
default: ''
ssh_executable:
default: ssh
description:
- This defines the location of the SSH binary. It defaults to V(ssh) which will use the first SSH binary available in $PATH.
- This option is usually not required, it might be useful when access to system SSH is restricted,
or when using SSH wrappers to connect to remote hosts.
type: string
env: [{name: ANSIBLE_SSH_EXECUTABLE}]
ini:
- {key: ssh_executable, section: ssh_connection}
#const: ANSIBLE_SSH_EXECUTABLE
version_added: "2.2"
vars:
- name: ansible_ssh_executable
version_added: '2.7'
sftp_executable:
default: sftp
description:
- This defines the location of the sftp binary. It defaults to V(sftp) which will use the first binary available in $PATH.
type: string
env: [{name: ANSIBLE_SFTP_EXECUTABLE}]
ini:
- {key: sftp_executable, section: ssh_connection}
version_added: "2.6"
vars:
- name: ansible_sftp_executable
version_added: '2.7'
scp_executable:
default: scp
description:
- This defines the location of the scp binary. It defaults to V(scp) which will use the first binary available in $PATH.
type: string
env: [{name: ANSIBLE_SCP_EXECUTABLE}]
ini:
- {key: scp_executable, section: ssh_connection}
version_added: "2.6"
vars:
- name: ansible_scp_executable
version_added: '2.7'
scp_extra_args:
description: Extra arguments exclusive to the C(scp) CLI.
type: string
vars:
- name: ansible_scp_extra_args
env:
- name: ANSIBLE_SCP_EXTRA_ARGS
version_added: '2.7'
ini:
- key: scp_extra_args
section: ssh_connection
version_added: '2.7'
cli:
- name: scp_extra_args
default: ''
sftp_extra_args:
description: Extra arguments exclusive to the C(sftp) CLI.
type: string
vars:
- name: ansible_sftp_extra_args
env:
- name: ANSIBLE_SFTP_EXTRA_ARGS
version_added: '2.7'
ini:
- key: sftp_extra_args
section: ssh_connection
version_added: '2.7'
cli:
- name: sftp_extra_args
default: ''
ssh_extra_args:
description: Extra arguments exclusive to the SSH CLI.
type: string
vars:
- name: ansible_ssh_extra_args
env:
- name: ANSIBLE_SSH_EXTRA_ARGS
version_added: '2.7'
ini:
- key: ssh_extra_args
section: ssh_connection
version_added: '2.7'
cli:
- name: ssh_extra_args
default: ''
reconnection_retries:
description:
- Number of attempts to connect.
- Ansible retries connections only if it gets an SSH error with a return code of 255.
- Any errors with return codes other than 255 indicate an issue with program execution.
default: 0
type: integer
env:
- name: ANSIBLE_SSH_RETRIES
ini:
- section: connection
key: retries
- section: ssh_connection
key: retries
vars:
- name: ansible_ssh_retries
version_added: '2.7'
port:
description: Remote port to connect to.
type: int
ini:
- section: defaults
key: remote_port
env:
- name: ANSIBLE_REMOTE_PORT
vars:
- name: ansible_port
- name: ansible_ssh_port
keyword:
- name: port
remote_user:
description:
- User name with which to login to the remote server, normally set by the remote_user keyword.
- If no user is supplied, Ansible will let the SSH client binary choose the user as it normally would.
type: string
ini:
- section: defaults
key: remote_user
env:
- name: ANSIBLE_REMOTE_USER
vars:
- name: ansible_user
- name: ansible_ssh_user
cli:
- name: user
keyword:
- name: remote_user
pipelining:
env:
- name: ANSIBLE_PIPELINING
- name: ANSIBLE_SSH_PIPELINING
ini:
- section: defaults
key: pipelining
- section: connection
key: pipelining
- section: ssh_connection
key: pipelining
vars:
- name: ansible_pipelining
- name: ansible_ssh_pipelining
private_key_file:
description:
- Path to private key file to use for authentication.
type: string
ini:
- section: defaults
key: private_key_file
env:
- name: ANSIBLE_PRIVATE_KEY_FILE
vars:
- name: ansible_private_key_file
- name: ansible_ssh_private_key_file
cli:
- name: private_key_file
option: '--private-key'
control_path:
description:
- This is the location to save SSH's ControlPath sockets, it uses SSH's variable substitution.
- Since 2.3, if null (default), ansible will generate a unique hash. Use ``%(directory)s`` to indicate where to use the control dir path setting.
- Before 2.3 it defaulted to ``control_path=%(directory)s/ansible-ssh-%%h-%%p-%%r``.
- Be aware that this setting is ignored if C(-o ControlPath) is set in ssh args.
type: string
env:
- name: ANSIBLE_SSH_CONTROL_PATH
ini:
- key: control_path
section: ssh_connection
vars:
- name: ansible_control_path
version_added: '2.7'
control_path_dir:
default: ~/.ansible/cp
description:
- This sets the directory to use for ssh control path if the control path setting is null.
- Also, provides the ``%(directory)s`` variable for the control path setting.
type: string
env:
- name: ANSIBLE_SSH_CONTROL_PATH_DIR
ini:
- section: ssh_connection
key: control_path_dir
vars:
- name: ansible_control_path_dir
version_added: '2.7'
sftp_batch_mode:
default: true
description: Determines whether to use SSH's sftp batch mode (C(-b -)), which allows failed transfers to be detected reliably. May need to be disabled if the client side does not support the option.
env: [{name: ANSIBLE_SFTP_BATCH_MODE}]
ini:
- {key: sftp_batch_mode, section: ssh_connection}
type: bool
vars:
- name: ansible_sftp_batch_mode
version_added: '2.7'
ssh_transfer_method:
description:
- "Preferred method to use when transferring files over ssh"
- Setting to 'smart' (default) will try them in order, until one succeeds or they all fail
- For OpenSSH >=9.0 you must add an additional option to enable scp (scp_extra_args="-O")
- Using 'piped' creates an ssh pipe with C(dd) on either side to copy the data
choices: ['sftp', 'scp', 'piped', 'smart']
type: string
env: [{name: ANSIBLE_SSH_TRANSFER_METHOD}]
ini:
- {key: transfer_method, section: ssh_connection}
vars:
- name: ansible_ssh_transfer_method
version_added: '2.12'
scp_if_ssh:
deprecated:
why: In favor of the O(ssh_transfer_method) option.
version: "2.17"
alternatives: O(ssh_transfer_method)
default: smart
description:
- "Preferred method to use when transferring files over SSH."
- When set to V(smart), Ansible will try them until one succeeds or they all fail.
- If set to V(True), it will force 'scp', if V(False) it will use 'sftp'.
- For OpenSSH >=9.0 you must add an additional option to enable scp (C(scp_extra_args="-O"))
- This setting will be overridden by O(ssh_transfer_method) if set.
env: [{name: ANSIBLE_SCP_IF_SSH}]
ini:
- {key: scp_if_ssh, section: ssh_connection}
vars:
- name: ansible_scp_if_ssh
version_added: '2.7'
use_tty:
version_added: '2.5'
default: true
description: Add C(-tt) to ssh commands to force tty allocation.
env: [{name: ANSIBLE_SSH_USETTY}]
ini:
- {key: usetty, section: ssh_connection}
type: bool
vars:
- name: ansible_ssh_use_tty
version_added: '2.7'
timeout:
default: 10
description:
- This is the default amount of time we will wait while establishing an SSH connection.
- It also controls how long we can wait before reading from the connection once it is established (select on the socket).
env:
- name: ANSIBLE_TIMEOUT
- name: ANSIBLE_SSH_TIMEOUT
version_added: '2.11'
ini:
- key: timeout
section: defaults
- key: timeout
section: ssh_connection
version_added: '2.11'
vars:
- name: ansible_ssh_timeout
version_added: '2.11'
cli:
- name: timeout
type: integer
pkcs11_provider:
version_added: '2.12'
default: ""
type: string
description:
- "PKCS11 SmartCard provider such as opensc, example: /usr/local/lib/opensc-pkcs11.so"
- Requires sshpass version 1.06+, sshpass must support the -P option.
env: [{name: ANSIBLE_PKCS11_PROVIDER}]
ini:
- {key: pkcs11_provider, section: ssh_connection}
vars:
- name: ansible_ssh_pkcs11_provider
'''
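# For orientation: the vars: entries documented above let inventory tune this
# plugin per host, e.g. (values illustrative only):
#   ansible_ssh_args: '-o ControlMaster=auto -o ControlPersist=120s'
#   ansible_ssh_transfer_method: sftp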
import collections.abc as c
import errno
import fcntl
import hashlib
import io
import os
import pty
import re
import shlex
import subprocess
import time
import typing as t
from functools import wraps
from ansible.errors import (
AnsibleAuthenticationFailure,
AnsibleConnectionFailure,
AnsibleError,
AnsibleFileNotFound,
)
from ansible.errors import AnsibleOptionsError
from ansible.module_utils.compat import selectors
from ansible.module_utils.six import PY3, text_type, binary_type
from ansible.module_utils.common.text.converters import to_bytes, to_native, to_text
from ansible.module_utils.parsing.convert_bool import BOOLEANS, boolean
from ansible.plugins.connection import ConnectionBase, BUFSIZE
from ansible.plugins.shell.powershell import _parse_clixml
from ansible.utils.display import Display
from ansible.utils.path import unfrackpath, makedirs_safe
display = Display()
P = t.ParamSpec('P')
# error messages that indicate 255 return code is not from ssh itself.
b_NOT_SSH_ERRORS = (b'Traceback (most recent call last):', # Python-2.6 when there's an exception
# while invoking a script via -m
b'PHP Parse error:', # Php always returns with error
b'chmod: invalid mode', # chmod, but really only on AIX
b'chmod: A flag or octal number is not correct.', # chmod, other AIX
)
SSHPASS_AVAILABLE = None
SSH_DEBUG = re.compile(r'^debug\d+: .*')
class AnsibleControlPersistBrokenPipeError(AnsibleError):
''' ControlPersist broken pipe '''
pass
def _handle_error(
remaining_retries: int,
command: bytes,
return_tuple: tuple[int, bytes, bytes],
no_log: bool,
host: str,
display: Display = display,
) -> None:
# sshpass errors
if command == b'sshpass':
# Error 5 is invalid/incorrect password. Raise an exception to prevent retries from locking the account.
if return_tuple[0] == 5:
msg = 'Invalid/incorrect username/password. Skipping remaining {0} retries to prevent account lockout:'.format(remaining_retries)
if remaining_retries <= 0:
msg = 'Invalid/incorrect password:'
if no_log:
msg = '{0} <error censored due to no log>'.format(msg)
else:
msg = '{0} {1}'.format(msg, to_native(return_tuple[2]).rstrip())
raise AnsibleAuthenticationFailure(msg)
# sshpass return codes are 1-6. We handled 5 above, so this catches the other scenarios.
# No exception is raised, so the connection is retried - except when attempting to use
# sshpass_prompt with an sshpass that won't let us pass -P, in which case we fail loudly.
elif return_tuple[0] in [1, 2, 3, 4, 6]:
msg = 'sshpass error:'
if no_log:
msg = '{0} <error censored due to no log>'.format(msg)
else:
details = to_native(return_tuple[2]).rstrip()
if "sshpass: invalid option -- 'P'" in details:
details = 'Installed sshpass version does not support customized password prompts. ' \
'Upgrade sshpass to use sshpass_prompt, or otherwise switch to ssh keys.'
raise AnsibleError('{0} {1}'.format(msg, details))
msg = '{0} {1}'.format(msg, details)
if return_tuple[0] == 255:
SSH_ERROR = True
for signature in b_NOT_SSH_ERRORS:
# 1 == stdout, 2 == stderr
if signature in return_tuple[1] or signature in return_tuple[2]:
SSH_ERROR = False
break
if SSH_ERROR:
msg = "Failed to connect to the host via ssh:"
if no_log:
msg = '{0} <error censored due to no log>'.format(msg)
else:
msg = '{0} {1}'.format(msg, to_native(return_tuple[2]).rstrip())
raise AnsibleConnectionFailure(msg)
# For other errors, no exception is raised so the connection is retried and we only log the messages
if 1 <= return_tuple[0] <= 254:
msg = u"Failed to connect to the host via ssh:"
if no_log:
msg = u'{0} <error censored due to no log>'.format(msg)
else:
msg = u'{0} {1}'.format(msg, to_text(return_tuple[2]).rstrip())
display.vvv(msg, host=host)
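# A minimal sketch of how _handle_error is driven (tuple values fabricated to
# show the (rc, stdout, stderr) shape it expects):
#   rt = (255, b'', b'ssh: connect to host example port 22: Connection refused')
#   _handle_error(2, b'ssh', rt, no_log=False, host='example')
#   # -> raises AnsibleConnectionFailure, which _ssh_retry treats as retryable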
def _ssh_retry(
func: c.Callable[t.Concatenate[Connection, P], tuple[int, bytes, bytes]],
) -> c.Callable[t.Concatenate[Connection, P], tuple[int, bytes, bytes]]:
"""
Decorator to retry ssh/scp/sftp in the case of a connection failure
Will retry if:
* an exception is caught
* ssh returns 255
Will not retry if
* sshpass returns 5 (invalid password, to prevent account lockouts)
* remaining_tries is < 2
* retries limit reached
"""
@wraps(func)
def wrapped(self: Connection, *args: P.args, **kwargs: P.kwargs) -> tuple[int, bytes, bytes]:
remaining_tries = int(self.get_option('reconnection_retries')) + 1
cmd_summary = u"%s..." % to_text(args[0])
conn_password = self.get_option('password') or self._play_context.password
for attempt in range(remaining_tries):
cmd = t.cast(list[bytes], args[0])
if attempt != 0 and conn_password and isinstance(cmd, list):
# If this is a retry, the fd/pipe for sshpass is closed, and we need a new one
self.sshpass_pipe = os.pipe()
cmd[1] = b'-d' + to_bytes(self.sshpass_pipe[0], nonstring='simplerepr', errors='surrogate_or_strict')
try:
try:
return_tuple = func(self, *args, **kwargs)
# TODO: this should come from task
if self._play_context.no_log:
display.vvv(u'rc=%s, stdout and stderr censored due to no log' % return_tuple[0], host=self.host)
else:
display.vvv(return_tuple, host=self.host)
# 0 = success
# 1-254 = remote command return code
# 255 could be a failure from the ssh command itself
except (AnsibleControlPersistBrokenPipeError):
# Retry one more time because of the ControlPersist broken pipe (see #16731)
cmd = t.cast(list[bytes], args[0])
if conn_password and isinstance(cmd, list):
# This is a retry, so the fd/pipe for sshpass is closed, and we need a new one
self.sshpass_pipe = os.pipe()
cmd[1] = b'-d' + to_bytes(self.sshpass_pipe[0], nonstring='simplerepr', errors='surrogate_or_strict')
display.vvv(u"RETRYING BECAUSE OF CONTROLPERSIST BROKEN PIPE")
return_tuple = func(self, *args, **kwargs)
remaining_retries = remaining_tries - attempt - 1
_handle_error(remaining_retries, cmd[0], return_tuple, self._play_context.no_log, self.host)
break
# 5 = Invalid/incorrect password from sshpass
except AnsibleAuthenticationFailure:
# Raising this exception, which is subclassed from AnsibleConnectionFailure, prevents further retries
raise
except (AnsibleConnectionFailure, Exception) as e:
if attempt == remaining_tries - 1:
raise
else:
pause = 2 ** attempt - 1
if pause > 30:
pause = 30
if isinstance(e, AnsibleConnectionFailure):
msg = u"ssh_retry: attempt: %d, ssh return code is 255. cmd (%s), pausing for %d seconds" % (attempt + 1, cmd_summary, pause)
else:
msg = (u"ssh_retry: attempt: %d, caught exception(%s) from cmd (%s), "
u"pausing for %d seconds" % (attempt + 1, to_text(e), cmd_summary, pause))
display.vv(msg, host=self.host)
time.sleep(pause)
continue
return return_tuple
return wrapped
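# Note on the backoff above: pause = 2 ** attempt - 1, capped at 30, so failed
# attempts sleep 0, 1, 3, 7, 15, 30, 30... seconds before the next retry.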
class Connection(ConnectionBase):
''' ssh based connections '''
transport = 'ssh'
has_pipelining = True
def __init__(self, *args: t.Any, **kwargs: t.Any) -> None:
super(Connection, self).__init__(*args, **kwargs)
# TODO: all should come from get_option(), but not might be set at this point yet
self.host = self._play_context.remote_addr
self.port = self._play_context.port
self.user = self._play_context.remote_user
self.control_path: str | None = None
self.control_path_dir: str | None = None
# Windows operates differently from a POSIX connection/shell plugin,
# we need to set various properties to ensure SSH on Windows continues
# to work
if getattr(self._shell, "_IS_WINDOWS", False):
self.has_native_async = True
self.always_pipeline_modules = True
self.module_implementation_preferences = ('.ps1', '.exe', '')
self.allow_executable = False
# The connection is created by running ssh/scp/sftp from the exec_command,
# put_file, and fetch_file methods, so we don't need to do any connection
# management here.
def _connect(self) -> Connection:
return self
@staticmethod
def _create_control_path(
host: str | None,
port: int | None,
user: str | None,
connection: ConnectionBase | None = None,
pid: int | None = None,
) -> str:
'''Make a hash for the controlpath based on con attributes'''
pstring = '%s-%s-%s' % (host, port, user)
if connection:
pstring += '-%s' % connection
if pid:
pstring += '-%s' % to_text(pid)
m = hashlib.sha1()
m.update(to_bytes(pstring))
digest = m.hexdigest()
cpath = '%(directory)s/' + digest[:10]
return cpath
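# Illustrative only: for host='example.com', port=22, user='ansible' this
# returns '%(directory)s/' plus the first 10 hex chars of
# sha1('example.com-22-ansible'); '%(directory)s' is substituted later with
# the control_path_dir setting.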
@staticmethod
def _sshpass_available() -> bool:
global SSHPASS_AVAILABLE
# We test once if sshpass is available, and remember the result. It
# would be nice to use distutils.spawn.find_executable for this, but
# distutils isn't always available; shutil.which() is Python3-only.
if SSHPASS_AVAILABLE is None:
try:
p = subprocess.Popen(["sshpass"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p.communicate()
SSHPASS_AVAILABLE = True
except OSError:
SSHPASS_AVAILABLE = False
return SSHPASS_AVAILABLE
@staticmethod
def _persistence_controls(b_command: list[bytes]) -> tuple[bool, bool]:
'''
Takes a command array and scans it for ControlPersist and ControlPath
settings and returns two booleans indicating whether either was found.
This could be smarter, e.g. returning false if ControlPersist is 'no',
but for now we do it the simple way.
'''
controlpersist = False
controlpath = False
for b_arg in (a.lower() for a in b_command):
if b'controlpersist' in b_arg:
controlpersist = True
elif b'controlpath' in b_arg:
controlpath = True
return controlpersist, controlpath
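# Illustrative checks (argument lists fabricated):
#   _persistence_controls([b'ssh', b'-o', b'ControlPersist=60s'])   # (True, False)
#   _persistence_controls([b'ssh', b'-o', b'ControlPath=/tmp/cp'])  # (False, True)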
def _add_args(self, b_command: list[bytes], b_args: t.Iterable[bytes], explanation: str) -> None:
"""
Adds arguments to the ssh command and displays a caller-supplied explanation of why.
:arg b_command: A list containing the command to add the new arguments to.
This list will be modified by this method.
:arg b_args: An iterable of new arguments to add. This iterable is used
    more than once, so it must be persistent (i.e. a list is okay but a
    StringIO would not be).
:arg explanation: A text string explaining why the arguments were added.
    It will be displayed with a high enough verbosity.
.. note:: This function does its work via side-effect. The b_command list has the new arguments appended.
"""
display.vvvvv(u'SSH: %s: (%s)' % (explanation, ')('.join(to_text(a) for a in b_args)), host=self.host)
b_command += b_args
def _build_command(self, binary: str, subsystem: str, *other_args: bytes | str) -> list[bytes]:
'''
Takes an executable (ssh, scp, sftp or wrapper) and optional extra arguments and returns the remote command
wrapped in local ssh shell commands and ready for execution.
:arg binary: actual executable to use to execute the command.
:arg subsystem: type of executable provided, ssh/sftp/scp, needed because wrappers for ssh might have different names.
:arg other_args: any additional arguments, appended to the generated command.
'''
b_command = []
conn_password = self.get_option('password') or self._play_context.password
#
# First, the command to invoke
#
# If we want to use password authentication, we have to set up a pipe to
# write the password to sshpass.
pkcs11_provider = self.get_option("pkcs11_provider")
if conn_password or pkcs11_provider:
if not self._sshpass_available():
raise AnsibleError("to use the 'ssh' connection type with passwords or pkcs11_provider, you must install the sshpass program")
if not conn_password and pkcs11_provider:
raise AnsibleError("to use pkcs11_provider you must specify a password/pin")
self.sshpass_pipe = os.pipe()
b_command += [b'sshpass', b'-d' + to_bytes(self.sshpass_pipe[0], nonstring='simplerepr', errors='surrogate_or_strict')]
password_prompt = self.get_option('sshpass_prompt')
if not password_prompt and pkcs11_provider:
# Set default password prompt for pkcs11_provider to make it clear it's a PIN
password_prompt = 'Enter PIN for '
if password_prompt:
b_command += [b'-P', to_bytes(password_prompt, errors='surrogate_or_strict')]
b_command += [to_bytes(binary, errors='surrogate_or_strict')]
#
# Next, additional arguments based on the configuration.
#
# pkcs11 mode allows the use of Smartcards or Yubikey devices
if conn_password and pkcs11_provider:
self._add_args(b_command,
(b"-o", b"KbdInteractiveAuthentication=no",
b"-o", b"PreferredAuthentications=publickey",
b"-o", b"PasswordAuthentication=no",
b'-o', to_bytes(u'PKCS11Provider=%s' % pkcs11_provider)),
u'Enable pkcs11')
# sftp batch mode allows us to correctly catch failed transfers, but can
# be disabled if the client side doesn't support the option. However,
# sftp batch mode does not prompt for passwords so it must be disabled
# if not using controlpersist and using sshpass
b_args: t.Iterable[bytes]
if subsystem == 'sftp' and self.get_option('sftp_batch_mode'):
if conn_password:
b_args = [b'-o', b'BatchMode=no']
self._add_args(b_command, b_args, u'disable batch mode for sshpass')
b_command += [b'-b', b'-']
if display.verbosity > 3:
b_command.append(b'-vvv')
# Next, we add ssh_args
ssh_args = self.get_option('ssh_args')
if ssh_args:
b_args = [to_bytes(a, errors='surrogate_or_strict') for a in
self._split_ssh_args(ssh_args)]
self._add_args(b_command, b_args, u"ansible.cfg set ssh_args")
# Now we add various arguments that have their own specific settings defined in docs above.
if self.get_option('host_key_checking') is False:
b_args = (b"-o", b"StrictHostKeyChecking=no")
self._add_args(b_command, b_args, u"ANSIBLE_HOST_KEY_CHECKING/host_key_checking disabled")
self.port = self.get_option('port')
if self.port is not None:
b_args = (b"-o", b"Port=" + to_bytes(self.port, nonstring='simplerepr', errors='surrogate_or_strict'))
self._add_args(b_command, b_args, u"ANSIBLE_REMOTE_PORT/remote_port/ansible_port set")
key = self.get_option('private_key_file')
if key:
b_args = (b"-o", b'IdentityFile="' + to_bytes(os.path.expanduser(key), errors='surrogate_or_strict') + b'"')
self._add_args(b_command, b_args, u"ANSIBLE_PRIVATE_KEY_FILE/private_key_file/ansible_ssh_private_key_file set")
if not conn_password:
self._add_args(
b_command, (
b"-o", b"KbdInteractiveAuthentication=no",
b"-o", b"PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey",
b"-o", b"PasswordAuthentication=no"
),
u"ansible_password/ansible_ssh_password not set"
)
self.user = self.get_option('remote_user')
if self.user:
self._add_args(
b_command,
(b"-o", b'User="%s"' % to_bytes(self.user, errors='surrogate_or_strict')),
u"ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set"
)
timeout = self.get_option('timeout')
self._add_args(
b_command,
(b"-o", b"ConnectTimeout=" + to_bytes(timeout, errors='surrogate_or_strict', nonstring='simplerepr')),
u"ANSIBLE_TIMEOUT/timeout set"
)
# Add in any common or binary-specific arguments from the PlayContext
# (i.e. inventory or task settings or overrides on the command line).
for opt in (u'ssh_common_args', u'{0}_extra_args'.format(subsystem)):
attr = self.get_option(opt)
if attr is not None:
b_args = [to_bytes(a, errors='surrogate_or_strict') for a in self._split_ssh_args(attr)]
self._add_args(b_command, b_args, u"Set %s" % opt)
# Check if ControlPersist is enabled and add a ControlPath if one hasn't
# already been set.
controlpersist, controlpath = self._persistence_controls(b_command)
if controlpersist:
self._persistent = True
if not controlpath:
self.control_path_dir = self.get_option('control_path_dir')
cpdir = unfrackpath(self.control_path_dir)
b_cpdir = to_bytes(cpdir, errors='surrogate_or_strict')
# The directory must exist and be writable.
makedirs_safe(b_cpdir, 0o700)
if not os.access(b_cpdir, os.W_OK):
raise AnsibleError("Cannot write to ControlPath %s" % to_native(cpdir))
self.control_path = self.get_option('control_path')
if not self.control_path:
self.control_path = self._create_control_path(
self.host,
self.port,
self.user
)
b_args = (b"-o", b'ControlPath="%s"' % to_bytes(self.control_path % dict(directory=cpdir), errors='surrogate_or_strict'))
self._add_args(b_command, b_args, u"found only ControlPersist; added ControlPath")
# Finally, we add any caller-supplied extras.
if other_args:
b_command += [to_bytes(a) for a in other_args]
return b_command
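# For orientation, with password auth and the documented defaults the command
# assembled above begins roughly like (fd number and host illustrative):
#   sshpass -d12 ssh -C -o ControlMaster=auto -o ControlPersist=60s ... <host>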
def _send_initial_data(self, fh: io.IOBase, in_data: bytes, ssh_process: subprocess.Popen) -> None:
'''
Writes initial data to the stdin filehandle of the subprocess and closes
it. (The handle must be closed; otherwise, for example, "sftp -b -" will
just hang forever waiting for more commands.)
'''
display.debug(u'Sending initial data')
try:
fh.write(to_bytes(in_data))
fh.close()
except (OSError, IOError) as e:
# The ssh connection may have already terminated at this point, with a more useful error
# Only raise AnsibleConnectionFailure if the ssh process is still alive
time.sleep(0.001)
ssh_process.poll()
if getattr(ssh_process, 'returncode', None) is None:
raise AnsibleConnectionFailure(
'Data could not be sent to remote host "%s". Make sure this host can be reached '
'over ssh: %s' % (self.host, to_native(e)), orig_exc=e
)
display.debug(u'Sent initial data (%d bytes)' % len(in_data))
# Used by _run() to kill processes on failures
@staticmethod
def _terminate_process(p: subprocess.Popen) -> None:
""" Terminate a process, ignoring errors """
try:
p.terminate()
except (OSError, IOError):
pass
# This is separate from _run() because we need to do the same thing for stdout
# and stderr.
def _examine_output(self, source: str, state: str, b_chunk: bytes, sudoable: bool) -> tuple[bytes, bytes]:
'''
Takes a string, extracts complete lines from it, tests to see if they
are a prompt, error message, etc., and sets appropriate flags in self.
Prompt and success lines are removed.
Returns the processed (i.e. possibly-edited) output and the unprocessed
remainder (to be processed with the next chunk) as strings.
'''
output = []
for b_line in b_chunk.splitlines(True):
display_line = to_text(b_line).rstrip('\r\n')
suppress_output = False
# display.debug("Examining line (source=%s, state=%s): '%s'" % (source, state, display_line))
if SSH_DEBUG.match(display_line):
# skip lines from ssh debug output to avoid false matches
pass
elif self.become.expect_prompt() and self.become.check_password_prompt(b_line):
display.debug(u"become_prompt: (source=%s, state=%s): '%s'" % (source, state, display_line))
self._flags['become_prompt'] = True
suppress_output = True
elif self.become.success and self.become.check_success(b_line):
display.debug(u"become_success: (source=%s, state=%s): '%s'" % (source, state, display_line))
self._flags['become_success'] = True
suppress_output = True
elif sudoable and self.become.check_incorrect_password(b_line):
display.debug(u"become_error: (source=%s, state=%s): '%s'" % (source, state, display_line))
self._flags['become_error'] = True
elif sudoable and self.become.check_missing_password(b_line):
display.debug(u"become_nopasswd_error: (source=%s, state=%s): '%s'" % (source, state, display_line))
self._flags['become_nopasswd_error'] = True
if not suppress_output:
output.append(b_line)
# The chunk we read was most likely a series of complete lines, but just
# in case the last line was incomplete (and not a prompt, which we would
# have removed from the output), we retain it to be processed with the
# next chunk.
remainder = b''
if output and not output[-1].endswith(b'\n'):
remainder = output[-1]
output = output[:-1]
return b''.join(output), remainder
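# Illustrative: given b_chunk = b'line1\npartial', this returns
# (b'line1\n', b'partial'); the caller keeps b'partial' and prepends it to the
# next chunk read, so prompts split across reads are still detected.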
def _bare_run(self, cmd: list[bytes], in_data: bytes | None, sudoable: bool = True, checkrc: bool = True) -> tuple[int, bytes, bytes]:
'''
Starts the command and communicates with it until it ends.
'''
# We don't use _shell.quote as this is run on the controller and independent from the shell plugin chosen
display_cmd = u' '.join(shlex.quote(to_text(c)) for c in cmd)
display.vvv(u'SSH: EXEC {0}'.format(display_cmd), host=self.host)
# Start the given command. If we don't need to pipeline data, we can try
# to use a pseudo-tty (ssh will have been invoked with -tt). If we are
# pipelining data, or can't create a pty, we fall back to using plain
# old pipes.
p = None
if isinstance(cmd, (text_type, binary_type)):
cmd = to_bytes(cmd)
else:
cmd = list(map(to_bytes, cmd))
conn_password = self.get_option('password') or self._play_context.password
if not in_data:
try:
# Make sure stdin is a proper pty to avoid tcgetattr errors
master, slave = pty.openpty()
if PY3 and conn_password:
# pylint: disable=unexpected-keyword-arg
p = subprocess.Popen(cmd, stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE, pass_fds=self.sshpass_pipe)
else:
p = subprocess.Popen(cmd, stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdin = os.fdopen(master, 'wb', 0)
os.close(slave)
except (OSError, IOError):
p = None
if not p:
try:
if PY3 and conn_password:
# pylint: disable=unexpected-keyword-arg
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
stderr=subprocess.PIPE, pass_fds=self.sshpass_pipe)
else:
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
stdin = p.stdin # type: ignore[assignment] # stdin will be set and not None due to the calls above
except (OSError, IOError) as e:
raise AnsibleError('Unable to execute ssh command line on a controller due to: %s' % to_native(e))
# If we are using SSH password authentication, write the password into
# the pipe we opened in _build_command.
if conn_password:
os.close(self.sshpass_pipe[0])
try:
os.write(self.sshpass_pipe[1], to_bytes(conn_password) + b'\n')
except OSError as e:
# Ignore broken pipe errors if the sshpass process has exited.
if e.errno != errno.EPIPE or p.poll() is None:
raise
os.close(self.sshpass_pipe[1])
#
# SSH state machine
#
# Now we read and accumulate output from the running process until it
# exits. Depending on the circumstances, we may also need to write an
# escalation password and/or pipelined input to the process.
states = [
'awaiting_prompt', 'awaiting_escalation', 'ready_to_send', 'awaiting_exit'
]
# Are we requesting privilege escalation? Right now, we may be invoked
# to execute sftp/scp with sudoable=True, but we can request escalation
# only when using ssh. Otherwise we can send initial data straightaway.
state = states.index('ready_to_send')
if to_bytes(self.get_option('ssh_executable')) in cmd and sudoable:
prompt = getattr(self.become, 'prompt', None)
if prompt:
# We're requesting escalation with a password, so we have to
# wait for a password prompt.
state = states.index('awaiting_prompt')
display.debug(u'Initial state: %s: %s' % (states[state], to_text(prompt)))
elif self.become and self.become.success:
# We're requesting escalation without a password, so we have to
# detect success/failure before sending any initial data.
state = states.index('awaiting_escalation')
display.debug(u'Initial state: %s: %s' % (states[state], to_text(self.become.success)))
# We store accumulated stdout and stderr output from the process here,
# but strip any privilege escalation prompt/confirmation lines first.
# Output is accumulated into tmp_*, complete lines are extracted into
# an array, then checked and removed or copied to stdout or stderr. We
# set any flags based on examining the output in self._flags.
b_stdout = b_stderr = b''
b_tmp_stdout = b_tmp_stderr = b''
self._flags = dict(
become_prompt=False, become_success=False,
become_error=False, become_nopasswd_error=False
)
# select timeout should be longer than the connect timeout, otherwise
# they will race each other when we can't connect, and the connect
# timeout usually fails
timeout = 2 + self.get_option('timeout')
for fd in (p.stdout, p.stderr):
fcntl.fcntl(fd, fcntl.F_SETFL, fcntl.fcntl(fd, fcntl.F_GETFL) | os.O_NONBLOCK)
# TODO: bcoca would like to use SelectSelector() here, since plain select
# is faster when the number of filehandles is low and we only ever handle 1.
selector = selectors.DefaultSelector()
selector.register(p.stdout, selectors.EVENT_READ)
selector.register(p.stderr, selectors.EVENT_READ)
# If we can send initial data without waiting for anything, we do so
# before we start polling
if states[state] == 'ready_to_send' and in_data:
self._send_initial_data(stdin, in_data, p)
state += 1
try:
while True:
poll = p.poll()
events = selector.select(timeout)
# We pay attention to timeouts only while negotiating a prompt.
if not events:
# We timed out
if state <= states.index('awaiting_escalation'):
# If the process has already exited, then it's not really a
# timeout; we'll let the normal error handling deal with it.
if poll is not None:
break
self._terminate_process(p)
raise AnsibleError('Timeout (%ds) waiting for privilege escalation prompt: %s' % (timeout, to_native(b_stdout)))
# Read whatever output is available on stdout and stderr, and stop
# listening to the pipe if it's been closed.
for key, event in events:
if key.fileobj == p.stdout:
b_chunk = p.stdout.read()
if b_chunk == b'':
# stdout has been closed, stop watching it
selector.unregister(p.stdout)
# When ssh has ControlMaster (+ControlPath/Persist) enabled, the
# first connection goes into the background and we never see EOF
# on stderr. If we see EOF on stdout, lower the select timeout
# to reduce the time wasted selecting on stderr if we observe
# that the process has not yet exited after this EOF. Otherwise
# we may spend a long timeout period waiting for an EOF that is
# not going to arrive until the persisted connection closes.
timeout = 1
b_tmp_stdout += b_chunk
display.debug(u"stdout chunk (state=%s):\n>>>%s<<<\n" % (state, to_text(b_chunk)))
elif key.fileobj == p.stderr:
b_chunk = p.stderr.read()
if b_chunk == b'':
# stderr has been closed, stop watching it
selector.unregister(p.stderr)
b_tmp_stderr += b_chunk
display.debug("stderr chunk (state=%s):\n>>>%s<<<\n" % (state, to_text(b_chunk)))
# We examine the output line-by-line until we have negotiated any
# privilege escalation prompt and subsequent success/error message.
# Afterwards, we can accumulate output without looking at it.
if state < states.index('ready_to_send'):
if b_tmp_stdout:
b_output, b_unprocessed = self._examine_output('stdout', states[state], b_tmp_stdout, sudoable)
b_stdout += b_output
b_tmp_stdout = b_unprocessed
if b_tmp_stderr:
b_output, b_unprocessed = self._examine_output('stderr', states[state], b_tmp_stderr, sudoable)
b_stderr += b_output
b_tmp_stderr = b_unprocessed
else:
b_stdout += b_tmp_stdout
b_stderr += b_tmp_stderr
b_tmp_stdout = b_tmp_stderr = b''
# If we see a privilege escalation prompt, we send the password.
# (If we're expecting a prompt but the escalation succeeds, we
# didn't need the password and can carry on regardless.)
if states[state] == 'awaiting_prompt':
if self._flags['become_prompt']:
display.debug(u'Sending become_password in response to prompt')
become_pass = self.become.get_option('become_pass', playcontext=self._play_context)
stdin.write(to_bytes(become_pass, errors='surrogate_or_strict') + b'\n')
# On python3 stdin is a BufferedWriter, and we don't have a guarantee
# that the write will happen without a flush
stdin.flush()
self._flags['become_prompt'] = False
state += 1
elif self._flags['become_success']:
state += 1
# We've requested escalation (with or without a password), now we
# wait for an error message or a successful escalation.
if states[state] == 'awaiting_escalation':
if self._flags['become_success']:
display.vvv(u'Escalation succeeded')
self._flags['become_success'] = False
state += 1
elif self._flags['become_error']:
display.vvv(u'Escalation failed')
self._terminate_process(p)
self._flags['become_error'] = False
raise AnsibleError('Incorrect %s password' % self.become.name)
elif self._flags['become_nopasswd_error']:
display.vvv(u'Escalation requires password')
self._terminate_process(p)
self._flags['become_nopasswd_error'] = False
raise AnsibleError('Missing %s password' % self.become.name)
elif self._flags['become_prompt']:
# This shouldn't happen, because we should see the "Sorry,
# try again" message first.
display.vvv(u'Escalation prompt repeated')
self._terminate_process(p)
self._flags['become_prompt'] = False
raise AnsibleError('Incorrect %s password' % self.become.name)
# Once we're sure that the privilege escalation prompt, if any, has
# been dealt with, we can send any initial data and start waiting
# for output.
if states[state] == 'ready_to_send':
if in_data:
self._send_initial_data(stdin, in_data, p)
state += 1
# Now we're awaiting_exit: has the child process exited? If it has,
# and we've read all available output from it, we're done.
if poll is not None:
if not selector.get_map() or not events:
break
# We should not see further writes to the stdout/stderr file
# descriptors after the process has closed, set the select
# timeout to gather any last writes we may have missed.
timeout = 0
continue
# If the process has not yet exited, but we've already read EOF from
# its stdout and stderr (and thus no longer watching any file
# descriptors), we can just wait for it to exit.
elif not selector.get_map():
p.wait()
break
# Otherwise there may still be outstanding data to read.
finally:
selector.close()
# close stdin, stdout, and stderr after process is terminated and
# stdout/stderr are read completely (see also issues #848, #64768).
stdin.close()
p.stdout.close()
p.stderr.close()
if self.get_option('host_key_checking'):
if cmd[0] == b"sshpass" and p.returncode == 6:
raise AnsibleError('Using a SSH password instead of a key is not possible because Host Key checking is enabled and sshpass does not support '
'this. Please add this host\'s fingerprint to your known_hosts file to manage this host.')
controlpersisterror = b'Bad configuration option: ControlPersist' in b_stderr or b'unknown configuration option: ControlPersist' in b_stderr
if p.returncode != 0 and controlpersisterror:
raise AnsibleError('using -c ssh on certain older ssh versions may not support ControlPersist, set ANSIBLE_SSH_ARGS="" '
'(or ssh_args in [ssh_connection] section of the config file) before running again')
# If we find a broken pipe because of ControlPersist timeout expiring (see #16731),
# we raise a special exception so that we can retry a connection.
controlpersist_broken_pipe = b'mux_client_hello_exchange: write packet: Broken pipe' in b_stderr
if p.returncode == 255:
additional = to_native(b_stderr)
if controlpersist_broken_pipe:
raise AnsibleControlPersistBrokenPipeError('Data could not be sent because of ControlPersist broken pipe: %s' % additional)
elif in_data and checkrc:
raise AnsibleConnectionFailure('Data could not be sent to remote host "%s". Make sure this host can be reached over ssh: %s'
% (self.host, additional))
return (p.returncode, b_stdout, b_stderr)
@_ssh_retry
def _run(self, cmd: list[bytes], in_data: bytes | None, sudoable: bool = True, checkrc: bool = True) -> tuple[int, bytes, bytes]:
"""Wrapper around _bare_run that retries the connection
"""
return self._bare_run(cmd, in_data, sudoable=sudoable, checkrc=checkrc)
@_ssh_retry
def _file_transport_command(self, in_path: str, out_path: str, sftp_action: str) -> tuple[int, bytes, bytes]:
# scp and sftp require square brackets for IPv6 addresses, but
# accept them for hostnames and IPv4 addresses too.
host = '[%s]' % self.host
smart_methods = ['sftp', 'scp', 'piped']
# Windows does not support dd so we cannot use the piped method
if getattr(self._shell, "_IS_WINDOWS", False):
smart_methods.remove('piped')
# Transfer methods to try
methods = []
# Use the transfer_method option if set, otherwise use scp_if_ssh
ssh_transfer_method = self.get_option('ssh_transfer_method')
scp_if_ssh = self.get_option('scp_if_ssh')
if ssh_transfer_method is None and scp_if_ssh == 'smart':
ssh_transfer_method = 'smart'
if ssh_transfer_method is not None:
if ssh_transfer_method == 'smart':
methods = smart_methods
else:
methods = [ssh_transfer_method]
else:
# since this can be a non-bool now, we need to handle it correctly
if not isinstance(scp_if_ssh, bool):
scp_if_ssh = scp_if_ssh.lower()
if scp_if_ssh in BOOLEANS:
scp_if_ssh = boolean(scp_if_ssh, strict=False)
elif scp_if_ssh != 'smart':
raise AnsibleOptionsError('scp_if_ssh needs to be one of [smart|True|False]')
if scp_if_ssh == 'smart':
methods = smart_methods
elif scp_if_ssh is True:
methods = ['scp']
else:
methods = ['sftp']
for method in methods:
returncode = stdout = stderr = None
if method == 'sftp':
cmd = self._build_command(self.get_option('sftp_executable'), 'sftp', to_bytes(host))
in_data = u"{0} {1} {2}\n".format(sftp_action, shlex.quote(in_path), shlex.quote(out_path))
in_data = to_bytes(in_data, nonstring='passthru')
(returncode, stdout, stderr) = self._bare_run(cmd, in_data, checkrc=False)
elif method == 'scp':
scp = self.get_option('scp_executable')
if sftp_action == 'get':
cmd = self._build_command(scp, 'scp', u'{0}:{1}'.format(host, self._shell.quote(in_path)), out_path)
else:
cmd = self._build_command(scp, 'scp', in_path, u'{0}:{1}'.format(host, self._shell.quote(out_path)))
in_data = None
(returncode, stdout, stderr) = self._bare_run(cmd, in_data, checkrc=False)
elif method == 'piped':
if sftp_action == 'get':
# we pass sudoable=False to disable pty allocation, which
# would end up mixing stdout/stderr and screwing with newlines
(returncode, stdout, stderr) = self.exec_command('dd if=%s bs=%s' % (in_path, BUFSIZE), sudoable=False)
with open(to_bytes(out_path, errors='surrogate_or_strict'), 'wb+') as out_file:
out_file.write(stdout)
else:
with open(to_bytes(in_path, errors='surrogate_or_strict'), 'rb') as f:
in_data = to_bytes(f.read(), nonstring='passthru')
if not in_data:
count = ' count=0'
else:
count = ''
(returncode, stdout, stderr) = self.exec_command('dd of=%s bs=%s%s' % (out_path, BUFSIZE, count), in_data=in_data, sudoable=False)
# Check the return code and rollover to next method if failed
if returncode == 0:
return (returncode, stdout, stderr)
else:
# If not in smart mode, the data will be printed by the raise below
if len(methods) > 1:
display.warning(u'%s transfer mechanism failed on %s. Use ANSIBLE_DEBUG=1 to see detailed information' % (method, host))
display.debug(u'%s' % to_text(stdout))
display.debug(u'%s' % to_text(stderr))
if returncode == 255:
raise AnsibleConnectionFailure("Failed to connect to the host via %s: %s" % (method, to_native(stderr)))
else:
raise AnsibleError("failed to transfer file to %s %s:\n%s\n%s" %
(to_native(in_path), to_native(out_path), to_native(stdout), to_native(stderr)))
def _escape_win_path(self, path: str) -> str:
""" converts a Windows path to one that's supported by SFTP and SCP """
# If using a root path then we need to start with /
prefix = ""
if re.match(r'^\w{1}:', path):
prefix = "/"
# Convert all '\' to '/'
return "%s%s" % (prefix, path.replace("\\", "/"))
#
# Main public methods
#
def exec_command(self, cmd: str, in_data: bytes | None = None, sudoable: bool = True) -> tuple[int, bytes, bytes]:
''' run a command on the remote host '''
super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)
self.host = self.get_option('host') or self._play_context.remote_addr
display.vvv(u"ESTABLISH SSH CONNECTION FOR USER: {0}".format(self.user), host=self.host)
if getattr(self._shell, "_IS_WINDOWS", False):
# Become method 'runas' is done in the wrapper that is executed,
# need to disable sudoable so the bare_run is not waiting for a
# prompt that will not occur
sudoable = False
# Make sure our first command is to set the console encoding to
# utf-8, this must be done via chcp to get utf-8 (65001)
# union-attr ignores rely on internal powershell shell plugin details,
# this should be fixed at a future point in time.
cmd_parts = ["chcp.com", "65001", self._shell._SHELL_REDIRECT_ALLNULL, self._shell._SHELL_AND] # type: ignore[union-attr]
cmd_parts.extend(self._shell._encode_script(cmd, as_list=True, strict_mode=False, preserve_rc=False)) # type: ignore[union-attr]
cmd = ' '.join(cmd_parts)
# we can only use tty when we are not pipelining the modules. piping
# data into /usr/bin/python inside a tty automatically invokes the
# python interactive-mode but the modules are not compatible with the
# interactive-mode ("unexpected indent" mainly because of empty lines)
ssh_executable = self.get_option('ssh_executable')
# -tt can cause various issues in some environments so allow the user
# to disable it as a troubleshooting method.
use_tty = self.get_option('use_tty')
args: tuple[str, ...]
if not in_data and sudoable and use_tty:
args = ('-tt', self.host, cmd)
else:
args = (self.host, cmd)
cmd = self._build_command(ssh_executable, 'ssh', *args)
(returncode, stdout, stderr) = self._run(cmd, in_data, sudoable=sudoable)
# When running on Windows, stderr may contain CLIXML encoded output
if getattr(self._shell, "_IS_WINDOWS", False) and stderr.startswith(b"#< CLIXML"):
stderr = _parse_clixml(stderr)
return (returncode, stdout, stderr)
def put_file(self, in_path: str, out_path: str) -> tuple[int, bytes, bytes]: # type: ignore[override] # Used by tests and would break API
''' transfer a file from local to remote '''
super(Connection, self).put_file(in_path, out_path)
self.host = self.get_option('host') or self._play_context.remote_addr
display.vvv(u"PUT {0} TO {1}".format(in_path, out_path), host=self.host)
if not os.path.exists(to_bytes(in_path, errors='surrogate_or_strict')):
raise AnsibleFileNotFound("file or module does not exist: {0}".format(to_native(in_path)))
if getattr(self._shell, "_IS_WINDOWS", False):
out_path = self._escape_win_path(out_path)
return self._file_transport_command(in_path, out_path, 'put')
def fetch_file(self, in_path: str, out_path: str) -> tuple[int, bytes, bytes]: # type: ignore[override] # Used by tests and would break API
''' fetch a file from remote to local '''
super(Connection, self).fetch_file(in_path, out_path)
self.host = self.get_option('host') or self._play_context.remote_addr
display.vvv(u"FETCH {0} TO {1}".format(in_path, out_path), host=self.host)
# need to add / if path is rooted
if getattr(self._shell, "_IS_WINDOWS", False):
in_path = self._escape_win_path(in_path)
return self._file_transport_command(in_path, out_path, 'get')
def reset(self) -> None:
run_reset = False
self.host = self.get_option('host') or self._play_context.remote_addr
# If we have a persistent ssh connection (ControlPersist), we can ask it to stop listening.
# only run the reset if the ControlPath already exists or if it isn't configured and ControlPersist is set
# 'check' will determine this.
cmd = self._build_command(self.get_option('ssh_executable'), 'ssh', '-O', 'check', self.host)
display.vvv(u'sending connection check: %s' % to_text(cmd))
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p.communicate()
status_code = p.wait()
if status_code != 0:
display.vvv(u"No connection to reset: %s" % to_text(stderr))
else:
run_reset = True
if run_reset:
cmd = self._build_command(self.get_option('ssh_executable'), 'ssh', '-O', 'stop', self.host)
display.vvv(u'sending connection stop: %s' % to_text(cmd))
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p.communicate()
status_code = p.wait()
if status_code != 0:
display.warning(u"Failed to reset connection:%s" % to_text(stderr))
self.close()
def close(self) -> None:
self._connected = False
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,841 |
Add type hints to `ansible.utils.display::Display`
|
### Summary
Add type hints to `ansible.utils.display::Display`
### Issue Type
Feature Idea
### Component Name
lib/ansible/utils/display.py
|
https://github.com/ansible/ansible/issues/80841
|
https://github.com/ansible/ansible/pull/81400
|
48d8e067bf6c947a96750b8a61c7d6ef8cad594b
|
4d409888762ca9ca0ae2d67153be5f21a77f5149
| 2023-05-18T16:31:55Z |
python
| 2023-09-05T19:08:13Z |
lib/ansible/utils/display.py
|
# (c) 2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
try:
import curses
except ImportError:
HAS_CURSES = False
else:
# this will be set to False if curses.setupterm() fails
HAS_CURSES = True
import codecs
import ctypes.util
import fcntl
import getpass
import io
import logging
import os
import random
import subprocess
import sys
import termios
import textwrap
import threading
import time
import tty
import typing as t
from functools import wraps
from struct import unpack, pack
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleAssertionError, AnsiblePromptInterrupt, AnsiblePromptNoninteractive
from ansible.module_utils.common.text.converters import to_bytes, to_text
from ansible.module_utils.six import text_type
from ansible.utils.color import stringc
from ansible.utils.multiprocessing import context as multiprocessing_context
from ansible.utils.singleton import Singleton
from ansible.utils.unsafe_proxy import wrap_var
_LIBC = ctypes.cdll.LoadLibrary(ctypes.util.find_library('c'))
# Set argtypes, to avoid segfault if the wrong type is provided,
# restype is assumed to be c_int
_LIBC.wcwidth.argtypes = (ctypes.c_wchar,)
_LIBC.wcswidth.argtypes = (ctypes.c_wchar_p, ctypes.c_int)
# Max for c_int
_MAX_INT = 2 ** (ctypes.sizeof(ctypes.c_int) * 8 - 1) - 1
MOVE_TO_BOL = b'\r'
CLEAR_TO_EOL = b'\x1b[K'
def get_text_width(text):
"""Function that utilizes ``wcswidth`` or ``wcwidth`` to determine the
number of columns used to display a text string.
We try first with ``wcswidth``, and fallback to iterating each
character and using wcwidth individually, falling back to a value of 0
for non-printable wide characters.
"""
if not isinstance(text, text_type):
raise TypeError('get_text_width requires text, not %s' % type(text))
try:
width = _LIBC.wcswidth(text, _MAX_INT)
except ctypes.ArgumentError:
width = -1
if width != -1:
return width
width = 0
counter = 0
for c in text:
counter += 1
if c in (u'\x08', u'\x7f', u'\x94', u'\x1b'):
# A few characters result in a subtraction of length:
# BS, DEL, CCH, ESC
# ESC is slightly different: while ESC itself is non printable, it starts
# an escape sequence, and the whole sequence results in a single
# non printable length
width -= 1
counter -= 1
continue
try:
w = _LIBC.wcwidth(c)
except ctypes.ArgumentError:
w = -1
if w == -1:
# -1 signifies a non-printable character
# use 0 here as a best effort
w = 0
width += w
if width == 0 and counter:
raise EnvironmentError(
'get_text_width could not calculate text width of %r' % text
)
# It doesn't make sense to have a negative printable width
return width if width >= 0 else 0
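# Illustrative usage (results depend on the system libc's wcwidth tables):
#   get_text_width(u'abc')   # -> 3
#   get_text_width(u'ε½εε')  # -> 6 on typical systems (CJK chars are 2 columns)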
def proxy_display(method):
def proxyit(self, *args, **kwargs):
if self._final_q:
# If _final_q is set, that means we are in a WorkerProcess
# and instead of displaying messages directly from the fork
# we will proxy them through the queue
return self._final_q.send_display(method.__name__, *args, **kwargs)
else:
return method(self, *args, **kwargs)
return proxyit
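# Consequently, any Display method decorated with @proxy_display is fork-safe:
# inside a WorkerProcess the call is serialized over _final_q and rendered by
# the parent process rather than written from the fork.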
class FilterBlackList(logging.Filter):
def __init__(self, blacklist):
self.blacklist = [logging.Filter(name) for name in blacklist]
def filter(self, record):
return not any(f.filter(record) for f in self.blacklist)
class FilterUserInjector(logging.Filter):
"""
This is a filter which injects the current user as the 'user' attribute on each record. We need to add this filter
to all logger handlers so that 3rd party libraries won't print an exception due to user not being defined.
"""
try:
username = getpass.getuser()
except KeyError:
# people like to make containers w/o actual valid passwd/shadow and use host uids
username = 'uid=%s' % os.getuid()
def filter(self, record):
record.user = FilterUserInjector.username
return True
logger = None
# TODO: make this a callback event instead
if getattr(C, 'DEFAULT_LOG_PATH'):
path = C.DEFAULT_LOG_PATH
if path and (os.path.exists(path) and os.access(path, os.W_OK)) or os.access(os.path.dirname(path), os.W_OK):
# NOTE: level is kept at INFO to avoid security disclosures caused by certain libraries when using DEBUG
logging.basicConfig(filename=path, level=logging.INFO, # DO NOT set to logging.DEBUG
format='%(asctime)s p=%(process)d u=%(user)s n=%(name)s | %(message)s')
logger = logging.getLogger('ansible')
for handler in logging.root.handlers:
handler.addFilter(FilterBlackList(getattr(C, 'DEFAULT_LOG_FILTER', [])))
handler.addFilter(FilterUserInjector())
else:
print("[WARNING]: log file at %s is not writeable and we cannot create it, aborting\n" % path, file=sys.stderr)
# map color to log levels
color_to_log_level = {C.COLOR_ERROR: logging.ERROR,
C.COLOR_WARN: logging.WARNING,
C.COLOR_OK: logging.INFO,
C.COLOR_SKIP: logging.WARNING,
C.COLOR_UNREACHABLE: logging.ERROR,
C.COLOR_DEBUG: logging.DEBUG,
C.COLOR_CHANGED: logging.INFO,
C.COLOR_DEPRECATE: logging.WARNING,
C.COLOR_VERBOSE: logging.INFO}
b_COW_PATHS = (
b"/usr/bin/cowsay",
b"/usr/games/cowsay",
b"/usr/local/bin/cowsay", # BSD path for cowsay
b"/opt/local/bin/cowsay", # MacPorts path for cowsay
)
def _synchronize_textiowrapper(tio, lock):
# Ensure that a background thread can't hold the internal buffer lock on a file object
# during a fork, which causes forked children to hang. We're using display's existing lock for
# convenience (and entering the lock before a fork).
def _wrap_with_lock(f, lock):
@wraps(f)
def locking_wrapper(*args, **kwargs):
with lock:
return f(*args, **kwargs)
return locking_wrapper
buffer = tio.buffer
# monkeypatching the underlying file-like object isn't great, but likely safer than subclassing
buffer.write = _wrap_with_lock(buffer.write, lock)
buffer.flush = _wrap_with_lock(buffer.flush, lock)
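# A minimal sketch of the same idea in isolation (assumes a TextIOWrapper with
# a .buffer attribute, as sys.stdout normally has):
#   lock = threading.RLock()
#   _synchronize_textiowrapper(sys.stdout, lock)
#   sys.stdout.write('writes and flushes now serialize on the lock\n')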
def setraw(fd, when=termios.TCSAFLUSH):
"""Put terminal into a raw mode.
Copied from ``tty`` from CPython 3.11.0, and modified to not remove OPOST from OFLAG
OPOST is kept to prevent an issue with multi line prompts from being corrupted now that display
is proxied via the queue from forks. The problem is a race condition, in that we proxy the display
over the fork, but before it can be displayed, this plugin will have continued executing, potentially
setting stdout and stdin to raw mode, which removes the output post processing that commonly converts NL to CRLF
"""
mode = termios.tcgetattr(fd)
mode[tty.IFLAG] = mode[tty.IFLAG] & ~(termios.BRKINT | termios.ICRNL | termios.INPCK | termios.ISTRIP | termios.IXON)
# NOTE: unlike tty.setraw, OPOST is deliberately left set in OFLAG (see docstring above)
mode[tty.CFLAG] = mode[tty.CFLAG] & ~(termios.CSIZE | termios.PARENB)
mode[tty.CFLAG] = mode[tty.CFLAG] | termios.CS8
mode[tty.LFLAG] = mode[tty.LFLAG] & ~(termios.ECHO | termios.ICANON | termios.IEXTEN | termios.ISIG)
mode[tty.CC][termios.VMIN] = 1
mode[tty.CC][termios.VTIME] = 0
termios.tcsetattr(fd, when, mode)
def clear_line(stdout):
stdout.write(b'\x1b[%s' % MOVE_TO_BOL)
stdout.write(b'\x1b[%s' % CLEAR_TO_EOL)
def setup_prompt(stdin_fd, stdout_fd, seconds, echo):
# type: (int, int, int, bool) -> None
setraw(stdin_fd)
# Only set stdout to raw mode if it is a TTY. This is needed when redirecting
# stdout to a file since a file cannot be set to raw mode.
if os.isatty(stdout_fd):
setraw(stdout_fd)
if echo:
new_settings = termios.tcgetattr(stdin_fd)
new_settings[3] = new_settings[3] | termios.ECHO
termios.tcsetattr(stdin_fd, termios.TCSANOW, new_settings)
def setupterm():
# Nest the try except since curses.error is not available if curses did not import
try:
curses.setupterm()
except (curses.error, TypeError, io.UnsupportedOperation):
global HAS_CURSES
HAS_CURSES = False
else:
global MOVE_TO_BOL
global CLEAR_TO_EOL
# curses.tigetstr() returns None in some circumstances
MOVE_TO_BOL = curses.tigetstr('cr') or MOVE_TO_BOL
CLEAR_TO_EOL = curses.tigetstr('el') or CLEAR_TO_EOL
class Display(metaclass=Singleton):
def __init__(self, verbosity=0):
self._final_q = None
# NB: this lock is used to both prevent intermingled output between threads and to block writes during forks.
# Do not change the type of this lock or upgrade to a shared lock (eg multiprocessing.RLock).
self._lock = threading.RLock()
self.columns = None
self.verbosity = verbosity
# list of all deprecation messages to prevent duplicate display
self._deprecations = {}
self._warns = {}
self._errors = {}
self.b_cowsay = None
self.noncow = C.ANSIBLE_COW_SELECTION
self.set_cowsay_info()
if self.b_cowsay:
try:
cmd = subprocess.Popen([self.b_cowsay, "-l"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(out, err) = cmd.communicate()
if cmd.returncode:
raise Exception
self.cows_available = {to_text(c) for c in out.split()} # set comprehension
if C.ANSIBLE_COW_ACCEPTLIST and any(C.ANSIBLE_COW_ACCEPTLIST):
self.cows_available = set(C.ANSIBLE_COW_ACCEPTLIST).intersection(self.cows_available)
except Exception:
# could not execute cowsay for some reason
self.b_cowsay = False
self._set_column_width()
try:
# NB: we're relying on the display singleton behavior to ensure this only runs once
_synchronize_textiowrapper(sys.stdout, self._lock)
_synchronize_textiowrapper(sys.stderr, self._lock)
except Exception as ex:
self.warning(f"failed to patch stdout/stderr for fork-safety: {ex}")
codecs.register_error('_replacing_warning_handler', self._replacing_warning_handler)
try:
sys.stdout.reconfigure(errors='_replacing_warning_handler')
sys.stderr.reconfigure(errors='_replacing_warning_handler')
except Exception as ex:
self.warning(f"failed to reconfigure stdout/stderr with custom encoding error handler: {ex}")
self.setup_curses = False
def _replacing_warning_handler(self, exception):
# TODO: This should probably be deferred until after the current display is completed
# this will require some amount of new functionality
self.deprecated(
            'Non UTF-8 encoded data replaced with "?" while displaying text to stdout/stderr; this is temporary and will become an error',
version='2.18',
)
return '?', exception.end
def set_queue(self, queue):
"""Set the _final_q on Display, so that we know to proxy display over the queue
instead of directly writing to stdout/stderr from forks
This is only needed in ansible.executor.process.worker:WorkerProcess._run
"""
if multiprocessing_context.parent_process() is None:
raise RuntimeError('queue cannot be set in parent process')
self._final_q = queue
def set_cowsay_info(self):
if C.ANSIBLE_NOCOWS:
return
if C.ANSIBLE_COW_PATH:
self.b_cowsay = C.ANSIBLE_COW_PATH
else:
for b_cow_path in b_COW_PATHS:
if os.path.exists(b_cow_path):
self.b_cowsay = b_cow_path
@proxy_display
def display(self, msg, color=None, stderr=False, screen_only=False, log_only=False, newline=True):
""" Display a message to the user
Note: msg *must* be a unicode string to prevent UnicodeError tracebacks.
"""
if not isinstance(msg, str):
raise TypeError(f'Display message must be str, not: {msg.__class__.__name__}')
nocolor = msg
if not log_only:
has_newline = msg.endswith(u'\n')
if has_newline:
msg2 = msg[:-1]
else:
msg2 = msg
if color:
msg2 = stringc(msg2, color)
if has_newline or newline:
msg2 = msg2 + u'\n'
# Note: After Display() class is refactored need to update the log capture
# code in 'bin/ansible-connection' (and other relevant places).
if not stderr:
fileobj = sys.stdout
else:
fileobj = sys.stderr
with self._lock:
fileobj.write(msg2)
# With locks, and the fact that we aren't printing from forks
# just write, and let the system flush. Everything should come out peachy
# I've left this code for historical purposes, or in case we need to add this
# back at a later date. For now ``TaskQueueManager.cleanup`` will perform a
# final flush at shutdown.
# try:
# fileobj.flush()
# except IOError as e:
# # Ignore EPIPE in case fileobj has been prematurely closed, eg.
# # when piping to "head -n1"
# if e.errno != errno.EPIPE:
# raise
if logger and not screen_only:
msg2 = nocolor.lstrip('\n')
lvl = logging.INFO
if color:
# set logger level based on color (not great)
try:
lvl = color_to_log_level[color]
except KeyError:
# this should not happen, but JIC
raise AnsibleAssertionError('Invalid color supplied to display: %s' % color)
# actually log
logger.log(lvl, msg2)
def v(self, msg, host=None):
return self.verbose(msg, host=host, caplevel=0)
def vv(self, msg, host=None):
return self.verbose(msg, host=host, caplevel=1)
def vvv(self, msg, host=None):
return self.verbose(msg, host=host, caplevel=2)
def vvvv(self, msg, host=None):
return self.verbose(msg, host=host, caplevel=3)
def vvvvv(self, msg, host=None):
return self.verbose(msg, host=host, caplevel=4)
def vvvvvv(self, msg, host=None):
return self.verbose(msg, host=host, caplevel=5)
def debug(self, msg, host=None):
if C.DEFAULT_DEBUG:
if host is None:
self.display("%6d %0.5f: %s" % (os.getpid(), time.time(), msg), color=C.COLOR_DEBUG)
else:
self.display("%6d %0.5f [%s]: %s" % (os.getpid(), time.time(), host, msg), color=C.COLOR_DEBUG)
def verbose(self, msg, host=None, caplevel=2):
to_stderr = C.VERBOSE_TO_STDERR
if self.verbosity > caplevel:
if host is None:
self.display(msg, color=C.COLOR_VERBOSE, stderr=to_stderr)
else:
self.display("<%s> %s" % (host, msg), color=C.COLOR_VERBOSE, stderr=to_stderr)
def get_deprecation_message(self, msg, version=None, removed=False, date=None, collection_name=None):
        '''Build a standardized deprecation message (the caller displays it).'''
msg = msg.strip()
if msg and msg[-1] not in ['!', '?', '.']:
msg += '.'
if collection_name == 'ansible.builtin':
collection_name = 'ansible-core'
if removed:
header = '[DEPRECATED]: {0}'.format(msg)
removal_fragment = 'This feature was removed'
help_text = 'Please update your playbooks.'
else:
header = '[DEPRECATION WARNING]: {0}'.format(msg)
removal_fragment = 'This feature will be removed'
# FUTURE: make this a standalone warning so it only shows up once?
help_text = 'Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.'
if collection_name:
from_fragment = 'from {0}'.format(collection_name)
else:
from_fragment = ''
if date:
when = 'in a release after {0}.'.format(date)
elif version:
when = 'in version {0}.'.format(version)
else:
when = 'in a future release.'
message_text = ' '.join(f for f in [header, removal_fragment, from_fragment, when, help_text] if f)
return message_text
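    # Illustrative example (not part of the original module):
    #
    #   Display().get_deprecation_message('Use foo instead', version='2.18',
    #                                     collection_name='ansible.builtin')
    #   # -> "[DEPRECATION WARNING]: Use foo instead. This feature will be
    #   #    removed from ansible-core in version 2.18. Deprecation warnings
    #   #    can be disabled by setting deprecation_warnings=False in ansible.cfg."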
@proxy_display
def deprecated(self, msg, version=None, removed=False, date=None, collection_name=None):
if not removed and not C.DEPRECATION_WARNINGS:
return
message_text = self.get_deprecation_message(msg, version=version, removed=removed, date=date, collection_name=collection_name)
if removed:
raise AnsibleError(message_text)
wrapped = textwrap.wrap(message_text, self.columns, drop_whitespace=False)
message_text = "\n".join(wrapped) + "\n"
if message_text not in self._deprecations:
self.display(message_text.strip(), color=C.COLOR_DEPRECATE, stderr=True)
self._deprecations[message_text] = 1
@proxy_display
def warning(self, msg, formatted=False):
if not formatted:
new_msg = "[WARNING]: %s" % msg
wrapped = textwrap.wrap(new_msg, self.columns)
new_msg = "\n".join(wrapped) + "\n"
else:
new_msg = "\n[WARNING]: \n%s" % msg
if new_msg not in self._warns:
self.display(new_msg, color=C.COLOR_WARN, stderr=True)
self._warns[new_msg] = 1
def system_warning(self, msg):
if C.SYSTEM_WARNINGS:
self.warning(msg)
def banner(self, msg, color=None, cows=True):
'''
Prints a header-looking line with cowsay or stars with length depending on terminal width (3 minimum)
'''
msg = to_text(msg)
if self.b_cowsay and cows:
try:
self.banner_cowsay(msg)
return
except OSError:
self.warning("somebody cleverly deleted cowsay or something during the PB run. heh.")
msg = msg.strip()
try:
star_len = self.columns - get_text_width(msg)
except EnvironmentError:
star_len = self.columns - len(msg)
if star_len <= 3:
star_len = 3
stars = u"*" * star_len
self.display(u"\n%s %s" % (msg, stars), color=color)
def banner_cowsay(self, msg, color=None):
if u": [" in msg:
msg = msg.replace(u"[", u"")
if msg.endswith(u"]"):
msg = msg[:-1]
runcmd = [self.b_cowsay, b"-W", b"60"]
if self.noncow:
thecow = self.noncow
if thecow == 'random':
thecow = random.choice(list(self.cows_available))
runcmd.append(b'-f')
runcmd.append(to_bytes(thecow))
runcmd.append(to_bytes(msg))
cmd = subprocess.Popen(runcmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(out, err) = cmd.communicate()
self.display(u"%s\n" % to_text(out), color=color)
def error(self, msg, wrap_text=True):
if wrap_text:
new_msg = u"\n[ERROR]: %s" % msg
wrapped = textwrap.wrap(new_msg, self.columns)
new_msg = u"\n".join(wrapped) + u"\n"
else:
new_msg = u"ERROR! %s" % msg
if new_msg not in self._errors:
self.display(new_msg, color=C.COLOR_ERROR, stderr=True)
self._errors[new_msg] = 1
@staticmethod
def prompt(msg, private=False):
if private:
return getpass.getpass(msg)
else:
return input(msg)
def do_var_prompt(self, varname, private=True, prompt=None, encrypt=None, confirm=False, salt_size=None, salt=None, default=None, unsafe=None):
result = None
if sys.__stdin__.isatty():
do_prompt = self.prompt
if prompt and default is not None:
msg = "%s [%s]: " % (prompt, default)
elif prompt:
msg = "%s: " % prompt
else:
msg = 'input for %s: ' % varname
if confirm:
while True:
result = do_prompt(msg, private)
second = do_prompt("confirm " + msg, private)
if result == second:
break
self.display("***** VALUES ENTERED DO NOT MATCH ****")
else:
result = do_prompt(msg, private)
else:
result = None
self.warning("Not prompting as we are not in interactive mode")
# if result is false and default is not None
if not result and default is not None:
result = default
if encrypt:
# Circular import because encrypt needs a display class
from ansible.utils.encrypt import do_encrypt
result = do_encrypt(result, encrypt, salt_size, salt)
# handle utf-8 chars
result = to_text(result, errors='surrogate_or_strict')
if unsafe:
result = wrap_var(result)
return result
def _set_column_width(self):
if os.isatty(1):
tty_size = unpack('HHHH', fcntl.ioctl(1, termios.TIOCGWINSZ, pack('HHHH', 0, 0, 0, 0)))[1]
else:
tty_size = 0
self.columns = max(79, tty_size - 1)
def prompt_until(self, msg, private=False, seconds=None, interrupt_input=None, complete_input=None):
if self._final_q:
from ansible.executor.process.worker import current_worker
self._final_q.send_prompt(
worker_id=current_worker.worker_id, prompt=msg, private=private, seconds=seconds,
interrupt_input=interrupt_input, complete_input=complete_input
)
return current_worker.worker_queue.get()
if HAS_CURSES and not self.setup_curses:
setupterm()
self.setup_curses = True
if (
self._stdin_fd is None
or not os.isatty(self._stdin_fd)
# Compare the current process group to the process group associated
# with terminal of the given file descriptor to determine if the process
# is running in the background.
or os.getpgrp() != os.tcgetpgrp(self._stdin_fd)
):
raise AnsiblePromptNoninteractive('stdin is not interactive')
# When seconds/interrupt_input/complete_input are all None, this does mostly the same thing as input/getpass,
# but self.prompt may raise a KeyboardInterrupt, which must be caught in the main thread.
# If the main thread handled this, it would also need to send a newline to the tty of any hanging pids.
# if seconds is None and interrupt_input is None and complete_input is None:
# try:
# return self.prompt(msg, private=private)
# except KeyboardInterrupt:
# # can't catch in the results_thread_main daemon thread
# raise AnsiblePromptInterrupt('user interrupt')
self.display(msg)
result = b''
with self._lock:
original_stdin_settings = termios.tcgetattr(self._stdin_fd)
try:
setup_prompt(self._stdin_fd, self._stdout_fd, seconds, not private)
# flush the buffer to make sure no previous key presses
# are read in below
termios.tcflush(self._stdin, termios.TCIFLUSH)
# read input 1 char at a time until the optional timeout or complete/interrupt condition is met
return self._read_non_blocking_stdin(echo=not private, seconds=seconds, interrupt_input=interrupt_input, complete_input=complete_input)
finally:
                # restore the old settings for the duped stdin fd
termios.tcsetattr(self._stdin_fd, termios.TCSADRAIN, original_stdin_settings)
def _read_non_blocking_stdin(
self,
echo=False, # type: bool
seconds=None, # type: int
interrupt_input=None, # type: t.Iterable[bytes]
complete_input=None, # type: t.Iterable[bytes]
): # type: (...) -> bytes
if self._final_q:
raise NotImplementedError
if seconds is not None:
start = time.time()
if interrupt_input is None:
try:
interrupt = termios.tcgetattr(sys.stdin.buffer.fileno())[6][termios.VINTR]
except Exception:
interrupt = b'\x03' # value for Ctrl+C
try:
backspace_sequences = [termios.tcgetattr(self._stdin_fd)[6][termios.VERASE]]
except Exception:
# unsupported/not present, use default
backspace_sequences = [b'\x7f', b'\x08']
result_string = b''
while seconds is None or (time.time() - start < seconds):
key_pressed = None
try:
os.set_blocking(self._stdin_fd, False)
while key_pressed is None and (seconds is None or (time.time() - start < seconds)):
key_pressed = self._stdin.read(1)
# throttle to prevent excess CPU consumption
time.sleep(C.DEFAULT_INTERNAL_POLL_INTERVAL)
finally:
os.set_blocking(self._stdin_fd, True)
if key_pressed is None:
key_pressed = b''
if (interrupt_input is None and key_pressed == interrupt) or (interrupt_input is not None and key_pressed.lower() in interrupt_input):
clear_line(self._stdout)
raise AnsiblePromptInterrupt('user interrupt')
if (complete_input is None and key_pressed in (b'\r', b'\n')) or (complete_input is not None and key_pressed.lower() in complete_input):
clear_line(self._stdout)
break
elif key_pressed in backspace_sequences:
clear_line(self._stdout)
result_string = result_string[:-1]
if echo:
self._stdout.write(result_string)
self._stdout.flush()
else:
result_string += key_pressed
return result_string
@property
def _stdin(self):
if self._final_q:
raise NotImplementedError
try:
return sys.stdin.buffer
except AttributeError:
return None
@property
def _stdin_fd(self):
try:
return self._stdin.fileno()
except (ValueError, AttributeError):
return None
@property
def _stdout(self):
if self._final_q:
raise NotImplementedError
return sys.stdout.buffer
@property
def _stdout_fd(self):
try:
return self._stdout.fileno()
except (ValueError, AttributeError):
return None
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,841 |
Add type hints to `ansible.utils.display::Display`
|
### Summary
Add type hints to `ansible.utils.display::Display`
### Issue Type
Feature Idea
### Component Name
lib/ansible/utils/display.py
|
https://github.com/ansible/ansible/issues/80841
|
https://github.com/ansible/ansible/pull/81400
|
48d8e067bf6c947a96750b8a61c7d6ef8cad594b
|
4d409888762ca9ca0ae2d67153be5f21a77f5149
| 2023-05-18T16:31:55Z |
python
| 2023-09-05T19:08:13Z |
lib/ansible/utils/encrypt.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import random
import re
import string
import sys
from collections import namedtuple
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleAssertionError
from ansible.module_utils.six import text_type
from ansible.module_utils.common.text.converters import to_text, to_bytes
from ansible.utils.display import Display
PASSLIB_E = CRYPT_E = None
HAS_CRYPT = PASSLIB_AVAILABLE = False
try:
import passlib
import passlib.hash
from passlib.utils.handlers import HasRawSalt, PrefixWrapper
try:
from passlib.utils.binary import bcrypt64
except ImportError:
from passlib.utils import bcrypt64
PASSLIB_AVAILABLE = True
except Exception as e:
PASSLIB_E = e
try:
import crypt
HAS_CRYPT = True
except Exception as e:
CRYPT_E = e
display = Display()
__all__ = ['do_encrypt']
DEFAULT_PASSWORD_LENGTH = 20
def random_password(length=DEFAULT_PASSWORD_LENGTH, chars=C.DEFAULT_PASSWORD_CHARS, seed=None):
'''Return a random password string of length containing only chars
:kwarg length: The number of characters in the new password. Defaults to 20.
:kwarg chars: The characters to choose from. The default is all ascii
letters, ascii digits, and these symbols ``.,:-_``
'''
if not isinstance(chars, text_type):
raise AnsibleAssertionError('%s (%s) is not a text_type' % (chars, type(chars)))
if seed is None:
random_generator = random.SystemRandom()
else:
random_generator = random.Random(seed)
return u''.join(random_generator.choice(chars) for dummy in range(length))
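# Illustrative examples (not part of the original module): without a seed the
# characters come from random.SystemRandom and are non-reproducible; with a
# seed the result is deterministic:
#
#   random_password(length=12)                                         # varies per call
#   random_password(seed='myhost') == random_password(seed='myhost')   # True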
def random_salt(length=8):
"""Return a text string suitable for use as a salt for the hash functions we use to encrypt passwords.
"""
# Note passlib salt values must be pure ascii so we can't let the user
# configure this
salt_chars = string.ascii_letters + string.digits + u'./'
return random_password(length=length, chars=salt_chars)
class BaseHash(object):
algo = namedtuple('algo', ['crypt_id', 'salt_size', 'implicit_rounds', 'salt_exact', 'implicit_ident'])
algorithms = {
'md5_crypt': algo(crypt_id='1', salt_size=8, implicit_rounds=None, salt_exact=False, implicit_ident=None),
'bcrypt': algo(crypt_id='2b', salt_size=22, implicit_rounds=12, salt_exact=True, implicit_ident='2b'),
'sha256_crypt': algo(crypt_id='5', salt_size=16, implicit_rounds=535000, salt_exact=False, implicit_ident=None),
'sha512_crypt': algo(crypt_id='6', salt_size=16, implicit_rounds=656000, salt_exact=False, implicit_ident=None),
}
def __init__(self, algorithm):
self.algorithm = algorithm
class CryptHash(BaseHash):
def __init__(self, algorithm):
super(CryptHash, self).__init__(algorithm)
if not HAS_CRYPT:
raise AnsibleError("crypt.crypt cannot be used as the 'crypt' python library is not installed or is unusable.", orig_exc=CRYPT_E)
if sys.platform.startswith('darwin'):
raise AnsibleError("crypt.crypt not supported on Mac OS X/Darwin, install passlib python module")
if algorithm not in self.algorithms:
raise AnsibleError("crypt.crypt does not support '%s' algorithm" % self.algorithm)
display.deprecated(
"Encryption using the Python crypt module is deprecated. The "
"Python crypt module is deprecated and will be removed from "
"Python 3.13. Install the passlib library for continued "
"encryption functionality.",
        version='2.17'
)
self.algo_data = self.algorithms[algorithm]
def hash(self, secret, salt=None, salt_size=None, rounds=None, ident=None):
salt = self._salt(salt, salt_size)
rounds = self._rounds(rounds)
ident = self._ident(ident)
return self._hash(secret, salt, rounds, ident)
def _salt(self, salt, salt_size):
salt_size = salt_size or self.algo_data.salt_size
ret = salt or random_salt(salt_size)
if re.search(r'[^./0-9A-Za-z]', ret):
raise AnsibleError("invalid characters in salt")
if self.algo_data.salt_exact and len(ret) != self.algo_data.salt_size:
raise AnsibleError("invalid salt size")
elif not self.algo_data.salt_exact and len(ret) > self.algo_data.salt_size:
raise AnsibleError("invalid salt size")
return ret
def _rounds(self, rounds):
if self.algorithm == 'bcrypt':
# crypt requires 2 digits for rounds
return rounds or self.algo_data.implicit_rounds
elif rounds == self.algo_data.implicit_rounds:
# Passlib does not include the rounds if it is the same as implicit_rounds.
# Make crypt lib behave the same, by not explicitly specifying the rounds in that case.
return None
else:
return rounds
def _ident(self, ident):
if not ident:
return self.algo_data.crypt_id
if self.algorithm == 'bcrypt':
return ident
return None
def _hash(self, secret, salt, rounds, ident):
saltstring = ""
if ident:
saltstring = "$%s" % ident
if rounds:
if self.algorithm == 'bcrypt':
saltstring += "$%d" % rounds
else:
saltstring += "$rounds=%d" % rounds
saltstring += "$%s" % salt
# crypt.crypt throws OSError on Python >= 3.9 if it cannot parse saltstring.
try:
result = crypt.crypt(secret, saltstring)
orig_exc = None
except OSError as e:
result = None
orig_exc = e
        # A None result would be interpreted by some modules (e.g. the user module)
        # as no password at all.
if not result:
raise AnsibleError(
"crypt.crypt does not support '%s' algorithm" % self.algorithm,
orig_exc=orig_exc,
)
return result
class PasslibHash(BaseHash):
def __init__(self, algorithm):
super(PasslibHash, self).__init__(algorithm)
if not PASSLIB_AVAILABLE:
raise AnsibleError("passlib must be installed and usable to hash with '%s'" % algorithm, orig_exc=PASSLIB_E)
display.vv("Using passlib to hash input with '%s'" % algorithm)
try:
self.crypt_algo = getattr(passlib.hash, algorithm)
except Exception:
raise AnsibleError("passlib does not support '%s' algorithm" % algorithm)
def hash(self, secret, salt=None, salt_size=None, rounds=None, ident=None):
salt = self._clean_salt(salt)
rounds = self._clean_rounds(rounds)
ident = self._clean_ident(ident)
return self._hash(secret, salt=salt, salt_size=salt_size, rounds=rounds, ident=ident)
def _clean_ident(self, ident):
ret = None
if not ident:
if self.algorithm in self.algorithms:
return self.algorithms.get(self.algorithm).implicit_ident
return ret
if self.algorithm == 'bcrypt':
return ident
return ret
def _clean_salt(self, salt):
if not salt:
return None
elif issubclass(self.crypt_algo.wrapped if isinstance(self.crypt_algo, PrefixWrapper) else self.crypt_algo, HasRawSalt):
ret = to_bytes(salt, encoding='ascii', errors='strict')
else:
ret = to_text(salt, encoding='ascii', errors='strict')
# Ensure the salt has the correct padding
if self.algorithm == 'bcrypt':
ret = bcrypt64.repair_unused(ret)
return ret
def _clean_rounds(self, rounds):
algo_data = self.algorithms.get(self.algorithm)
if rounds:
return rounds
elif algo_data and algo_data.implicit_rounds:
# The default rounds used by passlib depend on the passlib version.
# For consistency ensure that passlib behaves the same as crypt in case no rounds were specified.
# Thus use the crypt defaults.
return algo_data.implicit_rounds
else:
return None
def _hash(self, secret, salt, salt_size, rounds, ident):
# Not every hash algorithm supports every parameter.
# Thus create the settings dict only with set parameters.
settings = {}
if salt:
settings['salt'] = salt
if salt_size:
settings['salt_size'] = salt_size
if rounds:
settings['rounds'] = rounds
if ident:
settings['ident'] = ident
# starting with passlib 1.7 'using' and 'hash' should be used instead of 'encrypt'
try:
if hasattr(self.crypt_algo, 'hash'):
result = self.crypt_algo.using(**settings).hash(secret)
elif hasattr(self.crypt_algo, 'encrypt'):
result = self.crypt_algo.encrypt(secret, **settings)
else:
raise AnsibleError("installed passlib version %s not supported" % passlib.__version__)
except ValueError as e:
raise AnsibleError("Could not hash the secret.", orig_exc=e)
# passlib.hash should always return something or raise an exception.
# Still ensure that there is always a result.
# Otherwise an empty password might be assumed by some modules, like the user module.
if not result:
raise AnsibleError("failed to hash with algorithm '%s'" % self.algorithm)
        # Hashes from passlib.hash should be represented as ascii strings of hex
        # digits, so this conversion should not raise. If a hash is not representable
        # as such, we want the traceback so the algorithm can be identified and
        # blocked, because it may impact calling code.
return to_text(result, errors='strict')
def passlib_or_crypt(secret, algorithm, salt=None, salt_size=None, rounds=None, ident=None):
if PASSLIB_AVAILABLE:
return PasslibHash(algorithm).hash(secret, salt=salt, salt_size=salt_size, rounds=rounds, ident=ident)
if HAS_CRYPT:
return CryptHash(algorithm).hash(secret, salt=salt, salt_size=salt_size, rounds=rounds, ident=ident)
raise AnsibleError("Unable to encrypt nor hash, either crypt or passlib must be installed.", orig_exc=CRYPT_E)
def do_encrypt(result, encrypt, salt_size=None, salt=None, ident=None):
return passlib_or_crypt(result, encrypt, salt_size=salt_size, salt=salt, ident=ident)
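# Illustrative usage sketch (not part of the original module): hash a secret
# with an explicit algorithm; the exact output depends on whether passlib or
# crypt is the active backend:
#
#   do_encrypt('secret', 'sha512_crypt')        # '$6$...' crypt-style hash
#   do_encrypt('secret', 'bcrypt', ident='2b')  # '$2b$12$...' bcrypt hash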
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,841 |
Add type hints to `ansible.utils.display::Display`
|
### Summary
Add type hints to `ansible.utils.display::Display`
### Issue Type
Feature Idea
### Component Name
lib/ansible/utils/display.py
|
https://github.com/ansible/ansible/issues/80841
|
https://github.com/ansible/ansible/pull/81400
|
48d8e067bf6c947a96750b8a61c7d6ef8cad594b
|
4d409888762ca9ca0ae2d67153be5f21a77f5149
| 2023-05-18T16:31:55Z |
python
| 2023-09-05T19:08:13Z |
lib/ansible/vars/plugins.py
|
# Copyright (c) 2018 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
from ansible import constants as C
from ansible.errors import AnsibleError
from ansible.inventory.host import Host
from ansible.module_utils.common.text.converters import to_bytes
from ansible.plugins.loader import vars_loader
from ansible.utils.collection_loader import AnsibleCollectionRef
from ansible.utils.display import Display
from ansible.utils.vars import combine_vars
display = Display()
def get_plugin_vars(loader, plugin, path, entities):
data = {}
try:
data = plugin.get_vars(loader, path, entities)
except AttributeError:
try:
for entity in entities:
if isinstance(entity, Host):
data |= plugin.get_host_vars(entity.name)
else:
data |= plugin.get_group_vars(entity.name)
except AttributeError:
if hasattr(plugin, 'run'):
raise AnsibleError("Cannot use v1 type vars plugin %s from %s" % (plugin._load_name, plugin._original_path))
else:
raise AnsibleError("Invalid vars plugin %s from %s" % (plugin._load_name, plugin._original_path))
return data
def get_vars_from_path(loader, path, entities, stage):
data = {}
vars_plugin_list = list(vars_loader.all())
for plugin_name in C.VARIABLE_PLUGINS_ENABLED:
if AnsibleCollectionRef.is_valid_fqcr(plugin_name):
vars_plugin = vars_loader.get(plugin_name)
if vars_plugin is None:
# Error if there's no play directory or the name is wrong?
continue
if vars_plugin not in vars_plugin_list:
vars_plugin_list.append(vars_plugin)
for plugin in vars_plugin_list:
# legacy plugins always run by default, but they can set REQUIRES_ENABLED=True to opt out.
builtin_or_legacy = plugin.ansible_name.startswith('ansible.builtin.') or '.' not in plugin.ansible_name
# builtin is supposed to have REQUIRES_ENABLED=True, the following is for legacy plugins...
needs_enabled = not builtin_or_legacy
if hasattr(plugin, 'REQUIRES_ENABLED'):
needs_enabled = plugin.REQUIRES_ENABLED
elif hasattr(plugin, 'REQUIRES_WHITELIST'):
display.deprecated("The VarsModule class variable 'REQUIRES_WHITELIST' is deprecated. "
"Use 'REQUIRES_ENABLED' instead.", version=2.18)
needs_enabled = plugin.REQUIRES_WHITELIST
# A collection plugin was enabled to get to this point because vars_loader.all() does not include collection plugins.
# Warn if a collection plugin has REQUIRES_ENABLED because it has no effect.
if not builtin_or_legacy and (hasattr(plugin, 'REQUIRES_ENABLED') or hasattr(plugin, 'REQUIRES_WHITELIST')):
display.warning(
"Vars plugins in collections must be enabled to be loaded, REQUIRES_ENABLED is not supported. "
"This should be removed from the plugin %s." % plugin.ansible_name
)
elif builtin_or_legacy and needs_enabled and not plugin.matches_name(C.VARIABLE_PLUGINS_ENABLED):
continue
has_stage = hasattr(plugin, 'get_option') and plugin.has_option('stage')
# if a plugin-specific setting has not been provided, use the global setting
# older/non shipped plugins that don't support the plugin-specific setting should also use the global setting
use_global = (has_stage and plugin.get_option('stage') is None) or not has_stage
if use_global:
if C.RUN_VARS_PLUGINS == 'demand' and stage == 'inventory':
continue
elif C.RUN_VARS_PLUGINS == 'start' and stage == 'task':
continue
elif has_stage and plugin.get_option('stage') not in ('all', stage):
continue
data = combine_vars(data, get_plugin_vars(loader, plugin, path, entities))
return data
def get_vars_from_inventory_sources(loader, sources, entities, stage):
data = {}
for path in sources:
if path is None:
continue
if ',' in path and not os.path.exists(path): # skip host lists
continue
elif not os.path.isdir(to_bytes(path)):
# always pass the directory of the inventory source file
path = os.path.dirname(path)
data = combine_vars(data, get_vars_from_path(loader, path, entities, stage))
return data
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,488 |
Remove use of `pkg_resources` from the `pip` module
|
### Summary
Use of `pkg_resources` is [deprecated](https://github.com/pypa/setuptools/pull/3843) as of `setuptools` version [67.5.0](https://setuptools.pypa.io/en/stable/history.html#v67-5-0).
https://github.com/ansible/ansible/blob/0371ea08d6de55635ffcbf94da5ddec0cd809495/lib/ansible/modules/pip.py#L277-L284
### Issue Type
Feature Idea
### Component Name
pip module
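A minimal sketch of the commonly suggested replacement, assuming the third-party `packaging` library (not named in this issue; the linked PR contains the actual fix):
```python
# Hypothetical sketch only; see the linked PR for the real change.
from packaging.requirements import Requirement

req = Requirement("django>1.11.0,<1.12.0")
print(req.name)                          # django
print(req.specifier.contains("1.11.5"))  # True
print(req.specifier.contains("1.12.0"))  # False
```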
|
https://github.com/ansible/ansible/issues/80488
|
https://github.com/ansible/ansible/pull/80881
|
dd79c49a4de3a6dd5bd9d31503bd7846475e8e57
|
3ec0850df9429f4b1abc78d9ba505df12d7dd1db
| 2023-04-12T02:06:08Z |
python
| 2023-09-05T21:11:18Z |
changelogs/fragments/80488-pip-pkg-resources.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,488 |
Remove use of `pkg_resources` from the `pip` module
|
### Summary
Use of `pkg_resources` is [deprecated](https://github.com/pypa/setuptools/pull/3843) as of `setuptools` version [67.5.0](https://setuptools.pypa.io/en/stable/history.html#v67-5-0).
https://github.com/ansible/ansible/blob/0371ea08d6de55635ffcbf94da5ddec0cd809495/lib/ansible/modules/pip.py#L277-L284
### Issue Type
Feature Idea
### Component Name
pip module
|
https://github.com/ansible/ansible/issues/80488
|
https://github.com/ansible/ansible/pull/80881
|
dd79c49a4de3a6dd5bd9d31503bd7846475e8e57
|
3ec0850df9429f4b1abc78d9ba505df12d7dd1db
| 2023-04-12T02:06:08Z |
python
| 2023-09-05T21:11:18Z |
lib/ansible/modules/pip.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Matt Wright <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: pip
short_description: Manages Python library dependencies
description:
- "Manage Python library dependencies. To use this module, one of the following keys is required: O(name)
or O(requirements)."
version_added: "0.7"
options:
name:
description:
- The name of a Python library to install or the url(bzr+,hg+,git+,svn+) of the remote package.
- This can be a list (since 2.2) and contain version specifiers (since 2.7).
type: list
elements: str
version:
description:
- The version number to install of the Python library specified in the O(name) parameter.
type: str
requirements:
description:
- The path to a pip requirements file, which should be local to the remote system.
File can be specified as a relative path if using the chdir option.
type: str
virtualenv:
description:
- An optional path to a I(virtualenv) directory to install into.
It cannot be specified together with the 'executable' parameter
(added in 2.1).
If the virtualenv does not exist, it will be created before installing
packages. The optional virtualenv_site_packages, virtualenv_command,
and virtualenv_python options affect the creation of the virtualenv.
type: path
virtualenv_site_packages:
description:
- Whether the virtual environment will inherit packages from the
global site-packages directory. Note that if this setting is
changed on an already existing virtual environment it will not
have any effect, the environment must be deleted and newly
created.
type: bool
default: "no"
version_added: "1.0"
virtualenv_command:
description:
- The command or a pathname to the command to create the virtual
environment with. For example V(pyvenv), V(virtualenv),
V(virtualenv2), V(~/bin/virtualenv), V(/usr/local/bin/virtualenv).
type: path
default: virtualenv
version_added: "1.1"
virtualenv_python:
description:
- The Python executable used for creating the virtual environment.
For example V(python3.12), V(python2.7). When not specified, the
Python version used to run the ansible module is used. This parameter
should not be used when O(virtualenv_command) is using V(pyvenv) or
the C(-m venv) module.
type: str
version_added: "2.0"
state:
description:
      - The state of the module.
- The 'forcereinstall' option is only available in Ansible 2.1 and above.
type: str
choices: [ absent, forcereinstall, latest, present ]
default: present
extra_args:
description:
- Extra arguments passed to pip.
type: str
version_added: "1.0"
editable:
description:
- Pass the editable flag.
type: bool
default: 'no'
version_added: "2.0"
chdir:
description:
      - Change into this directory before running the command.
type: path
version_added: "1.3"
executable:
description:
- The explicit executable or pathname for the pip executable,
if different from the Ansible Python interpreter. For
example V(pip3.3), if there are both Python 2.7 and 3.3 installations
in the system and you want to run pip for the Python 3.3 installation.
- Mutually exclusive with O(virtualenv) (added in 2.1).
- Does not affect the Ansible Python interpreter.
- The setuptools package must be installed for both the Ansible Python interpreter
and for the version of Python specified by this option.
type: path
version_added: "1.3"
umask:
description:
- The system umask to apply before installing the pip package. This is
useful, for example, when installing on systems that have a very
restrictive umask by default (e.g., "0077") and you want to pip install
packages which are to be used by all users. Note that this requires you
to specify desired umask mode as an octal string, (e.g., "0022").
type: str
version_added: "2.1"
extends_documentation_fragment:
- action_common_attributes
attributes:
check_mode:
support: full
diff_mode:
support: none
platform:
platforms: posix
notes:
- The virtualenv (U(http://www.virtualenv.org/)) must be
installed on the remote host if the virtualenv parameter is specified and
the virtualenv needs to be created.
- Although it executes using the Ansible Python interpreter, the pip module shells out to
run the actual pip command, so it can use any pip version you specify with O(executable).
By default, it uses the pip version for the Ansible Python interpreter. For example, pip3 on python 3, and pip2 or pip on python 2.
- The interpreter used by Ansible
(see R(ansible_python_interpreter, ansible_python_interpreter))
requires the setuptools package, regardless of the version of pip set with
the O(executable) option.
requirements:
- pip
- virtualenv
- setuptools
author:
- Matt Wright (@mattupstate)
'''
EXAMPLES = '''
- name: Install bottle python package
ansible.builtin.pip:
name: bottle
- name: Install bottle python package at version 0.11
ansible.builtin.pip:
name: bottle==0.11
- name: Install bottle python package with version specifiers
ansible.builtin.pip:
name: bottle>0.10,<0.20,!=0.11
- name: Install multi python packages with version specifiers
ansible.builtin.pip:
name:
- django>1.11.0,<1.12.0
- bottle>0.10,<0.20,!=0.11
- name: Install python package using a proxy
ansible.builtin.pip:
name: six
environment:
http_proxy: 'http://127.0.0.1:8080'
https_proxy: 'https://127.0.0.1:8080'
# You do not have to supply '-e' option in extra_args
- name: Install MyApp using one of the remote protocols (bzr+,hg+,git+,svn+)
ansible.builtin.pip:
name: svn+http://myrepo/svn/MyApp#egg=MyApp
- name: Install MyApp using one of the remote protocols (bzr+,hg+,git+)
ansible.builtin.pip:
name: git+http://myrepo/app/MyApp
- name: Install MyApp from local tarball
ansible.builtin.pip:
name: file:///path/to/MyApp.tar.gz
- name: Install bottle into the specified (virtualenv), inheriting none of the globally installed modules
ansible.builtin.pip:
name: bottle
virtualenv: /my_app/venv
- name: Install bottle into the specified (virtualenv), inheriting globally installed modules
ansible.builtin.pip:
name: bottle
virtualenv: /my_app/venv
virtualenv_site_packages: yes
- name: Install bottle into the specified (virtualenv), using Python 2.7
ansible.builtin.pip:
name: bottle
virtualenv: /my_app/venv
virtualenv_command: virtualenv-2.7
- name: Install bottle within a user home directory
ansible.builtin.pip:
name: bottle
extra_args: --user
- name: Install specified python requirements
ansible.builtin.pip:
requirements: /my_app/requirements.txt
- name: Install specified python requirements in indicated (virtualenv)
ansible.builtin.pip:
requirements: /my_app/requirements.txt
virtualenv: /my_app/venv
- name: Install specified python requirements and custom Index URL
ansible.builtin.pip:
requirements: /my_app/requirements.txt
extra_args: -i https://example.com/pypi/simple
- name: Install specified python requirements offline from a local directory with downloaded packages
ansible.builtin.pip:
requirements: /my_app/requirements.txt
extra_args: "--no-index --find-links=file:///my_downloaded_packages_dir"
- name: Install bottle for Python 3.3 specifically, using the 'pip3.3' executable
ansible.builtin.pip:
name: bottle
executable: pip3.3
- name: Install bottle, forcing reinstallation if it's already installed
ansible.builtin.pip:
name: bottle
state: forcereinstall
- name: Install bottle while ensuring the umask is 0022 (to ensure other users can use it)
ansible.builtin.pip:
name: bottle
umask: "0022"
become: True
'''
RETURN = '''
cmd:
description: pip command used by the module
returned: success
type: str
sample: pip2 install ansible six
name:
description: list of python modules targeted by pip
returned: success
type: list
sample: ['ansible', 'six']
requirements:
description: Path to the requirements file
returned: success, if a requirements file was provided
type: str
sample: "/srv/git/project/requirements.txt"
version:
description: Version of the package specified in 'name'
returned: success, if a name and version were provided
type: str
sample: "2.5.1"
virtualenv:
description: Path to the virtualenv
returned: success, if a virtualenv path was provided
type: str
sample: "/tmp/virtualenv"
'''
import argparse
import os
import re
import sys
import tempfile
import operator
import shlex
import traceback
from ansible.module_utils.compat.version import LooseVersion
SETUPTOOLS_IMP_ERR = None
try:
from pkg_resources import Requirement
HAS_SETUPTOOLS = True
except ImportError:
HAS_SETUPTOOLS = False
SETUPTOOLS_IMP_ERR = traceback.format_exc()
from ansible.module_utils.common.text.converters import to_native
from ansible.module_utils.basic import AnsibleModule, is_executable, missing_required_lib
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.six import PY3
#: Python one-liners to be run at the command line that will determine the
# installed version for these special libraries. These are libraries that
# don't end up in the output of pip freeze.
_SPECIAL_PACKAGE_CHECKERS = {'setuptools': 'import setuptools; print(setuptools.__version__)',
'pip': 'import pkg_resources; print(pkg_resources.get_distribution("pip").version)'}
_VCS_RE = re.compile(r'(svn|git|hg|bzr)\+')
op_dict = {">=": operator.ge, "<=": operator.le, ">": operator.gt,
"<": operator.lt, "==": operator.eq, "!=": operator.ne, "~=": operator.ge}
def _is_vcs_url(name):
"""Test whether a name is a vcs url or not."""
return re.match(_VCS_RE, name)
def _is_venv_command(command):
venv_parser = argparse.ArgumentParser()
venv_parser.add_argument('-m', type=str)
argv = shlex.split(command)
if argv[0] == 'pyvenv':
return True
args, dummy = venv_parser.parse_known_args(argv[1:])
if args.m == 'venv':
return True
return False
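# Illustrative examples (not part of the original module):
#
#   _is_venv_command('pyvenv')                    # True
#   _is_venv_command('/usr/bin/python3 -m venv')  # True
#   _is_venv_command('virtualenv')                # False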
def _is_package_name(name):
"""Test whether the name is a package name or a version specifier."""
return not name.lstrip().startswith(tuple(op_dict.keys()))
def _recover_package_name(names):
"""Recover package names as list from user's raw input.
:input: a mixed and invalid list of names or version specifiers
    :return: a list of valid package names
eg.
input: ['django>1.11.1', '<1.11.3', 'ipaddress', 'simpleproject>1.1.0', '<2.0.0']
return: ['django>1.11.1,<1.11.3', 'ipaddress', 'simpleproject>1.1.0,<2.0.0']
input: ['django>1.11.1,<1.11.3,ipaddress', 'simpleproject>1.1.0,<2.0.0']
return: ['django>1.11.1,<1.11.3', 'ipaddress', 'simpleproject>1.1.0,<2.0.0']
"""
# rebuild input name to a flat list so we can tolerate any combination of input
tmp = []
for one_line in names:
tmp.extend(one_line.split(","))
names = tmp
# reconstruct the names
name_parts = []
package_names = []
in_brackets = False
for name in names:
if _is_package_name(name) and not in_brackets:
if name_parts:
package_names.append(",".join(name_parts))
name_parts = []
if "[" in name:
in_brackets = True
if in_brackets and "]" in name:
in_brackets = False
name_parts.append(name)
package_names.append(",".join(name_parts))
return package_names
def _get_cmd_options(module, cmd):
thiscmd = cmd + " --help"
rc, stdout, stderr = module.run_command(thiscmd)
if rc != 0:
module.fail_json(msg="Could not get output from %s: %s" % (thiscmd, stdout + stderr))
words = stdout.strip().split()
cmd_options = [x for x in words if x.startswith('--')]
return cmd_options
def _get_packages(module, pip, chdir):
'''Return results of pip command to get packages.'''
# Try 'pip list' command first.
command = pip + ['list', '--format=freeze']
locale = get_best_parsable_locale(module)
lang_env = {'LANG': locale, 'LC_ALL': locale, 'LC_MESSAGES': locale}
rc, out, err = module.run_command(command, cwd=chdir, environ_update=lang_env)
# If there was an error (pip version too old) then use 'pip freeze'.
if rc != 0:
command = pip + ['freeze']
rc, out, err = module.run_command(command, cwd=chdir)
if rc != 0:
_fail(module, command, out, err)
return ' '.join(command), out, err
def _is_present(module, req, installed_pkgs, pkg_command):
'''Return whether or not package is installed.'''
for pkg in installed_pkgs:
if '==' in pkg:
pkg_name, pkg_version = pkg.split('==')
pkg_name = Package.canonicalize_name(pkg_name)
else:
continue
if pkg_name == req.package_name and req.is_satisfied_by(pkg_version):
return True
return False
def _get_pip(module, env=None, executable=None):
# Older pip only installed under the "/usr/bin/pip" name. Many Linux
# distros install it there.
# By default, we try to use pip required for the current python
# interpreter, so people can use pip to install modules dependencies
candidate_pip_basenames = ('pip2', 'pip')
if PY3:
# pip under python3 installs the "/usr/bin/pip3" name
candidate_pip_basenames = ('pip3',)
pip = None
if executable is not None:
if os.path.isabs(executable):
pip = executable
else:
# If you define your own executable that executable should be the only candidate.
# As noted in the docs, executable doesn't work with virtualenvs.
candidate_pip_basenames = (executable,)
elif executable is None and env is None and _have_pip_module():
# If no executable or virtualenv were specified, use the pip module for the current Python interpreter if available.
# Use of `__main__` is required to support Python 2.6 since support for executing packages with `runpy` was added in Python 2.7.
# Without it Python 2.6 gives the following error: pip is a package and cannot be directly executed
pip = [sys.executable, '-m', 'pip.__main__']
if pip is None:
if env is None:
opt_dirs = []
for basename in candidate_pip_basenames:
pip = module.get_bin_path(basename, False, opt_dirs)
if pip is not None:
break
else:
# For-else: Means that we did not break out of the loop
# (therefore, that pip was not found)
module.fail_json(msg='Unable to find any of %s to use. pip'
' needs to be installed.' % ', '.join(candidate_pip_basenames))
else:
# If we're using a virtualenv we must use the pip from the
# virtualenv
venv_dir = os.path.join(env, 'bin')
candidate_pip_basenames = (candidate_pip_basenames[0], 'pip')
for basename in candidate_pip_basenames:
candidate = os.path.join(venv_dir, basename)
if os.path.exists(candidate) and is_executable(candidate):
pip = candidate
break
else:
# For-else: Means that we did not break out of the loop
# (therefore, that pip was not found)
module.fail_json(msg='Unable to find pip in the virtualenv, %s, ' % env +
'under any of these names: %s. ' % (', '.join(candidate_pip_basenames)) +
'Make sure pip is present in the virtualenv.')
if not isinstance(pip, list):
pip = [pip]
return pip
def _have_pip_module(): # type: () -> bool
"""Return True if the `pip` module can be found using the current Python interpreter, otherwise return False."""
try:
from importlib.util import find_spec
except ImportError:
find_spec = None # type: ignore[assignment] # type: ignore[no-redef]
if find_spec: # type: ignore[truthy-function]
# noinspection PyBroadException
try:
# noinspection PyUnresolvedReferences
found = bool(find_spec('pip'))
except Exception:
found = False
else:
# noinspection PyDeprecation
import imp
# noinspection PyBroadException
try:
# noinspection PyDeprecation
imp.find_module('pip')
except Exception:
found = False
else:
found = True
return found
def _fail(module, cmd, out, err):
msg = ''
if out:
msg += "stdout: %s" % (out, )
if err:
msg += "\n:stderr: %s" % (err, )
module.fail_json(cmd=cmd, msg=msg)
def _get_package_info(module, package, env=None):
"""This is only needed for special packages which do not show up in pip freeze
pip and setuptools fall into this category.
:returns: a string containing the version number if the package is
installed. None if the package is not installed.
"""
if env:
opt_dirs = ['%s/bin' % env]
else:
opt_dirs = []
python_bin = module.get_bin_path('python', False, opt_dirs)
if python_bin is None:
formatted_dep = None
else:
rc, out, err = module.run_command([python_bin, '-c', _SPECIAL_PACKAGE_CHECKERS[package]])
if rc:
formatted_dep = None
else:
formatted_dep = '%s==%s' % (package, out.strip())
return formatted_dep
def setup_virtualenv(module, env, chdir, out, err):
if module.check_mode:
module.exit_json(changed=True)
cmd = shlex.split(module.params['virtualenv_command'])
# Find the binary for the command in the PATH
# and switch the command for the explicit path.
if os.path.basename(cmd[0]) == cmd[0]:
cmd[0] = module.get_bin_path(cmd[0], True)
# Add the system-site-packages option if that
# is enabled, otherwise explicitly set the option
# to not use system-site-packages if that is an
# option provided by the command's help function.
if module.params['virtualenv_site_packages']:
cmd.append('--system-site-packages')
else:
cmd_opts = _get_cmd_options(module, cmd[0])
if '--no-site-packages' in cmd_opts:
cmd.append('--no-site-packages')
virtualenv_python = module.params['virtualenv_python']
# -p is a virtualenv option, not compatible with pyenv or venv
# this conditional validates if the command being used is not any of them
if not _is_venv_command(module.params['virtualenv_command']):
if virtualenv_python:
cmd.append('-p%s' % virtualenv_python)
elif PY3:
# Ubuntu currently has a patch making virtualenv always
# try to use python2. Since Ubuntu16 works without
# python2 installed, this is a problem. This code mimics
# the upstream behaviour of using the python which invoked
# virtualenv to determine which python is used inside of
# the virtualenv (when none are specified).
cmd.append('-p%s' % sys.executable)
# if venv or pyvenv are used and virtualenv_python is defined, then
# virtualenv_python is ignored, this has to be acknowledged
elif module.params['virtualenv_python']:
module.fail_json(
msg='virtualenv_python should not be used when'
' using the venv module or pyvenv as virtualenv_command'
)
cmd.append(env)
rc, out_venv, err_venv = module.run_command(cmd, cwd=chdir)
out += out_venv
err += err_venv
if rc != 0:
_fail(module, cmd, out, err)
return out, err
class Package:
"""Python distribution package metadata wrapper.
A wrapper class for Requirement, which provides
API to parse package name, version specifier,
test whether a package is already satisfied.
"""
_CANONICALIZE_RE = re.compile(r'[-_.]+')
def __init__(self, name_string, version_string=None):
self._plain_package = False
self.package_name = name_string
self._requirement = None
if version_string:
version_string = version_string.lstrip()
separator = '==' if version_string[0].isdigit() else ' '
name_string = separator.join((name_string, version_string))
try:
self._requirement = Requirement.parse(name_string)
            # old pkg_resources will replace 'setuptools' with 'distribute' when it's already installed
if self._requirement.project_name == "distribute" and "setuptools" in name_string:
self.package_name = "setuptools"
self._requirement.project_name = "setuptools"
else:
self.package_name = Package.canonicalize_name(self._requirement.project_name)
self._plain_package = True
        except ValueError:
pass
@property
def has_version_specifier(self):
if self._plain_package:
return bool(self._requirement.specs)
return False
def is_satisfied_by(self, version_to_test):
if not self._plain_package:
return False
try:
return self._requirement.specifier.contains(version_to_test, prereleases=True)
except AttributeError:
# old setuptools has no specifier, do fallback
version_to_test = LooseVersion(version_to_test)
return all(
op_dict[op](version_to_test, LooseVersion(ver))
for op, ver in self._requirement.specs
)
@staticmethod
def canonicalize_name(name):
# This is taken from PEP 503.
return Package._CANONICALIZE_RE.sub("-", name).lower()
def __str__(self):
if self._plain_package:
return to_native(self._requirement)
return self.package_name
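# Illustrative examples (not part of the original module), assuming
# pkg_resources imported successfully above:
#
#   Package('bottle', '0.11')          # parsed as the requirement 'bottle==0.11'
#   Package('bottle>0.10,<0.20').is_satisfied_by('0.12')  # True
#   Package.canonicalize_name('Flask_SQLAlchemy')         # 'flask-sqlalchemy'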
def main():
state_map = dict(
present=['install'],
absent=['uninstall', '-y'],
latest=['install', '-U'],
forcereinstall=['install', '-U', '--force-reinstall'],
)
module = AnsibleModule(
argument_spec=dict(
state=dict(type='str', default='present', choices=list(state_map.keys())),
name=dict(type='list', elements='str'),
version=dict(type='str'),
requirements=dict(type='str'),
virtualenv=dict(type='path'),
virtualenv_site_packages=dict(type='bool', default=False),
virtualenv_command=dict(type='path', default='virtualenv'),
virtualenv_python=dict(type='str'),
extra_args=dict(type='str'),
editable=dict(type='bool', default=False),
chdir=dict(type='path'),
executable=dict(type='path'),
umask=dict(type='str'),
),
required_one_of=[['name', 'requirements']],
mutually_exclusive=[['name', 'requirements'], ['executable', 'virtualenv']],
supports_check_mode=True,
)
if not HAS_SETUPTOOLS:
module.fail_json(msg=missing_required_lib("setuptools"),
exception=SETUPTOOLS_IMP_ERR)
state = module.params['state']
name = module.params['name']
version = module.params['version']
requirements = module.params['requirements']
extra_args = module.params['extra_args']
chdir = module.params['chdir']
umask = module.params['umask']
env = module.params['virtualenv']
venv_created = False
if env and chdir:
env = os.path.join(chdir, env)
if umask and not isinstance(umask, int):
try:
umask = int(umask, 8)
except Exception:
module.fail_json(msg="umask must be an octal integer",
details=to_native(sys.exc_info()[1]))
old_umask = None
if umask is not None:
old_umask = os.umask(umask)
try:
if state == 'latest' and version is not None:
module.fail_json(msg='version is incompatible with state=latest')
if chdir is None:
# this is done to avoid permissions issues with privilege escalation and virtualenvs
chdir = tempfile.gettempdir()
err = ''
out = ''
if env:
if not os.path.exists(os.path.join(env, 'bin', 'activate')):
venv_created = True
out, err = setup_virtualenv(module, env, chdir, out, err)
pip = _get_pip(module, env, module.params['executable'])
cmd = pip + state_map[state]
# If there's a virtualenv we want things we install to be able to use other
# installations that exist as binaries within this virtualenv. Example: we
# install cython and then gevent -- gevent needs to use the cython binary,
# not just a python package that will be found by calling the right python.
# So if there's a virtualenv, we add that bin/ to the beginning of the PATH
# in run_command by setting path_prefix here.
path_prefix = None
if env:
path_prefix = os.path.join(env, 'bin')
# Automatically apply -e option to extra_args when source is a VCS url. VCS
# includes those beginning with svn+, git+, hg+ or bzr+
has_vcs = False
if name:
for pkg in name:
if pkg and _is_vcs_url(pkg):
has_vcs = True
break
# convert raw input package names to Package instances
packages = [Package(pkg) for pkg in _recover_package_name(name)]
# check invalid combination of arguments
if version is not None:
if len(packages) > 1:
module.fail_json(
msg="'version' argument is ambiguous when installing multiple package distributions. "
"Please specify version restrictions next to each package in 'name' argument."
)
if packages[0].has_version_specifier:
module.fail_json(
msg="The 'version' argument conflicts with any version specifier provided along with a package name. "
"Please keep the version specifier, but remove the 'version' argument."
)
# if the version specifier is provided by version, append that into the package
packages[0] = Package(to_native(packages[0]), version)
if module.params['editable']:
args_list = [] # used if extra_args is not used at all
if extra_args:
args_list = extra_args.split(' ')
if '-e' not in args_list:
args_list.append('-e')
# Ok, we will reconstruct the option string
extra_args = ' '.join(args_list)
if extra_args:
cmd.extend(shlex.split(extra_args))
if name:
cmd.extend(to_native(p) for p in packages)
elif requirements:
cmd.extend(['-r', requirements])
else:
module.exit_json(
changed=False,
warnings=["No valid name or requirements file found."],
)
if module.check_mode:
if extra_args or requirements or state == 'latest' or not name:
module.exit_json(changed=True)
pkg_cmd, out_pip, err_pip = _get_packages(module, pip, chdir)
out += out_pip
err += err_pip
changed = False
if name:
pkg_list = [p for p in out.split('\n') if not p.startswith('You are using') and not p.startswith('You should consider') and p]
if pkg_cmd.endswith(' freeze') and ('pip' in name or 'setuptools' in name):
# Older versions of pip (pre-1.3) do not have pip list.
# pip freeze does not list setuptools or pip in its output
# So we need to get those via a specialcase
for pkg in ('setuptools', 'pip'):
if pkg in name:
formatted_dep = _get_package_info(module, pkg, env)
if formatted_dep is not None:
pkg_list.append(formatted_dep)
out += '%s\n' % formatted_dep
for package in packages:
is_present = _is_present(module, package, pkg_list, pkg_cmd)
if (state == 'present' and not is_present) or (state == 'absent' and is_present):
changed = True
break
module.exit_json(changed=changed, cmd=pkg_cmd, stdout=out, stderr=err)
out_freeze_before = None
if requirements or has_vcs:
dummy, out_freeze_before, dummy = _get_packages(module, pip, chdir)
rc, out_pip, err_pip = module.run_command(cmd, path_prefix=path_prefix, cwd=chdir)
out += out_pip
err += err_pip
if rc == 1 and state == 'absent' and \
('not installed' in out_pip or 'not installed' in err_pip):
pass # rc is 1 when attempting to uninstall non-installed package
elif rc != 0:
_fail(module, cmd, out, err)
if state == 'absent':
changed = 'Successfully uninstalled' in out_pip
else:
if out_freeze_before is None:
changed = 'Successfully installed' in out_pip
else:
dummy, out_freeze_after, dummy = _get_packages(module, pip, chdir)
changed = out_freeze_before != out_freeze_after
changed = changed or venv_created
module.exit_json(changed=changed, cmd=cmd, name=name, version=version,
state=state, requirements=requirements, virtualenv=env,
stdout=out, stderr=err)
finally:
if old_umask is not None:
os.umask(old_umask)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,488 |
Remove use of `pkg_resources` from the `pip` module
|
### Summary
Use of `pkg_resources` is [deprecated](https://github.com/pypa/setuptools/pull/3843) as of `setuptools` version [67.5.0](https://setuptools.pypa.io/en/stable/history.html#v67-5-0).
https://github.com/ansible/ansible/blob/0371ea08d6de55635ffcbf94da5ddec0cd809495/lib/ansible/modules/pip.py#L277-L284
### Issue Type
Feature Idea
### Component Name
pip module
|
https://github.com/ansible/ansible/issues/80488
|
https://github.com/ansible/ansible/pull/80881
|
dd79c49a4de3a6dd5bd9d31503bd7846475e8e57
|
3ec0850df9429f4b1abc78d9ba505df12d7dd1db
| 2023-04-12T02:06:08Z |
python
| 2023-09-05T21:11:18Z |
test/integration/targets/pip/tasks/main.yml
|
# Current pip unconditionally uses md5.
# We can re-enable if pip switches to a different hash or allows us to not check md5.
- name: Python 2
when: ansible_python.version.major == 2
block:
- name: find virtualenv command
command: "which virtualenv virtualenv-{{ ansible_python.version.major }}.{{ ansible_python.version.minor }}"
register: command
ignore_errors: true
- name: is virtualenv available to python -m
command: '{{ ansible_python_interpreter }} -m virtualenv'
register: python_m
when: not command.stdout_lines
failed_when: python_m.rc != 2
- name: remember selected virtualenv command
set_fact:
virtualenv: "{{ command.stdout_lines[0] if command is successful else ansible_python_interpreter ~ ' -m virtualenv' }}"
- name: Python 3+
when: ansible_python.version.major > 2
block:
- name: remember selected virtualenv command
set_fact:
virtualenv: "{{ ansible_python_interpreter ~ ' -m venv' }}"
- block:
- name: install git, needed for repo installs
package:
name: git
state: present
when: ansible_distribution not in ["MacOSX", "Alpine"]
register: git_install
- name: ensure wheel is installed
pip:
name: wheel
extra_args: "-c {{ remote_constraints }}"
- include_tasks: pip.yml
always:
- name: platform specific cleanup
include_tasks: "{{ cleanup_filename }}"
with_first_found:
- "{{ ansible_distribution | lower }}_cleanup.yml"
- "default_cleanup.yml"
loop_control:
loop_var: cleanup_filename
when: ansible_fips|bool != True
module_defaults:
pip:
virtualenv_command: "{{ virtualenv }}"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,488 |
Remove use of `pkg_resources` from the `pip` module
|
### Summary
Use of `pkg_resources` is [deprecated](https://github.com/pypa/setuptools/pull/3843) as of `setuptools` version [67.5.0](https://setuptools.pypa.io/en/stable/history.html#v67-5-0).
https://github.com/ansible/ansible/blob/0371ea08d6de55635ffcbf94da5ddec0cd809495/lib/ansible/modules/pip.py#L277-L284
### Issue Type
Feature Idea
### Component Name
pip module
|
https://github.com/ansible/ansible/issues/80488
|
https://github.com/ansible/ansible/pull/80881
|
dd79c49a4de3a6dd5bd9d31503bd7846475e8e57
|
3ec0850df9429f4b1abc78d9ba505df12d7dd1db
| 2023-04-12T02:06:08Z |
python
| 2023-09-05T21:11:18Z |
test/integration/targets/pip/tasks/no_setuptools.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,656 |
'ansible.builtin.ini' uses removed function
|
### Summary
The lookup function fails - lookup('ansible.builtin.ini', ...)
The Ansible ini lookup uses the function 'readfp' on ConfigParser; this function was [removed](https://issues.apache.org/jira/browse/SVN-4899#:~:text=Description,3.12%20(due%20October%202023).) in Python 3.12.
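A minimal sketch of the fix, assuming nothing beyond the standard library: `ConfigParser.readfp()` was deprecated in Python 3.2 and removed in 3.12, and `read_file()` is the drop-in replacement.
```python
# Illustrative only - not the actual plugin code.
import configparser
from io import StringIO

cp = configparser.ConfigParser(allow_no_value=True)
data = StringIO("[Consul]\nconsul_token = abc123\n")

if hasattr(cp, 'read_file'):
    cp.read_file(data)  # available since Python 3.2
else:
    cp.readfp(data)     # removed in Python 3.12

print(cp.get('Consul', 'consul_token'))
```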
### Issue Type
Bug Report
### Component Name
ansible.builtin.ini
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.3]
config file = /home/liss/webnews/pyramid-rt4/ansible/ansible.cfg
configured module search path = ['/home/liss/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/liss/.local/lib/python3.12/site-packages/ansible
ansible collection location = /home/liss/webnews/pyramid-rt4/ansible/collections
executable location = /home/liss/.local/bin/ansible
python version = 3.12.0rc1 (main, Aug 6 2023, 17:56:34) [GCC 9.4.0] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
"Ubuntu 20.04.6 LTS" on WSL
### Steps to Reproduce
- name:
set_fact:
planning_service_monitor_consul_token: "{{ lookup('ansible.builtin.ini', 'consul_token', file='/tmp/pyr_tmp', section='Consul', allow_no_value=True) }}"
### Expected Results
Success
### Actual Results
```console
fatal: [localhost]: FAILED! =>
msg: 'An unhandled exception occurred while running the lookup plugin ''ansible.builtin.ini''. Error was a <class ''AttributeError''>, original message: ''ConfigParser'' object has no attribute ''readfp''. ''ConfigParser'' object has no attribute ''readfp'''
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81656
|
https://github.com/ansible/ansible/pull/81657
|
a65c331e8e035bfaa5361895dafae020799f81f7
|
a861b1adba5d4a12f61ed268f67a224bdaa5f835
| 2023-09-07T07:17:45Z |
python
| 2023-09-07T19:24:50Z |
changelogs/fragments/81656-cf_readfp-deprecated.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,656 |
'ansible.builtin.ini' uses removed function
|
### Summary
The lookup function fails - lookup('ansible.builtin.ini', ...)
The Ansible ini lookup uses the function 'readfp' on ConfigParser; this function was [removed](https://issues.apache.org/jira/browse/SVN-4899#:~:text=Description,3.12%20(due%20October%202023).) in Python 3.12.
### Issue Type
Bug Report
### Component Name
ansible.builtin.ini
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.3]
config file = /home/liss/webnews/pyramid-rt4/ansible/ansible.cfg
configured module search path = ['/home/liss/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/liss/.local/lib/python3.12/site-packages/ansible
ansible collection location = /home/liss/webnews/pyramid-rt4/ansible/collections
executable location = /home/liss/.local/bin/ansible
python version = 3.12.0rc1 (main, Aug 6 2023, 17:56:34) [GCC 9.4.0] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
"Ubuntu 20.04.6 LTS" on WSL
### Steps to Reproduce
- name:
set_fact:
planning_service_monitor_consul_token: "{{ lookup('ansible.builtin.ini', 'consul_token', file='/tmp/pyr_tmp', section='Consul', allow_no_value=True) }}"
### Expected Results
Success
### Actual Results
```console
fatal: [localhost]: FAILED! =>
msg: 'An unhandled exception occurred while running the lookup plugin ''ansible.builtin.ini''. Error was a <class ''AttributeError''>, original message: ''ConfigParser'' object has no attribute ''readfp''. ''ConfigParser'' object has no attribute ''readfp'''
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81656
|
https://github.com/ansible/ansible/pull/81657
|
a65c331e8e035bfaa5361895dafae020799f81f7
|
a861b1adba5d4a12f61ed268f67a224bdaa5f835
| 2023-09-07T07:17:45Z |
python
| 2023-09-07T19:24:50Z |
lib/ansible/module_utils/facts/system/local.py
|
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import glob
import json
import os
import stat
import ansible.module_utils.compat.typing as t
from ansible.module_utils.common.text.converters import to_text
from ansible.module_utils.facts.utils import get_file_content
from ansible.module_utils.facts.collector import BaseFactCollector
from ansible.module_utils.six.moves import configparser, StringIO
class LocalFactCollector(BaseFactCollector):
name = 'local'
_fact_ids = set() # type: t.Set[str]
def collect(self, module=None, collected_facts=None):
local_facts = {}
local_facts['local'] = {}
if not module:
return local_facts
fact_path = module.params.get('fact_path', None)
if not fact_path or not os.path.exists(fact_path):
return local_facts
local = {}
# go over .fact files, run executables, read rest, skip bad with warning and note
for fn in sorted(glob.glob(fact_path + '/*.fact')):
# use filename for key where it will sit under local facts
fact_base = os.path.basename(fn).replace('.fact', '')
failed = None
try:
executable_fact = stat.S_IXUSR & os.stat(fn)[stat.ST_MODE]
except OSError as e:
failed = 'Could not stat fact (%s): %s' % (fn, to_text(e))
local[fact_base] = failed
module.warn(failed)
continue
if executable_fact:
try:
# run it
rc, out, err = module.run_command(fn)
if rc != 0:
failed = 'Failure executing fact script (%s), rc: %s, err: %s' % (fn, rc, err)
except (IOError, OSError) as e:
failed = 'Could not execute fact script (%s): %s' % (fn, to_text(e))
if failed is not None:
local[fact_base] = failed
module.warn(failed)
continue
else:
# ignores exceptions and returns empty
out = get_file_content(fn, default='')
try:
# ensure we have unicode
out = to_text(out, errors='surrogate_or_strict')
except UnicodeError:
fact = 'error loading fact - output of running "%s" was not utf-8' % fn
local[fact_base] = fact
module.warn(fact)
continue
# try to read it as json first
try:
fact = json.loads(out)
except ValueError:
# if that fails read it with ConfigParser
cp = configparser.ConfigParser()
try:
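# NOTE: ConfigParser.readfp() was removed in Python 3.12; read_file() is the replacement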
cp.readfp(StringIO(out))
except configparser.Error:
fact = "error loading facts as JSON or ini - please check content: %s" % fn
module.warn(fact)
else:
fact = {}
for sect in cp.sections():
if sect not in fact:
fact[sect] = {}
for opt in cp.options(sect):
val = cp.get(sect, opt)
fact[sect][opt] = val
except Exception as e:
fact = "Failed to convert (%s) to JSON: %s" % (fn, to_text(e))
module.warn(fact)
local[fact_base] = fact
local_facts['local'] = local
return local_facts
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,656 |
'ansible.builtin.ini' uses removed function
|
### Summary
The lookup function fails - lookup('ansible.builtin.ini', ...)
The Ansible ini lookup uses the function 'readfp' on ConfigParser; this function was [removed](https://issues.apache.org/jira/browse/SVN-4899#:~:text=Description,3.12%20(due%20October%202023).) in Python 3.12.
### Issue Type
Bug Report
### Component Name
ansible.builtin.ini
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.3]
config file = /home/liss/webnews/pyramid-rt4/ansible/ansible.cfg
configured module search path = ['/home/liss/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/liss/.local/lib/python3.12/site-packages/ansible
ansible collection location = /home/liss/webnews/pyramid-rt4/ansible/collections
executable location = /home/liss/.local/bin/ansible
python version = 3.12.0rc1 (main, Aug 6 2023, 17:56:34) [GCC 9.4.0] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
"Ubuntu 20.04.6 LTS" on WSL
### Steps to Reproduce
- name:
set_fact:
planning_service_monitor_consul_token: "{{ lookup('ansible.builtin.ini', 'consul_token', file='/tmp/pyr_tmp', section='Consul', allow_no_value=True) }}"
### Expected Results
Success
### Actual Results
```console
fatal: [localhost]: FAILED! =>
msg: 'An unhandled exception occurred while running the lookup plugin ''ansible.builtin.ini''. Error was a <class ''AttributeError''>, original message: ''ConfigParser'' object has no attribute ''readfp''. ''ConfigParser'' object has no attribute ''readfp'''
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81656
|
https://github.com/ansible/ansible/pull/81657
|
a65c331e8e035bfaa5361895dafae020799f81f7
|
a861b1adba5d4a12f61ed268f67a224bdaa5f835
| 2023-09-07T07:17:45Z |
python
| 2023-09-07T19:24:50Z |
lib/ansible/plugins/lookup/ini.py
|
# (c) 2015, Yannig Perre <yannig.perre(at)gmail.com>
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = """
name: ini
author: Yannig Perre (!UNKNOWN) <yannig.perre(at)gmail.com>
version_added: "2.0"
short_description: read data from an ini file
description:
- "The ini lookup reads the contents of a file in INI format C(key1=value1).
This plugin retrieves the value on the right side after the equal sign C('=') of a given section C([section])."
- "You can also read a property file which - in this case - does not contain section."
options:
_terms:
description: The key(s) to look up.
required: True
type:
description: Type of the file. 'properties' refers to the Java properties files.
default: 'ini'
choices: ['ini', 'properties']
file:
description: Name of the file to load.
default: 'ansible.ini'
section:
default: global
description: Section where to lookup the key.
re:
default: False
type: boolean
description: Flag to indicate if the key supplied is a regexp.
encoding:
default: utf-8
description: Text encoding to use.
default:
description: Return value if the key is not in the ini file.
default: ''
case_sensitive:
description:
Whether key names read from O(file) should be case sensitive. This prevents
duplicate key errors if keys only differ in case.
default: False
version_added: '2.12'
allow_no_value:
description:
- Read an ini file which contains keys without values and without the '=' symbol.
type: bool
default: False
aliases: ['allow_none']
version_added: '2.12'
seealso:
- ref: playbook_task_paths
description: Search paths used for relative files.
"""
EXAMPLES = """
- ansible.builtin.debug: msg="User in integration is {{ lookup('ansible.builtin.ini', 'user', section='integration', file='users.ini') }}"
- ansible.builtin.debug: msg="User in production is {{ lookup('ansible.builtin.ini', 'user', section='production', file='users.ini') }}"
- ansible.builtin.debug: msg="user.name is {{ lookup('ansible.builtin.ini', 'user.name', type='properties', file='user.properties') }}"
- ansible.builtin.debug:
msg: "{{ item }}"
loop: "{{ q('ansible.builtin.ini', '.*', section='section1', file='test.ini', re=True) }}"
- name: Read an ini file with allow_no_value
ansible.builtin.debug:
msg: "{{ lookup('ansible.builtin.ini', 'user', file='mysql.ini', section='mysqld', allow_no_value=True) }}"
"""
RETURN = """
_raw:
description:
- value(s) of the key(s) in the ini file
type: list
elements: str
"""
import configparser
import os
import re
from io import StringIO
from collections import defaultdict
from collections.abc import MutableSequence
from ansible.errors import AnsibleLookupError, AnsibleOptionsError
from ansible.module_utils.common.text.converters import to_text, to_native
from ansible.plugins.lookup import LookupBase
def _parse_params(term, paramvals):
'''Safely split parameter term to preserve spaces'''
# TODO: deprecate this method
valid_keys = paramvals.keys()
params = defaultdict(lambda: '')
# TODO: check kv_parser to see if it can handle spaces this same way
keys = []
thiskey = 'key' # initialize for 'lookup item'
for idp, phrase in enumerate(term.split()):
# update current key if used
if '=' in phrase:
for k in valid_keys:
if ('%s=' % k) in phrase:
thiskey = k
# if first term or key does not exist
if idp == 0 or not params[thiskey]:
params[thiskey] = phrase
keys.append(thiskey)
else:
# append to existing key
params[thiskey] += ' ' + phrase
# return list of values
return [params[x] for x in keys]
class LookupModule(LookupBase):
def get_value(self, key, section, dflt, is_regexp):
# Retrieve all values from a section using a regexp
if is_regexp:
return [v for k, v in self.cp.items(section) if re.match(key, k)]
value = None
# Retrieve a single value
try:
value = self.cp.get(section, key)
except configparser.NoOptionError:
return dflt
return value
def run(self, terms, variables=None, **kwargs):
self.set_options(var_options=variables, direct=kwargs)
paramvals = self.get_options()
self.cp = configparser.ConfigParser(allow_no_value=paramvals.get('allow_no_value', paramvals.get('allow_none')))
if paramvals['case_sensitive']:
self.cp.optionxform = to_native
ret = []
for term in terms:
key = term
# parameters specified?
if '=' in term or ' ' in term.strip():
self._deprecate_inline_kv()
params = _parse_params(term, paramvals)
try:
updated_key = False
for param in params:
if '=' in param:
name, value = param.split('=')
if name not in paramvals:
raise AnsibleLookupError('%s is not a valid option.' % name)
paramvals[name] = value
elif key == term:
# only take first, this format never supported multiple keys inline
key = param
updated_key = True
except ValueError as e:
# bad params passed
raise AnsibleLookupError("Could not use '%s' from '%s': %s" % (param, params, to_native(e)), orig_exc=e)
if not updated_key:
raise AnsibleOptionsError("No key to lookup was provided as first term within string inline options: %s" % term)
# only passed options in inline string
# TODO: look to use cache to avoid redoing this for every term if they use same file
# Retrieve file path
path = self.find_file_in_search_path(variables, 'files', paramvals['file'])
# Create StringIO later used to parse ini
config = StringIO()
# Special case for java properties
if paramvals['type'] == "properties":
config.write(u'[java_properties]\n')
paramvals['section'] = 'java_properties'
# Open file using encoding
contents, show_data = self._loader._get_file_contents(path)
contents = to_text(contents, errors='surrogate_or_strict', encoding=paramvals['encoding'])
config.write(contents)
config.seek(0, os.SEEK_SET)
try:
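# NOTE: readfp() was removed in Python 3.12 (the subject of the issue above); read_file() is the replacement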
self.cp.readfp(config)
except configparser.DuplicateOptionError as doe:
raise AnsibleLookupError("Duplicate option in '{file}': {error}".format(file=paramvals['file'], error=to_native(doe)))
try:
var = self.get_value(key, paramvals['section'], paramvals['default'], paramvals['re'])
except configparser.NoSectionError:
raise AnsibleLookupError("No section '{section}' in {file}".format(section=paramvals['section'], file=paramvals['file']))
if var is not None:
if isinstance(var, MutableSequence):
for v in var:
ret.append(v)
else:
ret.append(var)
return ret
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,455 |
Ansible-vault encrypt corrupts file if directory is not writeable
|
### Summary
If you have write permissions for the file you want to encrypt, but not for the directory it's in, `ansible-vault encrypt` corrupts the file.
My colleague originally encountered this problem on `ansible [core 2.13.1]` with `python version = 3.8.10`, which I believe is the version from the Ubuntu 20.04 package repo, but I managed to reproduce it on a fresh install of ansible core 2.15.2.
I'd upload the original and corrupted file but my corporate firewall stupidly blocks gists and that's too annoying to work around right now. If you do think the files would help, just let me know and I'll make it work.
### Issue Type
Bug Report
### Component Name
ansible-vault
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.2]
config file = None
configured module search path = ['<homedir>/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = <homedir>/test_ansible/lib/python3.10/site-packages/ansible
ansible collection location = <homedir>/.ansible/collections:/usr/share/ansible/collections
executable location = <homedir>/test_ansible/bin/ansible
python version = 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (<homedir>/test_ansible/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Ubuntu 22.04 LTS
### Steps to Reproduce
Consider the following directory and file.
```bash
> ls testdir/
drwxr-sr-x 2 root some_group 4,0K 2023 Aug 07 (Mo) 11:34 .
-rw-rw---- 1 root some_group 3,7K 2023 Aug 07 (Mo) 11:34 .bashrc
```
I'm not the owner of the file or directory but I'm part of that group. Because of a config error on my part, the directory is missing group write permissions. (I'm not actually trying to encrypt my .bashrc, that's just a convenient file for reproducing the problem.) I _am_ able to write to the existing file but I'm not able to create new files in that directory without using sudo. I think ansible-vault tries to move the new, encrypted file into the directory. That fails of course, and somehow corrupts the file in the process.
```bash
> ansible-vault encrypt testdir/.bashrc
New Vault password:
Confirm New Vault password:
ERROR! Unexpected Exception, this is probably a bug: [Errno 13] Permission denied: '<path>/testdir/.bashrc'
to see the full traceback, use -vvv
> less testdir/.bashrc
"testdir/.bashrc" may be a binary file. See it anyway?
```
.bashrc is now in a corrupted state, as indicated by `less` recognizing it as binary.
If the group doesn't have write permissions on the file, ansible-vault fails with the same error message and the file is, correctly, not touched. The corruption happens because ansible-vault is able to write to the file.
### Expected Results
Ansible-vault should either correctly encrypt and write the file or should fail and leave the file untouched. Under no circumstance should it corrupt my file.
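For reference, a minimal sketch of the pattern I would expect (illustrative names, not ansible-vault's actual code): write to a temporary file in the destination directory and atomically swap it in, so an unwritable directory makes the operation fail before the original file is ever modified.
```python
import os
import tempfile


def write_atomically(path, data):
    """Write data next to path, then atomically replace path with it."""
    dirname = os.path.dirname(path) or '.'
    # mkstemp fails here if the directory is not writable,
    # before the original file has been touched at all
    fd, tmp_path = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, 'wb') as fh:
            fh.write(data)
        os.replace(tmp_path, path)  # atomic on POSIX
    except OSError:
        os.unlink(tmp_path)
        raise
```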
### Actual Results
```console
> ansible-vault encrypt -vvv testdir/.bashrc
New Vault password:
Confirm New Vault password:
ERROR! Unexpected Exception, this is probably a bug: [Errno 13] Permission denied: '<path>/testdir/.bashrc'
the full traceback was:
Traceback (most recent call last):
File "<homedir>/test_ansible/lib/python3.10/site-packages/ansible/cli/__init__.py", line 659, in cli_executor
exit_code = cli.run()
File "<homedir>/test_ansible/lib/python3.10/site-packages/ansible/cli/vault.py", line 248, in run
context.CLIARGS['func']()
File "<homedir>/test_ansible/lib/python3.10/site-packages/ansible/cli/vault.py", line 261, in execute_encrypt
self.editor.encrypt_file(f, self.encrypt_secret,
File "<homedir>/test_ansible/lib/python3.10/site-packages/ansible/parsing/vault/__init__.py", line 906, in encrypt_file
self.write_data(b_ciphertext, output_file or filename)
File "<homedir>/test_ansible/lib/python3.10/site-packages/ansible/parsing/vault/__init__.py", line 1083, in write_data
self._shred_file(thefile)
File "<homedir>/test_ansible/lib/python3.10/site-packages/ansible/parsing/vault/__init__.py", line 837, in _shred_file
os.remove(tmp_path)
PermissionError: [Errno 13] Permission denied: '<path>/testdir/.bashrc'
> less testdir/.bashrc
"testdir/.bashrc" may be a binary file. See it anyway?
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81455
|
https://github.com/ansible/ansible/pull/81660
|
a861b1adba5d4a12f61ed268f67a224bdaa5f835
|
6177888cf6a6b9fba24e3875bc73138e5be2a224
| 2023-08-07T10:30:13Z |
python
| 2023-09-07T19:30:05Z |
changelogs/fragments/ansible-vault.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,455 |
Ansible-vault encrypt corrupts file if directory is not writeable
|
### Summary
If you have write permissions for the file you want to encrypt, but not for the directory it's in, `ansible-vault encrypt` corrupts the file.
My colleague originally encountered this problem on `ansible [core 2.13.1]` with `python version = 3.8.10`, which I believe is the version from the Ubuntu 20.04 package repo, but I managed to reproduce it on a fresh install of ansible core 2.15.2.
I'd upload the original and corrupted file but my corporate firewall stupidly blocks gists and that's too annoying to work around right now. If you do think the files would help, just let me know and I'll make it work.
### Issue Type
Bug Report
### Component Name
ansible-vault
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.2]
config file = None
configured module search path = ['<homedir>/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = <homedir>/test_ansible/lib/python3.10/site-packages/ansible
ansible collection location = <homedir>/.ansible/collections:/usr/share/ansible/collections
executable location = <homedir>/test_ansible/bin/ansible
python version = 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (<homedir>/test_ansible/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Ubuntu 22.04 LTS
### Steps to Reproduce
Consider the following directory and file.
```bash
> ls testdir/
drwxr-sr-x 2 root some_group 4,0K 2023 Aug 07 (Mo) 11:34 .
-rw-rw---- 1 root some_group 3,7K 2023 Aug 07 (Mo) 11:34 .bashrc
```
I'm not the owner of the file or directory but I'm part of that group. Because of a config error on my part, the directory is missing group write permissions. (I'm not actually trying to encrypt my .bashrc, that's just a convenient file for reproducing the problem.) I _am_ able to write to the existing file but I'm not able to create new files in that directory without using sudo. I think ansible-vault tries to move the new, encrypted file into the directory. That fails of course, and somehow corrupts the file in the process.
```bash
> ansible-vault encrypt testdir/.bashrc
New Vault password:
Confirm New Vault password:
ERROR! Unexpected Exception, this is probably a bug: [Errno 13] Permission denied: '<path>/testdir/.bashrc'
to see the full traceback, use -vvv
> less testdir/.bashrc
"testdir/.bashrc" may be a binary file. See it anyway?
```
.bashrc is now in a corrupted state, as indicated by `less` recognizing it as binary.
If the group doesn't have write permissions on the file, ansible-vault fails with the same error message and the file is, correctly, not touched. The corruption happens because ansible-vault is able to write to the file.
### Expected Results
Ansible-vault should either correctly encrypt and write the file or should fail and leave the file untouched. Under no circumstance should it corrupt my file.
### Actual Results
```console
> ansible-vault encrypt -vvv testdir/.bashrc
New Vault password:
Confirm New Vault password:
ERROR! Unexpected Exception, this is probably a bug: [Errno 13] Permission denied: '<path>/testdir/.bashrc'
the full traceback was:
Traceback (most recent call last):
File "<homedir>/test_ansible/lib/python3.10/site-packages/ansible/cli/__init__.py", line 659, in cli_executor
exit_code = cli.run()
File "<homedir>/test_ansible/lib/python3.10/site-packages/ansible/cli/vault.py", line 248, in run
context.CLIARGS['func']()
File "<homedir>/test_ansible/lib/python3.10/site-packages/ansible/cli/vault.py", line 261, in execute_encrypt
self.editor.encrypt_file(f, self.encrypt_secret,
File "<homedir>/test_ansible/lib/python3.10/site-packages/ansible/parsing/vault/__init__.py", line 906, in encrypt_file
self.write_data(b_ciphertext, output_file or filename)
File "<homedir>/test_ansible/lib/python3.10/site-packages/ansible/parsing/vault/__init__.py", line 1083, in write_data
self._shred_file(thefile)
File "<homedir>/test_ansible/lib/python3.10/site-packages/ansible/parsing/vault/__init__.py", line 837, in _shred_file
os.remove(tmp_path)
PermissionError: [Errno 13] Permission denied: '<path>/testdir/.bashrc'
> less testdir/.bashrc
"testdir/.bashrc" may be a binary file. See it anyway?
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81455
|
https://github.com/ansible/ansible/pull/81660
|
a861b1adba5d4a12f61ed268f67a224bdaa5f835
|
6177888cf6a6b9fba24e3875bc73138e5be2a224
| 2023-08-07T10:30:13Z |
python
| 2023-09-07T19:30:05Z |
lib/ansible/parsing/vault/__init__.py
|
# (c) 2014, James Tanner <[email protected]>
# (c) 2016, Adrian Likins <[email protected]>
# (c) 2016 Toshio Kuratomi <[email protected]>
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import errno
import fcntl
import os
import random
import shlex
import shutil
import subprocess
import sys
import tempfile
import warnings
from binascii import hexlify
from binascii import unhexlify
from binascii import Error as BinasciiError
HAS_CRYPTOGRAPHY = False
CRYPTOGRAPHY_BACKEND = None
try:
with warnings.catch_warnings():
warnings.simplefilter("ignore", DeprecationWarning)
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes, padding
from cryptography.hazmat.primitives.hmac import HMAC
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.hazmat.primitives.ciphers import (
Cipher as C_Cipher, algorithms, modes
)
CRYPTOGRAPHY_BACKEND = default_backend()
HAS_CRYPTOGRAPHY = True
except ImportError:
pass
from ansible.errors import AnsibleError, AnsibleAssertionError
from ansible import constants as C
from ansible.module_utils.six import binary_type
from ansible.module_utils.common.text.converters import to_bytes, to_text, to_native
from ansible.utils.display import Display
from ansible.utils.path import makedirs_safe, unfrackpath
display = Display()
b_HEADER = b'$ANSIBLE_VAULT'
CIPHER_WHITELIST = frozenset((u'AES256',))
CIPHER_WRITE_WHITELIST = frozenset((u'AES256',))
# See also CIPHER_MAPPING at the bottom of the file which maps cipher strings
# (used in VaultFile header) to a cipher class
NEED_CRYPTO_LIBRARY = "ansible-vault requires the cryptography library in order to function"
class AnsibleVaultError(AnsibleError):
pass
class AnsibleVaultPasswordError(AnsibleVaultError):
pass
class AnsibleVaultFormatError(AnsibleError):
pass
def is_encrypted(data):
""" Test if this is vault encrypted data blob
:arg data: a byte or text string to test whether it is recognized as vault
encrypted data
:returns: True if it is recognized. Otherwise, False.
"""
try:
# Make sure we have a byte string and that it only contains ascii
# bytes.
b_data = to_bytes(to_text(data, encoding='ascii', errors='strict', nonstring='strict'), encoding='ascii', errors='strict')
except (UnicodeError, TypeError):
# The vault format is pure ascii so if we failed to encode to bytes
# via ascii we know that this is not vault data.
# Similarly, if it's not a string, it's not vault data
return False
if b_data.startswith(b_HEADER):
return True
return False
def is_encrypted_file(file_obj, start_pos=0, count=-1):
"""Test if the contents of a file obj are a vault encrypted data blob.
:arg file_obj: A file object that will be read from.
:kwarg start_pos: A byte offset in the file to start reading the header
from. Defaults to 0, the beginning of the file.
:kwarg count: Read up to this number of bytes from the file to determine
if it looks like encrypted vault data. The default is -1, read to the
end of file.
:returns: True if the file looks like a vault file. Otherwise, False.
"""
# read the header and reset the file stream to where it started
current_position = file_obj.tell()
try:
file_obj.seek(start_pos)
return is_encrypted(file_obj.read(count))
finally:
file_obj.seek(current_position)
def _parse_vaulttext_envelope(b_vaulttext_envelope, default_vault_id=None):
b_tmpdata = b_vaulttext_envelope.splitlines()
b_tmpheader = b_tmpdata[0].strip().split(b';')
b_version = b_tmpheader[1].strip()
cipher_name = to_text(b_tmpheader[2].strip())
vault_id = default_vault_id
# Only attempt to find vault_id if the vault file is version 1.2 or newer
# if self.b_version == b'1.2':
if len(b_tmpheader) >= 4:
vault_id = to_text(b_tmpheader[3].strip())
b_ciphertext = b''.join(b_tmpdata[1:])
return b_ciphertext, b_version, cipher_name, vault_id
def parse_vaulttext_envelope(b_vaulttext_envelope, default_vault_id=None, filename=None):
"""Parse the vaulttext envelope
When data is saved, it has a header prepended and is formatted into 80
character lines. This method extracts the information from the header
and then removes the header and the inserted newlines. The string returned
is suitable for processing by the Cipher classes.
:arg b_vaulttext: byte str containing the data from a save file
:kwarg default_vault_id: The vault_id name to use if the vaulttext does not provide one.
:kwarg filename: The filename that the data came from. This is only
used to make better error messages in case the data cannot be
decrypted. This is optional.
:returns: A tuple of byte str of the vaulttext suitable to pass to parse_vaultext,
a byte str of the vault format version,
the name of the cipher used, and the vault_id.
:raises: AnsibleVaultFormatError: if the vaulttext_envelope format is invalid
"""
# used by decrypt
default_vault_id = default_vault_id or C.DEFAULT_VAULT_IDENTITY
try:
return _parse_vaulttext_envelope(b_vaulttext_envelope, default_vault_id)
except Exception as exc:
msg = "Vault envelope format error"
if filename:
msg += ' in %s' % (filename)
msg += ': %s' % exc
raise AnsibleVaultFormatError(msg)
def format_vaulttext_envelope(b_ciphertext, cipher_name, version=None, vault_id=None):
""" Add header and format to 80 columns
:arg b_ciphertext: the encrypted and hexlified data as a byte string
:arg cipher_name: unicode cipher name (for ex, u'AES256')
:arg version: unicode vault version (for ex, '1.2'). Optional ('1.1' is default)
:arg vault_id: unicode vault identifier. If provided, the version will be bumped to 1.2.
:returns: a byte str that should be dumped into a file. It's
formatted to 80 char columns and has the header prepended
"""
if not cipher_name:
raise AnsibleError("the cipher must be set before adding a header")
version = version or '1.1'
# If we specify a vault_id, use format version 1.2. For no vault_id, stick to 1.1
if vault_id and vault_id != u'default':
version = '1.2'
b_version = to_bytes(version, 'utf-8', errors='strict')
b_vault_id = to_bytes(vault_id, 'utf-8', errors='strict')
b_cipher_name = to_bytes(cipher_name, 'utf-8', errors='strict')
header_parts = [b_HEADER,
b_version,
b_cipher_name]
if b_version == b'1.2' and b_vault_id:
header_parts.append(b_vault_id)
header = b';'.join(header_parts)
b_vaulttext = [header]
b_vaulttext += [b_ciphertext[i:i + 80] for i in range(0, len(b_ciphertext), 80)]
b_vaulttext += [b'']
b_vaulttext = b'\n'.join(b_vaulttext)
return b_vaulttext
def _unhexlify(b_data):
try:
return unhexlify(b_data)
except (BinasciiError, TypeError) as exc:
raise AnsibleVaultFormatError('Vault format unhexlify error: %s' % exc)
def _parse_vaulttext(b_vaulttext):
b_vaulttext = _unhexlify(b_vaulttext)
b_salt, b_crypted_hmac, b_ciphertext = b_vaulttext.split(b"\n", 2)
b_salt = _unhexlify(b_salt)
b_ciphertext = _unhexlify(b_ciphertext)
return b_ciphertext, b_salt, b_crypted_hmac
def parse_vaulttext(b_vaulttext):
"""Parse the vaulttext
:arg b_vaulttext: byte str containing the vaulttext (ciphertext, salt, crypted_hmac)
:returns: A tuple of byte str of the ciphertext suitable for passing to a
Cipher class's decrypt() function, a byte str of the salt,
and a byte str of the crypted_hmac
:raises: AnsibleVaultFormatError: if the vaulttext format is invalid
"""
# SPLIT SALT, DIGEST, AND DATA
try:
return _parse_vaulttext(b_vaulttext)
except AnsibleVaultFormatError:
raise
except Exception as exc:
msg = "Vault vaulttext format error: %s" % exc
raise AnsibleVaultFormatError(msg)
def verify_secret_is_not_empty(secret, msg=None):
'''Check the secret against minimal requirements.
Raises: AnsibleVaultPasswordError if the password does not meet requirements.
Currently, only requirement is that the password is not None or an empty string.
'''
msg = msg or 'Invalid vault password was provided'
if not secret:
raise AnsibleVaultPasswordError(msg)
class VaultSecret:
'''Opaque/abstract objects for a single vault secret. ie, a password or a key.'''
def __init__(self, _bytes=None):
# FIXME: ? that seems wrong... Unset etc?
self._bytes = _bytes
@property
def bytes(self):
'''The secret as a bytestring.
Sub classes that store text types will need to override to encode the text to bytes.
'''
return self._bytes
def load(self):
return self._bytes
class PromptVaultSecret(VaultSecret):
default_prompt_formats = ["Vault password (%s): "]
def __init__(self, _bytes=None, vault_id=None, prompt_formats=None):
super(PromptVaultSecret, self).__init__(_bytes=_bytes)
self.vault_id = vault_id
if prompt_formats is None:
self.prompt_formats = self.default_prompt_formats
else:
self.prompt_formats = prompt_formats
@property
def bytes(self):
return self._bytes
def load(self):
self._bytes = self.ask_vault_passwords()
def ask_vault_passwords(self):
b_vault_passwords = []
for prompt_format in self.prompt_formats:
prompt = prompt_format % {'vault_id': self.vault_id}
try:
vault_pass = display.prompt(prompt, private=True)
except EOFError:
raise AnsibleVaultError('EOFError (ctrl-d) on prompt for (%s)' % self.vault_id)
verify_secret_is_not_empty(vault_pass)
b_vault_pass = to_bytes(vault_pass, errors='strict', nonstring='simplerepr').strip()
b_vault_passwords.append(b_vault_pass)
# Make sure the passwords match by comparing them all to the first password
for b_vault_password in b_vault_passwords:
self.confirm(b_vault_passwords[0], b_vault_password)
if b_vault_passwords:
return b_vault_passwords[0]
return None
def confirm(self, b_vault_pass_1, b_vault_pass_2):
# enforce no newline chars at the end of passwords
if b_vault_pass_1 != b_vault_pass_2:
# FIXME: more specific exception
raise AnsibleError("Passwords do not match")
def script_is_client(filename):
'''Determine if a vault secret script is a client script that can be given --vault-id args'''
# if password script is 'something-client' or 'something-client.[sh|py|rb|etc]'
# script_name can still have '.' or could be entire filename if there is no ext
script_name, dummy = os.path.splitext(filename)
# TODO: for now, this is entirely based on filename
if script_name.endswith('-client'):
return True
return False
def get_file_vault_secret(filename=None, vault_id=None, encoding=None, loader=None):
''' Get secret from file content or execute file and get secret from stdout '''
# we unfrack but not follow the full path/context to possible vault script
# so when the script uses 'adjacent' file for configuration or similar
# it still works (as inventory scripts often also do).
# while files from --vault-password-file are already unfracked, other sources are not
this_path = unfrackpath(filename, follow=False)
if not os.path.exists(this_path):
raise AnsibleError("The vault password file %s was not found" % this_path)
# it is a script?
if loader.is_executable(this_path):
if script_is_client(filename):
# this is special script type that handles vault ids
display.vvvv(u'The vault password file %s is a client script.' % to_text(this_path))
# TODO: pass vault_id_name to script via cli
return ClientScriptVaultSecret(filename=this_path, vault_id=vault_id, encoding=encoding, loader=loader)
# just a plain vault password script. No args, returns a byte array
return ScriptVaultSecret(filename=this_path, encoding=encoding, loader=loader)
return FileVaultSecret(filename=this_path, encoding=encoding, loader=loader)
# TODO: mv these classes to a separate file so we don't pollute vault with 'subprocess' etc
class FileVaultSecret(VaultSecret):
def __init__(self, filename=None, encoding=None, loader=None):
super(FileVaultSecret, self).__init__()
self.filename = filename
self.loader = loader
self.encoding = encoding or 'utf8'
# We could load from file here, but that is eventually a pain to test
self._bytes = None
self._text = None
@property
def bytes(self):
if self._bytes:
return self._bytes
if self._text:
return self._text.encode(self.encoding)
return None
def load(self):
self._bytes = self._read_file(self.filename)
def _read_file(self, filename):
"""
Read a vault password from a file or if executable, execute the script and
retrieve password from STDOUT
"""
# TODO: replace with use of self.loader
try:
with open(filename, "rb") as f:
vault_pass = f.read().strip()
except (OSError, IOError) as e:
raise AnsibleError("Could not read vault password file %s: %s" % (filename, e))
b_vault_data, dummy = self.loader._decrypt_if_vault_data(vault_pass, filename)
vault_pass = b_vault_data.strip(b'\r\n')
verify_secret_is_not_empty(vault_pass,
msg='Invalid vault password was provided from file (%s)' % filename)
return vault_pass
def __repr__(self):
if self.filename:
return "%s(filename='%s')" % (self.__class__.__name__, self.filename)
return "%s()" % (self.__class__.__name__)
class ScriptVaultSecret(FileVaultSecret):
def _read_file(self, filename):
if not self.loader.is_executable(filename):
raise AnsibleVaultError("The vault password script %s was not executable" % filename)
command = self._build_command()
stdout, stderr, p = self._run(command)
self._check_results(stdout, stderr, p)
vault_pass = stdout.strip(b'\r\n')
empty_password_msg = 'Invalid vault password was provided from script (%s)' % filename
verify_secret_is_not_empty(vault_pass, msg=empty_password_msg)
return vault_pass
def _run(self, command):
try:
# STDERR not captured to make it easier for users to prompt for input in their scripts
p = subprocess.Popen(command, stdout=subprocess.PIPE)
except OSError as e:
msg_format = "Problem running vault password script %s (%s)." \
" If this is not a script, remove the executable bit from the file."
msg = msg_format % (self.filename, e)
raise AnsibleError(msg)
stdout, stderr = p.communicate()
return stdout, stderr, p
def _check_results(self, stdout, stderr, popen):
if popen.returncode != 0:
raise AnsibleError("Vault password script %s returned non-zero (%s): %s" %
(self.filename, popen.returncode, stderr))
def _build_command(self):
return [self.filename]
class ClientScriptVaultSecret(ScriptVaultSecret):
VAULT_ID_UNKNOWN_RC = 2
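# exit code by which a client script signals it has no secret for the requested vault-id (see _check_results below)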
def __init__(self, filename=None, encoding=None, loader=None, vault_id=None):
super(ClientScriptVaultSecret, self).__init__(filename=filename,
encoding=encoding,
loader=loader)
self._vault_id = vault_id
display.vvvv(u'Executing vault password client script: %s --vault-id %s' % (to_text(filename), to_text(vault_id)))
def _run(self, command):
try:
p = subprocess.Popen(command,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
except OSError as e:
msg_format = "Problem running vault password client script %s (%s)." \
" If this is not a script, remove the executable bit from the file."
msg = msg_format % (self.filename, e)
raise AnsibleError(msg)
stdout, stderr = p.communicate()
return stdout, stderr, p
def _check_results(self, stdout, stderr, popen):
if popen.returncode == self.VAULT_ID_UNKNOWN_RC:
raise AnsibleError('Vault password client script %s did not find a secret for vault-id=%s: %s' %
(self.filename, self._vault_id, stderr))
if popen.returncode != 0:
raise AnsibleError("Vault password client script %s returned non-zero (%s) when getting secret for vault-id=%s: %s" %
(self.filename, popen.returncode, self._vault_id, stderr))
def _build_command(self):
command = [self.filename]
if self._vault_id:
command.extend(['--vault-id', self._vault_id])
return command
def __repr__(self):
if self.filename:
return "%s(filename='%s', vault_id='%s')" % \
(self.__class__.__name__, self.filename, self._vault_id)
return "%s()" % (self.__class__.__name__)
def match_secrets(secrets, target_vault_ids):
'''Find all VaultSecret objects that are mapped to any of the target_vault_ids in secrets'''
if not secrets:
return []
matches = [(vault_id, secret) for vault_id, secret in secrets if vault_id in target_vault_ids]
return matches
def match_best_secret(secrets, target_vault_ids):
'''Find the best secret from secrets that matches target_vault_ids
Since secrets should be ordered so the early secrets are 'better' than later ones, this
just finds all the matches, then returns the first secret'''
matches = match_secrets(secrets, target_vault_ids)
if matches:
return matches[0]
# raise exception?
return None
def match_encrypt_vault_id_secret(secrets, encrypt_vault_id=None):
# See if the --encrypt-vault-id matches a vault-id
display.vvvv(u'encrypt_vault_id=%s' % to_text(encrypt_vault_id))
if encrypt_vault_id is None:
raise AnsibleError('match_encrypt_vault_id_secret requires a non None encrypt_vault_id')
encrypt_vault_id_matchers = [encrypt_vault_id]
encrypt_secret = match_best_secret(secrets, encrypt_vault_id_matchers)
# return the best match for --encrypt-vault-id
if encrypt_secret:
return encrypt_secret
# If we specified an encrypt_vault_id and we couldn't find it, don't
# fall back to using the first/best secret
raise AnsibleVaultError('Did not find a match for --encrypt-vault-id=%s in the known vault-ids %s' % (encrypt_vault_id,
[_v for _v, _vs in secrets]))
def match_encrypt_secret(secrets, encrypt_vault_id=None):
'''Find the best/first/only secret in secrets to use for encrypting'''
display.vvvv(u'encrypt_vault_id=%s' % to_text(encrypt_vault_id))
# See if the --encrypt-vault-id matches a vault-id
if encrypt_vault_id:
return match_encrypt_vault_id_secret(secrets,
encrypt_vault_id=encrypt_vault_id)
# Find the best/first secret from secrets since we didn't specify otherwise
# ie, consider all of the available secrets as matches
_vault_id_matchers = [_vault_id for _vault_id, dummy in secrets]
best_secret = match_best_secret(secrets, _vault_id_matchers)
# may be None if secrets contained no matching (vault_id, secret) tuples
return best_secret
class VaultLib:
def __init__(self, secrets=None):
self.secrets = secrets or []
self.cipher_name = None
self.b_version = b'1.2'
@staticmethod
def is_encrypted(vaulttext):
return is_encrypted(vaulttext)
def encrypt(self, plaintext, secret=None, vault_id=None, salt=None):
"""Vault encrypt a piece of data.
:arg plaintext: a text or byte string to encrypt.
:returns: a utf-8 encoded byte str of encrypted data. The string
contains a header identifying this as vault encrypted data and
formatted to newline terminated lines of 80 characters. This is
suitable for dumping as is to a vault file.
If the string passed in is a text string, it will be encoded to UTF-8
before encryption.
"""
if secret is None:
if self.secrets:
dummy, secret = match_encrypt_secret(self.secrets)
else:
raise AnsibleVaultError("A vault password must be specified to encrypt data")
b_plaintext = to_bytes(plaintext, errors='surrogate_or_strict')
if is_encrypted(b_plaintext):
raise AnsibleError("input is already encrypted")
if not self.cipher_name or self.cipher_name not in CIPHER_WRITE_WHITELIST:
self.cipher_name = u"AES256"
try:
this_cipher = CIPHER_MAPPING[self.cipher_name]()
except KeyError:
raise AnsibleError(u"{0} cipher could not be found".format(self.cipher_name))
# encrypt data
if vault_id:
display.vvvvv(u'Encrypting with vault_id "%s" and vault secret %s' % (to_text(vault_id), to_text(secret)))
else:
display.vvvvv(u'Encrypting without a vault_id using vault secret %s' % to_text(secret))
b_ciphertext = this_cipher.encrypt(b_plaintext, secret, salt)
# format the data for output to the file
b_vaulttext = format_vaulttext_envelope(b_ciphertext,
self.cipher_name,
vault_id=vault_id)
return b_vaulttext
def decrypt(self, vaulttext, filename=None, obj=None):
'''Decrypt a piece of vault encrypted data.
:arg vaulttext: a string to decrypt. Since vault encrypted data is an
ascii text format this can be either a byte str or unicode string.
:kwarg filename: a filename that the data came from. This is only
used to make better error messages in case the data cannot be
decrypted.
:returns: a byte string containing the decrypted data and the vault-id that was used
'''
plaintext, vault_id, vault_secret = self.decrypt_and_get_vault_id(vaulttext, filename=filename, obj=obj)
return plaintext
def decrypt_and_get_vault_id(self, vaulttext, filename=None, obj=None):
"""Decrypt a piece of vault encrypted data.
:arg vaulttext: a string to decrypt. Since vault encrypted data is an
ascii text format this can be either a byte str or unicode string.
:kwarg filename: a filename that the data came from. This is only
used to make better error messages in case the data cannot be
decrypted.
:returns: a byte string containing the decrypted data and the vault-id vault-secret that was used
"""
b_vaulttext = to_bytes(vaulttext, errors='strict', encoding='utf-8')
if self.secrets is None:
msg = "A vault password must be specified to decrypt data"
if filename:
msg += " in file %s" % to_native(filename)
raise AnsibleVaultError(msg)
if not is_encrypted(b_vaulttext):
msg = "input is not vault encrypted data. "
if filename:
msg += "%s is not a vault encrypted file" % to_native(filename)
raise AnsibleError(msg)
b_vaulttext, dummy, cipher_name, vault_id = parse_vaulttext_envelope(b_vaulttext, filename=filename)
# create the cipher object, note that the cipher used for decrypt can
# be different than the cipher used for encrypt
if cipher_name in CIPHER_WHITELIST:
this_cipher = CIPHER_MAPPING[cipher_name]()
else:
raise AnsibleError("{0} cipher could not be found".format(cipher_name))
b_plaintext = None
if not self.secrets:
raise AnsibleVaultError('Attempting to decrypt but no vault secrets found')
# WARNING: Currently, the vault id is not required to match the vault id in the vault blob to
# decrypt a vault properly. The vault id in the vault blob is not part of the encrypted
# or signed vault payload. There is no cryptographic checking/verification/validation of the
# vault blobs vault id. It can be tampered with and changed. The vault id is just a nick
# name to use to pick the best secret and provide some ux/ui info.
# iterate over all the applicable secrets (all of them by default) until one works...
# if we specify a vault_id, only the corresponding vault secret is checked and
# we check it first.
vault_id_matchers = []
vault_id_used = None
vault_secret_used = None
if vault_id:
display.vvvvv(u'Found a vault_id (%s) in the vaulttext' % to_text(vault_id))
vault_id_matchers.append(vault_id)
_matches = match_secrets(self.secrets, vault_id_matchers)
if _matches:
display.vvvvv(u'We have a secret associated with vault id (%s), will try to use to decrypt %s' % (to_text(vault_id), to_text(filename)))
else:
display.vvvvv(u'Found a vault_id (%s) in the vault text, but we do not have an associated secret (--vault-id)' % to_text(vault_id))
# Not adding the other secrets to vault_secret_ids enforces a match between the vault_id from the vault_text and
# the known vault secrets.
if not C.DEFAULT_VAULT_ID_MATCH:
# Add all of the known vault_ids as candidates for decrypting a vault.
vault_id_matchers.extend([_vault_id for _vault_id, _dummy in self.secrets if _vault_id != vault_id])
matched_secrets = match_secrets(self.secrets, vault_id_matchers)
# for vault_secret_id in vault_secret_ids:
for vault_secret_id, vault_secret in matched_secrets:
display.vvvvv(u'Trying to use vault secret=(%s) id=%s to decrypt %s' % (to_text(vault_secret), to_text(vault_secret_id), to_text(filename)))
try:
# secret = self.secrets[vault_secret_id]
display.vvvv(u'Trying secret %s for vault_id=%s' % (to_text(vault_secret), to_text(vault_secret_id)))
b_plaintext = this_cipher.decrypt(b_vaulttext, vault_secret)
if b_plaintext is not None:
vault_id_used = vault_secret_id
vault_secret_used = vault_secret
file_slug = ''
if filename:
file_slug = ' of "%s"' % filename
display.vvvvv(
u'Decrypt%s successful with secret=%s and vault_id=%s' % (to_text(file_slug), to_text(vault_secret), to_text(vault_secret_id))
)
break
except AnsibleVaultFormatError as exc:
exc.obj = obj
msg = u"There was a vault format error"
if filename:
msg += u' in %s' % (to_text(filename))
msg += u': %s' % to_text(exc)
display.warning(msg, formatted=True)
raise
except AnsibleError as e:
display.vvvv(u'Tried to use the vault secret (%s) to decrypt (%s) but it failed. Error: %s' %
(to_text(vault_secret_id), to_text(filename), e))
continue
else:
msg = "Decryption failed (no vault secrets were found that could decrypt)"
if filename:
msg += " on %s" % to_native(filename)
raise AnsibleVaultError(msg)
if b_plaintext is None:
msg = "Decryption failed"
if filename:
msg += " on %s" % to_native(filename)
raise AnsibleError(msg)
return b_plaintext, vault_id_used, vault_secret_used
class VaultEditor:
def __init__(self, vault=None):
# TODO: it may be more useful to just make VaultSecrets an index of VaultLib objects...
self.vault = vault or VaultLib()
# TODO: mv shred file stuff to its own class
def _shred_file_custom(self, tmp_path):
""""Destroy a file, when shred (core-utils) is not available
Unix `shred' destroys files "so that they can be recovered only with great difficulty with
specialised hardware, if at all". It is based on the method from the paper
"Secure Deletion of Data from Magnetic and Solid-State Memory",
Proceedings of the Sixth USENIX Security Symposium (San Jose, California, July 22-25, 1996).
We do not go to that length to re-implement shred in Python; instead, overwriting with a block
of random data should suffice.
See https://github.com/ansible/ansible/pull/13700 .
"""
file_len = os.path.getsize(tmp_path)
if file_len > 0: # avoid work when file was empty
max_chunk_len = min(1024 * 1024 * 2, file_len)
passes = 3
with open(tmp_path, "wb") as fh:
for dummy in range(passes):
fh.seek(0, 0)
# get a random chunk of data, each pass with other length
chunk_len = random.randint(max_chunk_len // 2, max_chunk_len)
data = os.urandom(chunk_len)
for dummy in range(0, file_len // chunk_len):
fh.write(data)
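# write the remaining partial chunk so the total bytes written equals file_len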
fh.write(data[:file_len % chunk_len])
# FIXME remove this assert once we have unittests to check its accuracy
if fh.tell() != file_len:
raise AnsibleAssertionError()
os.fsync(fh)
def _shred_file(self, tmp_path):
"""Securely destroy a decrypted file
Note standard limitations of GNU shred apply (For flash, overwriting would have no effect
due to wear leveling; for other storage systems, the async kernel->filesystem->disk calls never
guarantee data hits the disk; etc). Furthermore, if your tmp dir is on tmpfs (a ramdisk),
it is a non-issue.
Nevertheless, some form of overwriting the data (instead of just removing the fs index entry) is
a good idea. If shred is not available (e.g. on windows, or no core-utils installed), fall back on
a custom shredding method.
"""
if not os.path.isfile(tmp_path):
# file is already gone
return
try:
r = subprocess.call(['shred', tmp_path])
except (OSError, ValueError):
# shred is not available on this system, or some other error occurred.
# ValueError caught because macOS El Capitan is raising an
# exception big enough to hit a limit in python2-2.7.11 and below.
# Symptom is ValueError: insecure pickle when shred is not
# installed there.
r = 1
if r != 0:
# we could not successfully execute unix shred; therefore, do custom shred.
self._shred_file_custom(tmp_path)
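# NOTE: os.remove() requires write permission on the containing directory; without it this
# raises PermissionError after the contents were already overwritten (the failure mode in the issue above)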
os.remove(tmp_path)
def _edit_file_helper(self, filename, secret, existing_data=None, force_save=False, vault_id=None):
# Create a tempfile
root, ext = os.path.splitext(os.path.realpath(filename))
fd, tmp_path = tempfile.mkstemp(suffix=ext, dir=C.DEFAULT_LOCAL_TMP)
cmd = self._editor_shell_command(tmp_path)
try:
if existing_data:
self.write_data(existing_data, fd, shred=False)
except Exception:
# if an error happens, destroy the decrypted file
self._shred_file(tmp_path)
raise
finally:
os.close(fd)
try:
# drop the user into an editor on the tmp file
subprocess.call(cmd)
except Exception as e:
# if an error happens, destroy the decrypted file
self._shred_file(tmp_path)
raise AnsibleError('Unable to execute the command "%s": %s' % (' '.join(cmd), to_native(e)))
b_tmpdata = self.read_data(tmp_path)
# Do nothing if the content has not changed
if force_save or existing_data != b_tmpdata:
# encrypt new data and write out to tmp
# An existing vaultfile will always be UTF-8,
# so decode to unicode here
b_ciphertext = self.vault.encrypt(b_tmpdata, secret, vault_id=vault_id)
self.write_data(b_ciphertext, tmp_path)
# shuffle tmp file into place
self.shuffle_files(tmp_path, filename)
display.vvvvv(u'Saved edited file "%s" encrypted using %s and vault id "%s"' % (to_text(filename), to_text(secret), to_text(vault_id)))
# always shred temp, jic
self._shred_file(tmp_path)
def _real_path(self, filename):
# '-' is special to VaultEditor, don't expand it.
if filename == '-':
return filename
real_path = os.path.realpath(filename)
return real_path
def encrypt_bytes(self, b_plaintext, secret, vault_id=None):
b_ciphertext = self.vault.encrypt(b_plaintext, secret, vault_id=vault_id)
return b_ciphertext
def encrypt_file(self, filename, secret, vault_id=None, output_file=None):
# A file to be encrypted into a vaultfile could be any encoding
# so treat the contents as a byte string.
# follow the symlink
filename = self._real_path(filename)
b_plaintext = self.read_data(filename)
b_ciphertext = self.vault.encrypt(b_plaintext, secret, vault_id=vault_id)
self.write_data(b_ciphertext, output_file or filename)
def decrypt_file(self, filename, output_file=None):
# follow the symlink
filename = self._real_path(filename)
ciphertext = self.read_data(filename)
try:
plaintext = self.vault.decrypt(ciphertext, filename=filename)
except AnsibleError as e:
raise AnsibleError("%s for %s" % (to_native(e), to_native(filename)))
self.write_data(plaintext, output_file or filename, shred=False)
def create_file(self, filename, secret, vault_id=None):
""" create a new encrypted file """
dirname = os.path.dirname(filename)
if dirname and not os.path.exists(dirname):
display.warning(u"%s does not exist, creating..." % to_text(dirname))
makedirs_safe(dirname)
# FIXME: If we can raise an error here, we can probably just make it
# behave like edit instead.
if os.path.isfile(filename):
raise AnsibleError("%s exists, please use 'edit' instead" % filename)
self._edit_file_helper(filename, secret, vault_id=vault_id)
def edit_file(self, filename):
vault_id_used = None
vault_secret_used = None
# follow the symlink
filename = self._real_path(filename)
b_vaulttext = self.read_data(filename)
# vault or yaml files are always utf8
vaulttext = to_text(b_vaulttext)
try:
# vaulttext gets converted back to bytes, but alas
# TODO: return the vault_id that worked?
plaintext, vault_id_used, vault_secret_used = self.vault.decrypt_and_get_vault_id(vaulttext)
except AnsibleError as e:
raise AnsibleError("%s for %s" % (to_native(e), to_native(filename)))
# Figure out the vault id from the file, to select the right secret to re-encrypt it
# (duplicates parts of decrypt, but alas...)
dummy, dummy, cipher_name, vault_id = parse_vaulttext_envelope(b_vaulttext, filename=filename)
# vault id here may not be the vault id actually used for decrypting
# as when the edited file has no vault-id but is decrypted by non-default id in secrets
# (vault_id=default, while a different vault-id decrypted)
# we want to get rid of files encrypted with the AES cipher
force_save = (cipher_name not in CIPHER_WRITE_WHITELIST)
# Keep the same vault-id (and version) as in the header
self._edit_file_helper(filename, vault_secret_used, existing_data=plaintext, force_save=force_save, vault_id=vault_id)
def plaintext(self, filename):
b_vaulttext = self.read_data(filename)
vaulttext = to_text(b_vaulttext)
try:
plaintext = self.vault.decrypt(vaulttext, filename=filename)
return plaintext
except AnsibleError as e:
raise AnsibleVaultError("%s for %s" % (to_native(e), to_native(filename)))
# FIXME/TODO: make this use VaultSecret
def rekey_file(self, filename, new_vault_secret, new_vault_id=None):
# follow the symlink
filename = self._real_path(filename)
prev = os.stat(filename)
b_vaulttext = self.read_data(filename)
vaulttext = to_text(b_vaulttext)
display.vvvvv(u'Rekeying file "%s" with new vault-id "%s" and vault secret %s' %
(to_text(filename), to_text(new_vault_id), to_text(new_vault_secret)))
try:
plaintext, vault_id_used, _dummy = self.vault.decrypt_and_get_vault_id(vaulttext)
except AnsibleError as e:
raise AnsibleError("%s for %s" % (to_native(e), to_native(filename)))
# This is more or less an assert, see #18247
if new_vault_secret is None:
raise AnsibleError('The value for the new_password to rekey %s with is not valid' % filename)
# FIXME: VaultContext...? could rekey to a different vault_id in the same VaultSecrets
# Need a new VaultLib because the new vault data can be a different
# vault lib format or cipher (for ex, when we migrate 1.0 style vault data to
# 1.1 style data we change the version and the cipher). This is where a VaultContext might help
# the new vault will only be used for encrypting, so it doesn't need the vault secrets
# (we will pass one in directly to encrypt)
new_vault = VaultLib(secrets={})
b_new_vaulttext = new_vault.encrypt(plaintext, new_vault_secret, vault_id=new_vault_id)
self.write_data(b_new_vaulttext, filename)
# preserve permissions
os.chmod(filename, prev.st_mode)
os.chown(filename, prev.st_uid, prev.st_gid)
display.vvvvv(u'Rekeyed file "%s" (decrypted with vault id "%s") was encrypted with new vault-id "%s" and vault secret %s' %
(to_text(filename), to_text(vault_id_used), to_text(new_vault_id), to_text(new_vault_secret)))
def read_data(self, filename):
try:
if filename == '-':
data = sys.stdin.buffer.read()
else:
with open(filename, "rb") as fh:
data = fh.read()
except Exception as e:
msg = to_native(e)
if not msg:
msg = repr(e)
raise AnsibleError('Unable to read source file (%s): %s' % (to_native(filename), msg))
return data
def write_data(self, data, thefile, shred=True, mode=0o600):
# TODO: add docstrings for arg types since this code is picky about that
"""Write the data bytes to given path
This is used to write a byte string to a file or stdout. It is used for
writing the results of vault encryption or decryption. It is used for
saving the ciphertext after encryption and it is also used for saving the
plaintext after decrypting a vault. The type of the 'data' arg should be bytes,
since in the plaintext case, the original contents can be of any text encoding
or arbitrary binary data.
When used to write the result of vault encryption, the val of the 'data' arg
should be a utf-8 encoded byte string and not a text type.
When used to write the result of vault decryption, the val of the 'data' arg
should be a byte string and not a text type.
:arg data: the byte string (bytes) data
:arg thefile: file descriptor or filename to save 'data' to.
:arg shred: if shred==True, make sure that the original data is first shredded so that it cannot be recovered.
:returns: None
"""
# FIXME: do we need this now? 'data' should always be a utf-8 byte string
b_file_data = to_bytes(data, errors='strict')
# check if we have a file descriptor instead of a path
is_fd = False
try:
is_fd = (isinstance(thefile, int) and fcntl.fcntl(thefile, fcntl.F_GETFD) != -1)
except Exception:
pass
if is_fd:
# if passed descriptor, use that to ensure secure access, otherwise it is a string.
# assumes the fd is securely opened by caller (mkstemp)
os.ftruncate(thefile, 0)
os.write(thefile, b_file_data)
elif thefile == '-':
# get a ref to either sys.stdout.buffer for py3 or plain old sys.stdout for py2
# We need sys.stdout.buffer on py3 so we can write bytes to it since the plaintext
# of the vaulted object could be anything/binary/etc
output = getattr(sys.stdout, 'buffer', sys.stdout)
output.write(b_file_data)
else:
# file names are insecure and prone to race conditions, so remove and create securely
if os.path.isfile(thefile):
if shred:
self._shred_file(thefile)
else:
os.remove(thefile)
# when setting new umask, we get previous as return
current_umask = os.umask(0o077)
try:
try:
# create file with secure permissions
fd = os.open(thefile, os.O_CREAT | os.O_EXCL | os.O_RDWR | os.O_TRUNC, mode)
except OSError as ose:
# Want to catch FileExistsError, which doesn't exist in Python 2, so catch OSError
# and compare the error number to get equivalent behavior in Python 2/3
if ose.errno == errno.EEXIST:
raise AnsibleError('Vault file got recreated while we were operating on it: %s' % to_native(ose))
raise AnsibleError('Problem creating temporary vault file: %s' % to_native(ose))
try:
# now write to the file and ensure ours is only data in it
os.ftruncate(fd, 0)
os.write(fd, b_file_data)
except OSError as e:
raise AnsibleError('Unable to write to temporary vault file: %s' % to_native(e))
finally:
# Make sure the file descriptor is always closed and reset umask
os.close(fd)
finally:
os.umask(current_umask)
def shuffle_files(self, src, dest):
prev = None
# overwrite dest with src
if os.path.isfile(dest):
prev = os.stat(dest)
# old file 'dest' was encrypted, no need to _shred_file
os.remove(dest)
shutil.move(src, dest)
# reset permissions if needed
if prev is not None:
# TODO: selinux, ACLs, xattr?
os.chmod(dest, prev.st_mode)
os.chown(dest, prev.st_uid, prev.st_gid)
def _editor_shell_command(self, filename):
env_editor = C.config.get_config_value('EDITOR')
editor = shlex.split(env_editor)
editor.append(filename)
return editor
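# Example (assuming EDITOR='vim -n' is set in the environment): shlex.split()
# yields ['vim', '-n'], so for a tmp file this returns
# ['vim', '-n', '/tmp/.../tmpXXXX.yml'], which _edit_file_helper() then hands
# to subprocess.call().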
########################################
# CIPHERS #
########################################
class VaultAES256:
"""
Vault implementation using AES-CTR with an HMAC-SHA256 authentication code.
Keys are derived using PBKDF2
"""
# http://www.daemonology.net/blog/2009-06-11-cryptographic-right-answers.html
# Note: strings in this class should be byte strings by default.
def __init__(self):
if not HAS_CRYPTOGRAPHY:
raise AnsibleError(NEED_CRYPTO_LIBRARY)
@staticmethod
def _create_key_cryptography(b_password, b_salt, key_length, iv_length):
kdf = PBKDF2HMAC(
algorithm=hashes.SHA256(),
length=2 * key_length + iv_length,
salt=b_salt,
iterations=10000,
backend=CRYPTOGRAPHY_BACKEND)
b_derivedkey = kdf.derive(b_password)
return b_derivedkey
@classmethod
def _gen_key_initctr(cls, b_password, b_salt):
# 16 for AES 128, 32 for AES256
key_length = 32
if HAS_CRYPTOGRAPHY:
# AES is a 128-bit block cipher, so IVs and counter nonces are 16 bytes
iv_length = algorithms.AES.block_size // 8
b_derivedkey = cls._create_key_cryptography(b_password, b_salt, key_length, iv_length)
b_iv = b_derivedkey[(key_length * 2):(key_length * 2) + iv_length]
else:
raise AnsibleError(NEED_CRYPTO_LIBRARY + '(Detected in initctr)')
b_key1 = b_derivedkey[:key_length]
b_key2 = b_derivedkey[key_length:(key_length * 2)]
return b_key1, b_key2, b_iv
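# Layout of the PBKDF2 output for key_length=32 and iv_length=16
# (2 * 32 + 16 = 80 bytes total):
#   b_derivedkey[0:32]   -> b_key1 (AES-CTR encryption key)
#   b_derivedkey[32:64]  -> b_key2 (HMAC-SHA256 signing key)
#   b_derivedkey[64:80]  -> b_iv   (CTR counter nonce)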
@staticmethod
def _encrypt_cryptography(b_plaintext, b_key1, b_key2, b_iv):
cipher = C_Cipher(algorithms.AES(b_key1), modes.CTR(b_iv), CRYPTOGRAPHY_BACKEND)
encryptor = cipher.encryptor()
padder = padding.PKCS7(algorithms.AES.block_size).padder()
b_ciphertext = encryptor.update(padder.update(b_plaintext) + padder.finalize())
b_ciphertext += encryptor.finalize()
# COMBINE SALT, DIGEST AND DATA
hmac = HMAC(b_key2, hashes.SHA256(), CRYPTOGRAPHY_BACKEND)
hmac.update(b_ciphertext)
b_hmac = hmac.finalize()
return to_bytes(hexlify(b_hmac), errors='surrogate_or_strict'), hexlify(b_ciphertext)
@classmethod
def _get_salt(cls):
custom_salt = C.config.get_config_value('VAULT_ENCRYPT_SALT')
if not custom_salt:
custom_salt = os.urandom(32)
return to_bytes(custom_salt)
@classmethod
def encrypt(cls, b_plaintext, secret, salt=None):
if secret is None:
raise AnsibleVaultError('The secret passed to encrypt() was None')
if salt is None:
b_salt = cls._get_salt()
elif not salt:
raise AnsibleVaultError('Empty or invalid salt passed to encrypt()')
else:
b_salt = to_bytes(salt)
b_password = secret.bytes
b_key1, b_key2, b_iv = cls._gen_key_initctr(b_password, b_salt)
if HAS_CRYPTOGRAPHY:
b_hmac, b_ciphertext = cls._encrypt_cryptography(b_plaintext, b_key1, b_key2, b_iv)
else:
raise AnsibleError(NEED_CRYPTO_LIBRARY + '(Detected in encrypt)')
b_vaulttext = b'\n'.join([hexlify(b_salt), b_hmac, b_ciphertext])
# Unnecessary but getting rid of it is a backwards incompatible vault
# format change
b_vaulttext = hexlify(b_vaulttext)
return b_vaulttext
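# The returned body is therefore doubly hex-encoded:
#   hexlify(hexlify(salt) + b'\n' + hex_hmac + b'\n' + hex_ciphertext)
# The '$ANSIBLE_VAULT;...' header is prepended separately when the
# envelope is formatted for writing.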
@classmethod
def _decrypt_cryptography(cls, b_ciphertext, b_crypted_hmac, b_key1, b_key2, b_iv):
# b_key1, b_key2, b_iv = self._gen_key_initctr(b_password, b_salt)
# EXIT EARLY IF DIGEST DOESN'T MATCH
hmac = HMAC(b_key2, hashes.SHA256(), CRYPTOGRAPHY_BACKEND)
hmac.update(b_ciphertext)
try:
hmac.verify(_unhexlify(b_crypted_hmac))
except InvalidSignature as e:
raise AnsibleVaultError('HMAC verification failed: %s' % e)
cipher = C_Cipher(algorithms.AES(b_key1), modes.CTR(b_iv), CRYPTOGRAPHY_BACKEND)
decryptor = cipher.decryptor()
unpadder = padding.PKCS7(128).unpadder()
b_plaintext = unpadder.update(
decryptor.update(b_ciphertext) + decryptor.finalize()
) + unpadder.finalize()
return b_plaintext
@staticmethod
def _is_equal(b_a, b_b):
"""
Comparing 2 byte arrays in constant time to avoid timing attacks.
It would be nice if there were a library for this but hey.
"""
if not (isinstance(b_a, binary_type) and isinstance(b_b, binary_type)):
raise TypeError('_is_equal can only be used to compare two byte strings')
# http://codahale.com/a-lesson-in-timing-attacks/
if len(b_a) != len(b_b):
return False
result = 0
for b_x, b_y in zip(b_a, b_b):
result |= b_x ^ b_y
return result == 0
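# Note: the standard library now offers an equivalent constant-time
# comparison, e.g.:
#   import hmac
#   hmac.compare_digest(b_a, b_b)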
@classmethod
def decrypt(cls, b_vaulttext, secret):
b_ciphertext, b_salt, b_crypted_hmac = parse_vaulttext(b_vaulttext)
# TODO: would be nice if a VaultSecret could be passed directly to _decrypt_*
# (move _gen_key_initctr() to a AES256 VaultSecret or VaultContext impl?)
# though, likely needs to be python cryptography specific impl that basically
# creates a Cipher() with b_key1, a Mode.CTR() with b_iv, and a HMAC() with sign key b_key2
b_password = secret.bytes
b_key1, b_key2, b_iv = cls._gen_key_initctr(b_password, b_salt)
if HAS_CRYPTOGRAPHY:
b_plaintext = cls._decrypt_cryptography(b_ciphertext, b_crypted_hmac, b_key1, b_key2, b_iv)
else:
raise AnsibleError(NEED_CRYPTO_LIBRARY + '(Detected in decrypt)')
return b_plaintext
# Keys could be made bytes later if the code that gets the data is more
# naturally byte-oriented
CIPHER_MAPPING = {
u'AES256': VaultAES256,
}
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,455 |
Ansible-vault encrypt corrupts file if directory is not writeable
|
### Summary
If you have write permissions for the file you want to encrypt, but not for the directory it's in, `ansible-vault encrypt` corrupts the file.
My colleague originally encountered this problem on `ansible [core 2.13.1]` with `python version = 3.8.10`, which I believe is the version from the Ubuntu 20.04 package repo, but I managed to reproduce it on a fresh install of ansible core 2.15.2.
I'd upload the original and corrupted file but my corporate firewall stupidly blocks gists and that's too annoying to work around right now. If you do think the files would help, just let me know and I'll make it work.
### Issue Type
Bug Report
### Component Name
ansible-vault
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.2]
config file = None
configured module search path = ['<homedir>/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = <homedir>/test_ansible/lib/python3.10/site-packages/ansible
ansible collection location = <homedir>/.ansible/collections:/usr/share/ansible/collections
executable location = <homedir>/test_ansible/bin/ansible
python version = 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (<homedir>/test_ansible/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Ubuntu 22.04 LTS
### Steps to Reproduce
Consider the following directory and file.
```bash
> ls testdir/
drwxr-sr-x 2 root some_group 4,0K 2023 Aug 07 (Mo) 11:34 .
-rw-rw---- 1 root some_group 3,7K 2023 Aug 07 (Mo) 11:34 .bashrc
```
I'm not the owner of the file or directory but I'm part of that group. Because of a config error on my part, the directory is missing group write permissions. (I'm not actually trying to encrypt my .bashrc, that's just a convenient file for reproducing the problem.) I _am_ able to write to the existing file but I'm not able to create new files in that directory without using sudo. I think ansible-vault tries to move the new, encrypted file into the directory. That fails of course, and somehow corrupts the file in the process.
```bash
> ansible-vault encrypt testdir/.bashrc
New Vault password:
Confirm New Vault password:
ERROR! Unexpected Exception, this is probably a bug: [Errno 13] Permission denied: '<path>/testdir/.bashrc'
to see the full traceback, use -vvv
> less testdir/.bashrc
"testdir/.bashrc" may be a binary file. See it anyway?
```
.bashrc is now in a corrupted state, as indicated by `less` recognizing it as binary.
If the group doesn't have write permissions on the file, ansible-vault fails with the same error message and the file is, correctly, not touched. The corruption happens because ansible-vault is able to write to the file.
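A failure-safe write pattern avoids this. Here is a minimal sketch (illustrative only, not the actual ansible-vault code): it creates the replacement file in the target directory *before* destroying the original, so the permission error aborts the operation while the plaintext is still intact.
```python
import os
import tempfile

def safe_write(path, ciphertext):
    dirname = os.path.dirname(path) or '.'
    # mkstemp raises PermissionError here, before the original
    # file has been touched, if the directory is not writable
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, 'wb') as fh:
            fh.write(ciphertext)
        os.replace(tmp, path)  # atomic rename on POSIX
    except BaseException:
        os.unlink(tmp)
        raise
```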
### Expected Results
Ansible-vault should either correctly encrypt and write the file or should fail and leave the file untouched. Under no circumstance should it corrupt my file.
### Actual Results
```console
> ansible-vault encrypt -vvv testdir/.bashrc
New Vault password:
Confirm New Vault password:
ERROR! Unexpected Exception, this is probably a bug: [Errno 13] Permission denied: '<path>/testdir/.bashrc'
the full traceback was:
Traceback (most recent call last):
File "<homedir>/test_ansible/lib/python3.10/site-packages/ansible/cli/__init__.py", line 659, in cli_executor
exit_code = cli.run()
File "<homedir>/test_ansible/lib/python3.10/site-packages/ansible/cli/vault.py", line 248, in run
context.CLIARGS['func']()
File "<homedir>/test_ansible/lib/python3.10/site-packages/ansible/cli/vault.py", line 261, in execute_encrypt
self.editor.encrypt_file(f, self.encrypt_secret,
File "<homedir>/test_ansible/lib/python3.10/site-packages/ansible/parsing/vault/__init__.py", line 906, in encrypt_file
self.write_data(b_ciphertext, output_file or filename)
File "<homedir>/test_ansible/lib/python3.10/site-packages/ansible/parsing/vault/__init__.py", line 1083, in write_data
self._shred_file(thefile)
File "<homedir>/test_ansible/lib/python3.10/site-packages/ansible/parsing/vault/__init__.py", line 837, in _shred_file
os.remove(tmp_path)
PermissionError: [Errno 13] Permission denied: '<path>/testdir/.bashrc'
> less testdir/.bashrc
"testdir/.bashrc" may be a binary file. See it anyway?
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81455
|
https://github.com/ansible/ansible/pull/81660
|
a861b1adba5d4a12f61ed268f67a224bdaa5f835
|
6177888cf6a6b9fba24e3875bc73138e5be2a224
| 2023-08-07T10:30:13Z |
python
| 2023-09-07T19:30:05Z |
test/integration/targets/ansible-vault/runme.sh
|
#!/usr/bin/env bash
set -euvx
source virtualenv.sh
MYTMPDIR=$(mktemp -d 2>/dev/null || mktemp -d -t 'mytmpdir')
trap 'rm -rf "${MYTMPDIR}"' EXIT
# create a test file
TEST_FILE="${MYTMPDIR}/test_file"
echo "This is a test file" > "${TEST_FILE}"
TEST_FILE_1_2="${MYTMPDIR}/test_file_1_2"
echo "This is a test file for format 1.2" > "${TEST_FILE_1_2}"
TEST_FILE_ENC_PASSWORD="${MYTMPDIR}/test_file_enc_password"
echo "This is a test file for encrypted with a vault password that is itself vault encrypted" > "${TEST_FILE_ENC_PASSWORD}"
TEST_FILE_ENC_PASSWORD_DEFAULT="${MYTMPDIR}/test_file_enc_password_default"
echo "This is a test file for encrypted with a vault password that is itself vault encrypted using --encrypted-vault-id default" > "${TEST_FILE_ENC_PASSWORD_DEFAULT}"
TEST_FILE_OUTPUT="${MYTMPDIR}/test_file_output"
TEST_FILE_EDIT="${MYTMPDIR}/test_file_edit"
echo "This is a test file for edit" > "${TEST_FILE_EDIT}"
TEST_FILE_EDIT2="${MYTMPDIR}/test_file_edit2"
echo "This is a test file for edit2" > "${TEST_FILE_EDIT2}"
# test case for https://github.com/ansible/ansible/issues/35834
# (being prompted for new password on vault-edit with no configured passwords)
TEST_FILE_EDIT3="${MYTMPDIR}/test_file_edit3"
echo "This is a test file for edit3" > "${TEST_FILE_EDIT3}"
# ansible-config view
ansible-config view
# ansible-config
ansible-config dump --only-changed
ansible-vault encrypt "$@" --vault-id vault-password "${TEST_FILE_EDIT3}"
# EDITOR=./faux-editor.py ansible-vault edit "$@" "${TEST_FILE_EDIT3}"
EDITOR=./faux-editor.py ansible-vault edit --vault-id vault-password -vvvvv "${TEST_FILE_EDIT3}"
echo $?
# view the vault encrypted password file
ansible-vault view "$@" --vault-id vault-password encrypted-vault-password
# encrypt with a password from a vault encrypted password file and multiple vault-ids
# should fail because we don't know which vault id to use to encrypt with
ansible-vault encrypt "$@" --vault-id vault-password --vault-id encrypted-vault-password "${TEST_FILE_ENC_PASSWORD}" && :
WRONG_RC=$?
echo "rc was $WRONG_RC (5 is expected)"
[ $WRONG_RC -eq 5 ]
# try to view the file encrypted with the vault-password we didn't specify
# to verify we didn't choose the wrong vault-id
ansible-vault view "$@" --vault-id vault-password encrypted-vault-password
FORMAT_1_1_HEADER="\$ANSIBLE_VAULT;1.1;AES256"
FORMAT_1_2_HEADER="\$ANSIBLE_VAULT;1.2;AES256"
VAULT_PASSWORD_FILE=vault-password
# new format, view, using password client script
ansible-vault view "$@" --vault-id [email protected] format_1_1_AES256.yml
# view, using password client script, unknown vault/keyname
ansible-vault view "$@" --vault-id [email protected] format_1_1_AES256.yml && :
# Use linux setsid to test without a tty. No setsid if osx/bsd though...
if [ -x "$(command -v setsid)" ]; then
# tests related to https://github.com/ansible/ansible/issues/30993
CMD='ansible-playbook -i ../../inventory -vvvvv --ask-vault-pass test_vault.yml'
setsid sh -c "echo test-vault-password|${CMD}" < /dev/null > log 2>&1 && :
WRONG_RC=$?
cat log
echo "rc was $WRONG_RC (0 is expected)"
[ $WRONG_RC -eq 0 ]
setsid sh -c 'tty; ansible-vault view --ask-vault-pass -vvvvv test_vault.yml' < /dev/null > log 2>&1 && :
WRONG_RC=$?
echo "rc was $WRONG_RC (1 is expected)"
[ $WRONG_RC -eq 1 ]
cat log
setsid sh -c 'tty; echo passbhkjhword|ansible-playbook -i ../../inventory -vvvvv --ask-vault-pass test_vault.yml' < /dev/null > log 2>&1 && :
WRONG_RC=$?
echo "rc was $WRONG_RC (1 is expected)"
[ $WRONG_RC -eq 1 ]
cat log
setsid sh -c 'tty; echo test-vault-password |ansible-playbook -i ../../inventory -vvvvv --ask-vault-pass test_vault.yml' < /dev/null > log 2>&1
echo $?
cat log
setsid sh -c 'tty; echo test-vault-password|ansible-playbook -i ../../inventory -vvvvv --ask-vault-pass test_vault.yml' < /dev/null > log 2>&1
echo $?
cat log
setsid sh -c 'tty; echo test-vault-password |ansible-playbook -i ../../inventory -vvvvv --ask-vault-pass test_vault.yml' < /dev/null > log 2>&1
echo $?
cat log
setsid sh -c 'tty; echo test-vault-password|ansible-vault view --ask-vault-pass -vvvvv vaulted.inventory' < /dev/null > log 2>&1
echo $?
cat log
# test using --ask-vault-password option
CMD='ansible-playbook -i ../../inventory -vvvvv --ask-vault-password test_vault.yml'
setsid sh -c "echo test-vault-password|${CMD}" < /dev/null > log 2>&1 && :
WRONG_RC=$?
cat log
echo "rc was $WRONG_RC (0 is expected)"
[ $WRONG_RC -eq 0 ]
fi
ansible-vault view "$@" --vault-password-file vault-password-wrong format_1_1_AES256.yml && :
WRONG_RC=$?
echo "rc was $WRONG_RC (1 is expected)"
[ $WRONG_RC -eq 1 ]
set -eux
# new format, view
ansible-vault view "$@" --vault-password-file vault-password format_1_1_AES256.yml
# new format, view with vault-id
ansible-vault view "$@" --vault-id=vault-password format_1_1_AES256.yml
# new format, view, using password script
ansible-vault view "$@" --vault-password-file password-script.py format_1_1_AES256.yml
# new format, view, using password script with vault-id
ansible-vault view "$@" --vault-id password-script.py format_1_1_AES256.yml
# new 1.2 format, view
ansible-vault view "$@" --vault-password-file vault-password format_1_2_AES256.yml
# new 1.2 format, view with vault-id
ansible-vault view "$@" --vault-id=test_vault_id@vault-password format_1_2_AES256.yml
# new 1.2 format, view, using password script
ansible-vault view "$@" --vault-password-file password-script.py format_1_2_AES256.yml
# new 1.2 format, view, using password script with vault-id
ansible-vault view "$@" --vault-id password-script.py format_1_2_AES256.yml
# newish 1.1 format, view, using a vault-id list from config env var
ANSIBLE_VAULT_IDENTITY_LIST='wrong-password@vault-password-wrong,default@vault-password' ansible-vault view "$@" --vault-id password-script.py format_1_1_AES256.yml
# new 1.2 format, view, ENFORCE_IDENTITY_MATCH=true, should fail, no 'test_vault_id' vault_id
ANSIBLE_VAULT_ID_MATCH=1 ansible-vault view "$@" --vault-password-file vault-password format_1_2_AES256.yml && :
WRONG_RC=$?
echo "rc was $WRONG_RC (1 is expected)"
[ $WRONG_RC -eq 1 ]
# new 1.2 format, view with vault-id, ENFORCE_IDENTITY_MATCH=true, should work, 'test_vault_id' is provided
ANSIBLE_VAULT_ID_MATCH=1 ansible-vault view "$@" --vault-id=test_vault_id@vault-password format_1_2_AES256.yml
# new 1.2 format, view, using password script, ENFORCE_IDENTITY_MATCH=true, should fail, no 'test_vault_id'
ANSIBLE_VAULT_ID_MATCH=1 ansible-vault view "$@" --vault-password-file password-script.py format_1_2_AES256.yml && :
WRONG_RC=$?
echo "rc was $WRONG_RC (1 is expected)"
[ $WRONG_RC -eq 1 ]
# new 1.2 format, view, using password script with vault-id, ENFORCE_IDENTITY_MATCH=true, should fail
ANSIBLE_VAULT_ID_MATCH=1 ansible-vault view "$@" --vault-id password-script.py format_1_2_AES256.yml && :
WRONG_RC=$?
echo "rc was $WRONG_RC (1 is expected)"
[ $WRONG_RC -eq 1 ]
# new 1.2 format, view, using password script with vault-id, ENFORCE_IDENTITY_MATCH=true, 'test_vault_id' provided should work
ANSIBLE_VAULT_ID_MATCH=1 ansible-vault view "$@" [email protected] format_1_2_AES256.yml
# test with a default vault password set via config/env, right password
ANSIBLE_VAULT_PASSWORD_FILE=vault-password ansible-vault view "$@" format_1_1_AES256.yml
# test with a default vault password set via config/env, wrong password
ANSIBLE_VAULT_PASSWORD_FILE=vault-password-wrong ansible-vault view "$@" format_1_1_AES.yml && :
WRONG_RC=$?
echo "rc was $WRONG_RC (1 is expected)"
[ $WRONG_RC -eq 1 ]
# test with a default vault-id list set via config/env, right password
ANSIBLE_VAULT_PASSWORD_FILE=wrong@vault-password-wrong,correct@vault-password ansible-vault view "$@" format_1_1_AES.yml && :
# test with a default vault-id list set via config/env,wrong passwords
ANSIBLE_VAULT_PASSWORD_FILE=wrong@vault-password-wrong,alsowrong@vault-password-wrong ansible-vault view "$@" format_1_1_AES.yml && :
WRONG_RC=$?
echo "rc was $WRONG_RC (1 is expected)"
[ $WRONG_RC -eq 1 ]
# try specifying a --encrypt-vault-id that doesn't exist; should exit with an error indicating
# that --encrypt-vault-id did not match any of the known vault-ids
ansible-vault encrypt "$@" --vault-password-file vault-password --encrypt-vault-id doesnt_exist "${TEST_FILE}" && :
WRONG_RC=$?
echo "rc was $WRONG_RC (1 is expected)"
[ $WRONG_RC -eq 1 ]
# encrypt it
ansible-vault encrypt "$@" --vault-password-file vault-password "${TEST_FILE}"
ansible-vault view "$@" --vault-password-file vault-password "${TEST_FILE}"
# view with multiple vault-password files, including a wrong one
ansible-vault view "$@" --vault-password-file vault-password --vault-password-file vault-password-wrong "${TEST_FILE}"
# view with multiple vault-password files, including a wrong one, using vault-id
ansible-vault view "$@" --vault-id vault-password --vault-id vault-password-wrong "${TEST_FILE}"
# And with the password files specified in a different order
ansible-vault view "$@" --vault-password-file vault-password-wrong --vault-password-file vault-password "${TEST_FILE}"
# And with the password files specified in a different order, using vault-id
ansible-vault view "$@" --vault-id vault-password-wrong --vault-id vault-password "${TEST_FILE}"
# And with the password files specified in a different order, using --vault-id and non default vault_ids
ansible-vault view "$@" --vault-id test_vault_id@vault-password-wrong --vault-id test_vault_id@vault-password "${TEST_FILE}"
ansible-vault decrypt "$@" --vault-password-file vault-password "${TEST_FILE}"
# encrypt it, using a vault_id so we write a 1.2 format file
ansible-vault encrypt "$@" --vault-id test_vault_1_2@vault-password "${TEST_FILE_1_2}"
ansible-vault view "$@" --vault-id vault-password "${TEST_FILE_1_2}"
ansible-vault view "$@" --vault-id test_vault_1_2@vault-password "${TEST_FILE_1_2}"
# view with multiple vault-password files, including a wrong one
ansible-vault view "$@" --vault-id vault-password --vault-id wrong_password@vault-password-wrong "${TEST_FILE_1_2}"
# And with the password files specified in a different order, using vault-id
ansible-vault view "$@" --vault-id vault-password-wrong --vault-id vault-password "${TEST_FILE_1_2}"
# And with the password files specified in a different order, using --vault-id and non default vault_ids
ansible-vault view "$@" --vault-id test_vault_id@vault-password-wrong --vault-id test_vault_id@vault-password "${TEST_FILE_1_2}"
ansible-vault decrypt "$@" --vault-id test_vault_1_2@vault-password "${TEST_FILE_1_2}"
# multiple vault passwords
ansible-vault view "$@" --vault-password-file vault-password --vault-password-file vault-password-wrong format_1_1_AES256.yml
# multiple vault passwords, --vault-id
ansible-vault view "$@" --vault-id test_vault_id@vault-password --vault-id test_vault_id@vault-password-wrong format_1_1_AES256.yml
# encrypt it, with password from password script
ansible-vault encrypt "$@" --vault-password-file password-script.py "${TEST_FILE}"
ansible-vault view "$@" --vault-password-file password-script.py "${TEST_FILE}"
ansible-vault decrypt "$@" --vault-password-file password-script.py "${TEST_FILE}"
# encrypt it, with password from password script
ansible-vault encrypt "$@" --vault-id [email protected] "${TEST_FILE}"
ansible-vault view "$@" --vault-id [email protected] "${TEST_FILE}"
ansible-vault decrypt "$@" --vault-id [email protected] "${TEST_FILE}"
# new password file for rekeyed file
NEW_VAULT_PASSWORD="${MYTMPDIR}/new-vault-password"
echo "newpassword" > "${NEW_VAULT_PASSWORD}"
ansible-vault encrypt "$@" --vault-password-file vault-password "${TEST_FILE}"
ansible-vault rekey "$@" --vault-password-file vault-password --new-vault-password-file "${NEW_VAULT_PASSWORD}" "${TEST_FILE}"
# --new-vault-password-file and --new-vault-id should cause options error
ansible-vault rekey "$@" --vault-password-file vault-password --new-vault-id=foobar --new-vault-password-file "${NEW_VAULT_PASSWORD}" "${TEST_FILE}" && :
WRONG_RC=$?
echo "rc was $WRONG_RC (2 is expected)"
[ $WRONG_RC -eq 2 ]
ansible-vault view "$@" --vault-password-file "${NEW_VAULT_PASSWORD}" "${TEST_FILE}"
# view file with unicode in filename
ansible-vault view "$@" --vault-password-file vault-password vault-cafΓ©.yml
# view with old password file and new password file
ansible-vault view "$@" --vault-password-file "${NEW_VAULT_PASSWORD}" --vault-password-file vault-password "${TEST_FILE}"
# view with old password file and new password file, different order
ansible-vault view "$@" --vault-password-file vault-password --vault-password-file "${NEW_VAULT_PASSWORD}" "${TEST_FILE}"
# view with old password file and new password file and another wrong
ansible-vault view "$@" --vault-password-file "${NEW_VAULT_PASSWORD}" --vault-password-file vault-password-wrong --vault-password-file vault-password "${TEST_FILE}"
# view with old password file and new password file and another wrong, using --vault-id
ansible-vault view "$@" --vault-id "tmp_new_password@${NEW_VAULT_PASSWORD}" --vault-id wrong_password@vault-password-wrong --vault-id myorg@vault-password "${TEST_FILE}"
ansible-vault decrypt "$@" --vault-password-file "${NEW_VAULT_PASSWORD}" "${TEST_FILE}"
# reading/writing to/from stdin/stdin (See https://github.com/ansible/ansible/issues/23567)
ansible-vault encrypt "$@" --vault-password-file "${VAULT_PASSWORD_FILE}" --output="${TEST_FILE_OUTPUT}" < "${TEST_FILE}"
OUTPUT=$(ansible-vault decrypt "$@" --vault-password-file "${VAULT_PASSWORD_FILE}" --output=- < "${TEST_FILE_OUTPUT}")
echo "${OUTPUT}" | grep 'This is a test file'
OUTPUT_DASH=$(ansible-vault decrypt "$@" --vault-password-file "${VAULT_PASSWORD_FILE}" --output=- "${TEST_FILE_OUTPUT}")
echo "${OUTPUT_DASH}" | grep 'This is a test file'
OUTPUT_DASH_SPACE=$(ansible-vault decrypt "$@" --vault-password-file "${VAULT_PASSWORD_FILE}" --output - "${TEST_FILE_OUTPUT}")
echo "${OUTPUT_DASH_SPACE}" | grep 'This is a test file'
# test using an empty vault password file
ansible-vault view "$@" --vault-password-file empty-password format_1_1_AES256.yml && :
WRONG_RC=$?
echo "rc was $WRONG_RC (1 is expected)"
[ $WRONG_RC -eq 1 ]
ansible-vault view "$@" --vault-id=empty@empty-password --vault-password-file empty-password format_1_1_AES256.yml && :
WRONG_RC=$?
echo "rc was $WRONG_RC (1 is expected)"
[ $WRONG_RC -eq 1 ]
echo 'foo' > some_file.txt
ansible-vault encrypt "$@" --vault-password-file empty-password some_file.txt && :
WRONG_RC=$?
echo "rc was $WRONG_RC (1 is expected)"
[ $WRONG_RC -eq 1 ]
ansible-vault encrypt_string "$@" --vault-password-file "${NEW_VAULT_PASSWORD}" "a test string"
# Test with multiple vault password files
# https://github.com/ansible/ansible/issues/57172
env ANSIBLE_VAULT_PASSWORD_FILE=vault-password ansible-vault encrypt_string "$@" --vault-password-file "${NEW_VAULT_PASSWORD}" --encrypt-vault-id default "a test string"
ansible-vault encrypt_string "$@" --vault-password-file "${NEW_VAULT_PASSWORD}" --name "blippy" "a test string names blippy"
ansible-vault encrypt_string "$@" --vault-id "${NEW_VAULT_PASSWORD}" "a test string"
ansible-vault encrypt_string "$@" --vault-id "${NEW_VAULT_PASSWORD}" --name "blippy" "a test string names blippy"
# from stdin
ansible-vault encrypt_string "$@" --vault-password-file "${NEW_VAULT_PASSWORD}" < "${TEST_FILE}"
ansible-vault encrypt_string "$@" --vault-password-file "${NEW_VAULT_PASSWORD}" --stdin-name "the_var_from_stdin" < "${TEST_FILE}"
# write to file
ansible-vault encrypt_string "$@" --vault-password-file "${NEW_VAULT_PASSWORD}" --name "blippy" "a test string names blippy" --output "${MYTMPDIR}/enc_string_test_file"
[ -f "${MYTMPDIR}/enc_string_test_file" ];
# test ansible-vault edit with a faux editor
ansible-vault encrypt "$@" --vault-password-file vault-password "${TEST_FILE_EDIT}"
# edit a 1.1 format with no vault-id, should stay 1.1
EDITOR=./faux-editor.py ansible-vault edit "$@" --vault-password-file vault-password "${TEST_FILE_EDIT}"
head -1 "${TEST_FILE_EDIT}" | grep "${FORMAT_1_1_HEADER}"
# edit a 1.1 format with vault-id, should stay 1.1
cat "${TEST_FILE_EDIT}"
EDITOR=./faux-editor.py ansible-vault edit "$@" --vault-id vault_password@vault-password "${TEST_FILE_EDIT}"
cat "${TEST_FILE_EDIT}"
head -1 "${TEST_FILE_EDIT}" | grep "${FORMAT_1_1_HEADER}"
ansible-vault encrypt "$@" --vault-id vault_password@vault-password "${TEST_FILE_EDIT2}"
# verify that we aren't prompted for a new vault password on edit if we are running interactively (ie, with prompts)
# have to use setsid and --ask-vault-pass to force a prompt to simulate it.
# See https://github.com/ansible/ansible/issues/35834
setsid sh -c 'tty; echo password |ansible-vault edit --ask-vault-pass vault_test.yml' < /dev/null > log 2>&1 && :
grep 'New Vault password' log && :
WRONG_RC=$?
echo "The stdout log had 'New Vault password' in it and it is not supposed to. rc of grep was $WRONG_RC (1 is expected)"
[ $WRONG_RC -eq 1 ]
# edit a 1.2 format with vault id, should keep vault id and 1.2 format
EDITOR=./faux-editor.py ansible-vault edit "$@" --vault-id vault_password@vault-password "${TEST_FILE_EDIT2}"
head -1 "${TEST_FILE_EDIT2}" | grep "${FORMAT_1_2_HEADER};vault_password"
# edit a 1.2 file with no vault-id, should keep vault id and 1.2 format
EDITOR=./faux-editor.py ansible-vault edit "$@" --vault-password-file vault-password "${TEST_FILE_EDIT2}"
head -1 "${TEST_FILE_EDIT2}" | grep "${FORMAT_1_2_HEADER};vault_password"
# encrypt with a password from a vault encrypted password file and multiple vault-ids
# should fail because we don't know which vault id to use to encrypt with
ansible-vault encrypt "$@" --vault-id vault-password --vault-id encrypted-vault-password "${TEST_FILE_ENC_PASSWORD}" && :
WRONG_RC=$?
echo "rc was $WRONG_RC (5 is expected)"
[ $WRONG_RC -eq 5 ]
# encrypt with a password from a vault encrypted password file and multiple vault-ids
# but this time specify with --encrypt-vault-id, but specifying vault-id names (instead of default)
# ansible-vault encrypt "$@" --vault-id from_vault_password@vault-password --vault-id from_encrypted_vault_password@encrypted-vault-password --encrypt-vault-id from_encrypted_vault_password "${TEST_FILE_ENC_PASSWORD}"
# try to view the file encrypted with the vault-password we didn't specify
# to verify we didn't choose the wrong vault-id
# ansible-vault view "$@" --vault-id vault-password "${TEST_FILE_ENC_PASSWORD}" && :
# WRONG_RC=$?
# echo "rc was $WRONG_RC (1 is expected)"
# [ $WRONG_RC -eq 1 ]
ansible-vault encrypt "$@" --vault-id vault-password "${TEST_FILE_ENC_PASSWORD}"
# view the file encrypted with a password from a vault encrypted password file
ansible-vault view "$@" --vault-id vault-password --vault-id encrypted-vault-password "${TEST_FILE_ENC_PASSWORD}"
# try to view the file encrypted with a password from a vault encrypted password file but without the password to the password file.
# This should fail with an error.
ansible-vault view "$@" --vault-id encrypted-vault-password "${TEST_FILE_ENC_PASSWORD}" && :
WRONG_RC=$?
echo "rc was $WRONG_RC (1 is expected)"
[ $WRONG_RC -eq 1 ]
# test playbooks using vaulted files
ansible-playbook test_vault.yml -i ../../inventory -v "$@" --vault-password-file vault-password --list-tasks
ansible-playbook test_vault.yml -i ../../inventory -v "$@" --vault-password-file vault-password --list-hosts
ansible-playbook test_vault.yml -i ../../inventory -v "$@" --vault-password-file vault-password --syntax-check
ansible-playbook test_vault.yml -i ../../inventory -v "$@" --vault-password-file vault-password
ansible-playbook test_vault_embedded.yml -i ../../inventory -v "$@" --vault-password-file vault-password --syntax-check
ansible-playbook test_vault_embedded.yml -i ../../inventory -v "$@" --vault-password-file vault-password
ansible-playbook test_vaulted_inventory.yml -i vaulted.inventory -v "$@" --vault-password-file vault-password
ansible-playbook test_vaulted_template.yml -i ../../inventory -v "$@" --vault-password-file vault-password
# test using --vault-pass-file option
ansible-playbook test_vault.yml -i ../../inventory -v "$@" --vault-pass-file vault-password
# install TOML to parse the toml inventory
# test playbooks using vaulted files (toml)
pip install toml
ansible-vault encrypt ./inventory.toml -v "$@" --vault-password-file=./vault-password
ansible-playbook test_vaulted_inventory_toml.yml -i ./inventory.toml -v "$@" --vault-password-file vault-password
ansible-vault decrypt ./inventory.toml -v "$@" --vault-password-file=./vault-password
# test a playbook with a host_var whose value is non-ascii utf8 (see https://github.com/ansible/ansible/issues/37258)
ansible-playbook -i ../../inventory -v "$@" --vault-id vault-password test_vaulted_utf8_value.yml
# test with password from password script
ansible-playbook test_vault.yml -i ../../inventory -v "$@" --vault-password-file password-script.py
ansible-playbook test_vault_embedded.yml -i ../../inventory -v "$@" --vault-password-file password-script.py
# with multiple password files
ansible-playbook test_vault.yml -i ../../inventory -v "$@" --vault-password-file vault-password --vault-password-file vault-password-wrong
ansible-playbook test_vault.yml -i ../../inventory -v "$@" --vault-password-file vault-password-wrong --vault-password-file vault-password
ansible-playbook test_vault_embedded.yml -i ../../inventory -v "$@" --vault-password-file vault-password --vault-password-file vault-password-wrong --syntax-check
ansible-playbook test_vault_embedded.yml -i ../../inventory -v "$@" --vault-password-file vault-password-wrong --vault-password-file vault-password
# test with a default vault password file set in config
ANSIBLE_VAULT_PASSWORD_FILE=vault-password ansible-playbook test_vault_embedded.yml -i ../../inventory -v "$@" --vault-password-file vault-password-wrong
# test using vault_identity_list config
ANSIBLE_VAULT_IDENTITY_LIST='wrong-password@vault-password-wrong,default@vault-password' ansible-playbook test_vault.yml -i ../../inventory -v "$@"
# test that we can have a vault encrypted yaml file that includes embedded vault vars
# that were encrypted with a different vault secret
ansible-playbook test_vault_file_encrypted_embedded.yml -i ../../inventory "$@" --vault-id encrypted_file_encrypted_var_password --vault-id vault-password
# with multiple password files, --vault-id, ordering
ansible-playbook test_vault.yml -i ../../inventory -v "$@" --vault-id vault-password --vault-id vault-password-wrong
ansible-playbook test_vault.yml -i ../../inventory -v "$@" --vault-id vault-password-wrong --vault-id vault-password
ansible-playbook test_vault_embedded.yml -i ../../inventory -v "$@" --vault-id vault-password --vault-id vault-password-wrong --syntax-check
ansible-playbook test_vault_embedded.yml -i ../../inventory -v "$@" --vault-id vault-password-wrong --vault-id vault-password
# test with multiple password files, including a script, and a wrong password
ansible-playbook test_vault_embedded.yml -i ../../inventory -v "$@" --vault-password-file vault-password-wrong --vault-password-file password-script.py --vault-password-file vault-password
# test with multiple password files, including a script, and a wrong password, and a mix of --vault-id and --vault-password-file
ansible-playbook test_vault_embedded.yml -i ../../inventory -v "$@" --vault-password-file vault-password-wrong --vault-id password-script.py --vault-id vault-password
# test with multiple password files, including a script, and a wrong password, and a mix of --vault-id and --vault-password-file
ansible-playbook test_vault_embedded_ids.yml -i ../../inventory -v "$@" \
--vault-password-file vault-password-wrong \
--vault-id password-script.py --vault-id example1@example1_password \
--vault-id example2@example2_password --vault-password-file example3_password \
--vault-id vault-password
# with wrong password
ansible-playbook test_vault.yml -i ../../inventory -v "$@" --vault-password-file vault-password-wrong && :
WRONG_RC=$?
echo "rc was $WRONG_RC (1 is expected)"
[ $WRONG_RC -eq 1 ]
# with multiple wrong passwords
ansible-playbook test_vault.yml -i ../../inventory -v "$@" --vault-password-file vault-password-wrong --vault-password-file vault-password-wrong && :
WRONG_RC=$?
echo "rc was $WRONG_RC (1 is expected)"
[ $WRONG_RC -eq 1 ]
# with wrong password, --vault-id
ansible-playbook test_vault.yml -i ../../inventory -v "$@" --vault-id vault-password-wrong && :
WRONG_RC=$?
echo "rc was $WRONG_RC (1 is expected)"
[ $WRONG_RC -eq 1 ]
# with multiple wrong passwords with --vault-id
ansible-playbook test_vault.yml -i ../../inventory -v "$@" --vault-id vault-password-wrong --vault-id vault-password-wrong && :
WRONG_RC=$?
echo "rc was $WRONG_RC (1 is expected)"
[ $WRONG_RC -eq 1 ]
# with multiple wrong passwords with --vault-id
ansible-playbook test_vault.yml -i ../../inventory -v "$@" --vault-id wrong1@vault-password-wrong --vault-id wrong2@vault-password-wrong && :
WRONG_RC=$?
echo "rc was $WRONG_RC (1 is expected)"
[ $WRONG_RC -eq 1 ]
# with empty password file
ansible-playbook test_vault.yml -i ../../inventory -v "$@" --vault-id empty@empty-password && :
WRONG_RC=$?
echo "rc was $WRONG_RC (1 is expected)"
[ $WRONG_RC -eq 1 ]
# test invalid format ala https://github.com/ansible/ansible/issues/28038
EXPECTED_ERROR='Vault format unhexlify error: Non-hexadecimal digit found'
ansible-playbook "$@" -i invalid_format/inventory --vault-id invalid_format/vault-secret invalid_format/broken-host-vars-tasks.yml 2>&1 | grep "${EXPECTED_ERROR}"
EXPECTED_ERROR='Vault format unhexlify error: Odd-length string'
ansible-playbook "$@" -i invalid_format/inventory --vault-id invalid_format/vault-secret invalid_format/broken-group-vars-tasks.yml 2>&1 | grep "${EXPECTED_ERROR}"
# Run playbook with vault file with unicode in filename (https://github.com/ansible/ansible/issues/50316)
ansible-playbook -i ../../inventory -v "$@" --vault-password-file vault-password test_utf8_value_in_filename.yml
# Ensure we don't leave unencrypted temp files dangling
ansible-playbook -v "$@" --vault-password-file vault-password test_dangling_temp.yml
ansible-playbook "$@" --vault-password-file vault-password single_vault_as_string.yml
# Test that only one accessible vault password is required
export ANSIBLE_VAULT_IDENTITY_LIST="id1@./nonexistent, id2@${MYTMPDIR}/unreadable, id3@./vault-password"
touch "${MYTMPDIR}/unreadable"
sudo chmod 000 "${MYTMPDIR}/unreadable"
ansible-vault encrypt_string content
ansible-vault encrypt_string content --encrypt-vault-id id3
set +e
# Try to use a missing vault password file
ansible-vault encrypt_string content --encrypt-vault-id id1 2>&1 | tee out.txt
test $? -ne 0
grep out.txt -e '[WARNING]: Error getting vault password file (id1)'
grep out.txt -e "ERROR! Did not find a match for --encrypt-vault-id=id2 in the known vault-ids ['id3']"
# Try to use an inaccessible vault password file
ansible-vault encrypt_string content --encrypt-vault-id id2 2>&1 | tee out.txt
test $? -ne 0
grep out.txt -e "[WARNING]: Error in vault password file loading (id2)"
grep out.txt -e "ERROR! Did not find a match for --encrypt-vault-id=id2 in the known vault-ids ['id3']"
set -e
unset ANSIBLE_VAULT_IDENTITY_LIST
# 'real script'
ansible-playbook realpath.yml "$@" --vault-password-file script/vault-secret.sh
# using symlink
ansible-playbook symlink.yml "$@" --vault-password-file symlink/get-password-symlink
### NEGATIVE TESTS
ER='Attempting to decrypt'
#### no secrets
# 'real script'
ansible-playbook realpath.yml "$@" 2>&1 |grep "${ER}"
# using symlink
ansible-playbook symlink.yml "$@" 2>&1 |grep "${ER}"
ER='Decryption failed'
### wrong secrets
# 'real script'
ansible-playbook realpath.yml "$@" --vault-password-file symlink/get-password-symlink 2>&1 |grep "${ER}"
# using symlink
ansible-playbook symlink.yml "$@" --vault-password-file script/vault-secret.sh 2>&1 |grep "${ER}"
### SALT TESTING ###
# prep files for encryption
for salted in test1 test2 test3
do
echo 'this is salty' > "salted_${salted}"
done
# encrypt files
ANSIBLE_VAULT_ENCRYPT_SALT=salty ansible-vault encrypt salted_test1 --vault-password-file example1_password "$@"
ANSIBLE_VAULT_ENCRYPT_SALT=salty ansible-vault encrypt salted_test2 --vault-password-file example1_password "$@"
ansible-vault encrypt salted_test3 --vault-password-file example1_password "$@"
# should be the same
out=$(diff salted_test1 salted_test2)
[ "${out}" == "" ]
# should be different
out=$(diff salted_test1 salted_test3 || true)
[ "${out}" != "" ]
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,486 |
Role dependencies: change in behavior when a role runs conditionally
|
### Summary
Hi,
Since Ansible 8, when multiple roles include the same dependency and the first role doesn't run because of a `when` condition, the dependency is not included at all.
### Issue Type
Bug Report
### Component Name
role
### Ansible Version
```console
ansible [core 2.15.2]
config file = /home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg
configured module search path = ['/home/courgette/Documents/Bimdata/dev/deployment/library', '/home/courgette/Documents/Bimdata/dev/deployment/kubespray/library']
ansible python module location = /home/courgette/.virtualenvs/deploy/lib/python3.11/site-packages/ansible
ansible collection location = /home/courgette/.ansible/collections:/usr/share/ansible/collections
executable location = /home/courgette/.virtualenvs/deploy/bin/ansible
python version = 3.11.3 (main, Jun 5 2023, 09:32:32) [GCC 13.1.1 20230429] (/home/courgette/.virtualenvs/deploy/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
CONFIG_FILE() = /home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg
DEFAULT_FILTER_PLUGIN_PATH(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = ['/home/courgette/Documents/Bimdata/dev/deployment/ansible_plugins/filter_plugins']
DEFAULT_TIMEOUT(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = 60
DEFAULT_VAULT_PASSWORD_FILE(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = /home/courgette/Documents/Bimdata/dev/deployment/.get-vault-pass.sh
EDITOR(env: EDITOR) = vim
PAGER(env: PAGER) = less
CONNECTION:
==========
paramiko_ssh:
____________
timeout(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = 60
ssh:
___
pipelining(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = True
timeout(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = 60
```
### OS / Environment
Archlinux
### Steps to Reproduce
- Create a file `roles/test_a/tasks/main.yml` with:
```
---
- name: "Debug."
ansible.builtin.debug:
msg: "test_a"
```
- Create a file `roles/test_b/meta/main.yml` with:
```
---
dependencies:
- role: test_a
```
- Create a file `roles/test_b/tasks/main.yml` with:
```
---
- name: "Debug."
ansible.builtin.debug:
msg: "test_b
```
- Duplicate test_b into test_c: `cp -r roles/test_b roles/test_c`
- Modify the debug message of the test_c role: `sed -i 's/test_b/test_c/g' roles/test_c/tasks/main.yml`
- Create the file `test.yml` with:
```
---
- name: Test
hosts: localhost
gather_facts: false
become: false
vars:
skip_b: true
skip_c: false
roles:
- role: test_b
when: not skip_b
- role: test_c
when: not skip_c
```
- Run: `ansible-playbook test.yml`
### Expected Results
I expected test_a to be included by test_c, but I was shocked that it was not.
It's working fine with Ansible 7. For reference, here is the complete version of Ansible 7 that I use, and the corresponding output.
```
ansible [core 2.14.8]
config file = /home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg
configured module search path = ['/home/courgette/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/courgette/.virtualenvs/deploy/lib/python3.11/site-packages/ansible
ansible collection location = /home/courgette/.ansible/collections:/usr/share/ansible/collections
executable location = /home/courgette/.virtualenvs/deploy/bin/ansible
python version = 3.11.3 (main, Jun 5 2023, 09:32:32) [GCC 13.1.1 20230429] (/home/courgette/.virtualenvs/deploy/bin/python)
jinja version = 3.1.2
libyaml = True
```
```
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Test] *********************************************************************************************************************************************************************************************************
TASK [test_a : Debug.] **********************************************************************************************************************************************************************************************
skipping: [localhost]
TASK [test_b : Debug.] **********************************************************************************************************************************************************************************************
skipping: [localhost]
TASK [test_a : Debug.] **********************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "test_a"
}
TASK [test_c : Debug.] **********************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "test_c"
}
PLAY RECAP **********************************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
```
### Actual Results
```console
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Test] *********************************************************************************************************************************************************************************************************
TASK [test_a : Debug.] **********************************************************************************************************************************************************************************************
skipping: [localhost]
TASK [test_b : Debug.] **********************************************************************************************************************************************************************************************
skipping: [localhost]
TASK [test_c : Debug.] **********************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "test_c"
}
PLAY RECAP **********************************************************************************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81486
|
https://github.com/ansible/ansible/pull/81565
|
3eb96f2c68826718cda83c42cb519c78c0a7a8a8
|
8034651cd2626e0d634b2b52eeafc81852d8110d
| 2023-08-10T17:14:55Z |
python
| 2023-09-08T16:11:48Z |
changelogs/fragments/role-deduplication-condition.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,486 |
Role dependencies: change in behavior when a role runs conditionally
|
### Summary
Hi,
Since Ansible 8, when multiple roles include the same dependency and the first role doesn't run because of a `when` condition, the dependency is not included at all.
### Issue Type
Bug Report
### Component Name
role
### Ansible Version
```console
ansible [core 2.15.2]
config file = /home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg
configured module search path = ['/home/courgette/Documents/Bimdata/dev/deployment/library', '/home/courgette/Documents/Bimdata/dev/deployment/kubespray/library']
ansible python module location = /home/courgette/.virtualenvs/deploy/lib/python3.11/site-packages/ansible
ansible collection location = /home/courgette/.ansible/collections:/usr/share/ansible/collections
executable location = /home/courgette/.virtualenvs/deploy/bin/ansible
python version = 3.11.3 (main, Jun 5 2023, 09:32:32) [GCC 13.1.1 20230429] (/home/courgette/.virtualenvs/deploy/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
CONFIG_FILE() = /home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg
DEFAULT_FILTER_PLUGIN_PATH(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = ['/home/courgette/Documents/Bimdata/dev/deployment/ansible_plugins/filter_plugins']
DEFAULT_TIMEOUT(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = 60
DEFAULT_VAULT_PASSWORD_FILE(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = /home/courgette/Documents/Bimdata/dev/deployment/.get-vault-pass.sh
EDITOR(env: EDITOR) = vim
PAGER(env: PAGER) = less
CONNECTION:
==========
paramiko_ssh:
____________
timeout(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = 60
ssh:
___
pipelining(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = True
timeout(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = 60
```
### OS / Environment
Archlinux
### Steps to Reproduce
- Create a file `roles/test_a/tasks/main.yml` with:
```
---
- name: "Debug."
ansible.builtin.debug:
msg: "test_a"
```
- Create a file `roles/test_b/meta/main.yml` with:
```
---
dependencies:
- role: test_a
```
- Create a file `roles/test_b/tasks/main.yml` with:
```
---
- name: "Debug."
ansible.builtin.debug:
msg: "test_b
```
- Duplicate test_b into test_c: `cp -r roles/test_b roles/test_c`
- Modify the debug message of the test_c role: `sed -i 's/test_b/test_c/g' roles/test_c/tasks/main.yml`
- Create the file `test.yml` with:
```
---
- name: Test
hosts: localhost
gather_facts: false
become: false
vars:
skip_b: true
skip_c: false
roles:
- role: test_b
when: not skip_b
- role: test_c
when: not skip_c
```
- Run: `ansible-playbook test.yml`
### Expected Results
I expected test_a to be included by test_c, but I was shocked that it was not.
It's working fine with Ansible 7. For reference, here is the complete version of Ansible 7 that I use, and the corresponding output.
```
ansible [core 2.14.8]
config file = /home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg
configured module search path = ['/home/courgette/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/courgette/.virtualenvs/deploy/lib/python3.11/site-packages/ansible
ansible collection location = /home/courgette/.ansible/collections:/usr/share/ansible/collections
executable location = /home/courgette/.virtualenvs/deploy/bin/ansible
python version = 3.11.3 (main, Jun 5 2023, 09:32:32) [GCC 13.1.1 20230429] (/home/courgette/.virtualenvs/deploy/bin/python)
jinja version = 3.1.2
libyaml = True
```
```
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Test] *********************************************************************************************************************************************************************************************************
TASK [test_a : Debug.] **********************************************************************************************************************************************************************************************
skipping: [localhost]
TASK [test_b : Debug.] **********************************************************************************************************************************************************************************************
skipping: [localhost]
TASK [test_a : Debug.] **********************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "test_a"
}
TASK [test_c : Debug.] **********************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "test_c"
}
PLAY RECAP **********************************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
```
### Actual Results
```console
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Test] *********************************************************************************************************************************************************************************************************
TASK [test_a : Debug.] **********************************************************************************************************************************************************************************************
skipping: [localhost]
TASK [test_b : Debug.] **********************************************************************************************************************************************************************************************
skipping: [localhost]
TASK [test_c : Debug.] **********************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "test_c"
}
PLAY RECAP **********************************************************************************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81486
|
https://github.com/ansible/ansible/pull/81565
|
3eb96f2c68826718cda83c42cb519c78c0a7a8a8
|
8034651cd2626e0d634b2b52eeafc81852d8110d
| 2023-08-10T17:14:55Z |
python
| 2023-09-08T16:11:48Z |
lib/ansible/plugins/strategy/__init__.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import cmd
import functools
import os
import pprint
import queue
import sys
import threading
import time
import typing as t
from collections import deque
from multiprocessing import Lock
from jinja2.exceptions import UndefinedError
from ansible import constants as C
from ansible import context
from ansible.errors import AnsibleError, AnsibleFileNotFound, AnsibleUndefinedVariable, AnsibleParserError
from ansible.executor import action_write_locks
from ansible.executor.play_iterator import IteratingStates, PlayIterator
from ansible.executor.process.worker import WorkerProcess
from ansible.executor.task_result import TaskResult
from ansible.executor.task_queue_manager import CallbackSend, DisplaySend, PromptSend
from ansible.module_utils.six import string_types
from ansible.module_utils.common.text.converters import to_text
from ansible.module_utils.connection import Connection, ConnectionError
from ansible.playbook.conditional import Conditional
from ansible.playbook.handler import Handler
from ansible.playbook.helpers import load_list_of_blocks
from ansible.playbook.task import Task
from ansible.playbook.task_include import TaskInclude
from ansible.plugins import loader as plugin_loader
from ansible.template import Templar
from ansible.utils.display import Display
from ansible.utils.fqcn import add_internal_fqcns
from ansible.utils.unsafe_proxy import wrap_var
from ansible.utils.vars import combine_vars, isidentifier
from ansible.vars.clean import strip_internal_keys, module_response_deepcopy
display = Display()
__all__ = ['StrategyBase']
# Entries in this list are matched as an exact name or as a "starts with"
# prefix; regular expressions are not accepted.
ALWAYS_DELEGATE_FACT_PREFIXES = frozenset((
'discovered_interpreter_',
))
class StrategySentinel:
pass
_sentinel = StrategySentinel()
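# Note: ``cleanup()`` puts this sentinel on the final results queue to tell the
# background results thread (``results_thread_main``) to shut down.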
def post_process_whens(result, task, templar, task_vars):
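    """Re-evaluate ``changed_when``/``failed_when`` for a result against
    ``task_vars`` and mutate ``result`` in place.

    Used by ``_process_pending_results`` for result items produced by
    ``add_host``/``add_group`` tasks.
    """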
cond = None
if task.changed_when:
with templar.set_temporary_context(available_variables=task_vars):
cond = Conditional(loader=templar._loader)
cond.when = task.changed_when
result['changed'] = cond.evaluate_conditional(templar, templar.available_variables)
if task.failed_when:
with templar.set_temporary_context(available_variables=task_vars):
if cond is None:
cond = Conditional(loader=templar._loader)
cond.when = task.failed_when
failed_when_result = cond.evaluate_conditional(templar, templar.available_variables)
result['failed_when_result'] = result['failed'] = failed_when_result
def _get_item_vars(result, task):
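    # Collect the per-item loop variables (loop var, index var, item label and
    # the extended ``ansible_loop`` data) from a single loop result item so
    # they can be made available when post-processing the result.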
item_vars = {}
if task.loop or task.loop_with:
loop_var = result.get('ansible_loop_var', 'item')
index_var = result.get('ansible_index_var')
if loop_var in result:
item_vars[loop_var] = result[loop_var]
if index_var and index_var in result:
item_vars[index_var] = result[index_var]
if '_ansible_item_label' in result:
item_vars['_ansible_item_label'] = result['_ansible_item_label']
if 'ansible_loop' in result:
item_vars['ansible_loop'] = result['ansible_loop']
return item_vars
def results_thread_main(strategy):
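    # Entry point of the daemon thread started in ``StrategyBase.__init__``:
    # drain the final queue, dispatching display calls, callbacks and task
    # results until the strategy sentinel is received.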
while True:
try:
result = strategy._final_q.get()
if isinstance(result, StrategySentinel):
break
elif isinstance(result, DisplaySend):
dmethod = getattr(display, result.method)
dmethod(*result.args, **result.kwargs)
elif isinstance(result, CallbackSend):
for arg in result.args:
if isinstance(arg, TaskResult):
strategy.normalize_task_result(arg)
break
strategy._tqm.send_callback(result.method_name, *result.args, **result.kwargs)
elif isinstance(result, TaskResult):
strategy.normalize_task_result(result)
with strategy._results_lock:
strategy._results.append(result)
elif isinstance(result, PromptSend):
try:
value = display.prompt_until(
result.prompt,
private=result.private,
seconds=result.seconds,
complete_input=result.complete_input,
interrupt_input=result.interrupt_input,
)
except AnsibleError as e:
value = e
except BaseException as e:
# relay unexpected errors so bugs in display are reported and don't cause workers to hang
try:
raise AnsibleError(f"{e}") from e
except AnsibleError as e:
value = e
strategy._workers[result.worker_id].worker_queue.put(value)
else:
display.warning('Received an invalid object (%s) in the result queue: %r' % (type(result), result))
except (IOError, EOFError):
break
except queue.Empty:
pass
def debug_closure(func):
"""Closure to wrap ``StrategyBase._process_pending_results`` and invoke the task debugger"""
@functools.wraps(func)
def inner(self, iterator, one_pass=False, max_passes=None):
status_to_stats_map = (
('is_failed', 'failures'),
('is_unreachable', 'dark'),
('is_changed', 'changed'),
('is_skipped', 'skipped'),
)
# We don't know the host yet, copy the previous states, for lookup after we process new results
prev_host_states = iterator.host_states.copy()
results = func(self, iterator, one_pass=one_pass, max_passes=max_passes)
_processed_results = []
for result in results:
task = result._task
host = result._host
_queued_task_args = self._queued_task_cache.pop((host.name, task._uuid), None)
task_vars = _queued_task_args['task_vars']
play_context = _queued_task_args['play_context']
# Try to grab the previous host state, if it doesn't exist use get_host_state to generate an empty state
try:
prev_host_state = prev_host_states[host.name]
except KeyError:
prev_host_state = iterator.get_host_state(host)
while result.needs_debugger(globally_enabled=self.debugger_active):
next_action = NextAction()
dbg = Debugger(task, host, task_vars, play_context, result, next_action)
dbg.cmdloop()
if next_action.result == NextAction.REDO:
# rollback host state
self._tqm.clear_failed_hosts()
if task.run_once and iterator._play.strategy in add_internal_fqcns(('linear',)) and result.is_failed():
for host_name, state in prev_host_states.items():
if host_name == host.name:
continue
iterator.set_state_for_host(host_name, state)
iterator._play._removed_hosts.remove(host_name)
iterator.set_state_for_host(host.name, prev_host_state)
for method, what in status_to_stats_map:
if getattr(result, method)():
self._tqm._stats.decrement(what, host.name)
self._tqm._stats.decrement('ok', host.name)
# redo
self._queue_task(host, task, task_vars, play_context)
_processed_results.extend(debug_closure(func)(self, iterator, one_pass))
break
elif next_action.result == NextAction.CONTINUE:
_processed_results.append(result)
break
elif next_action.result == NextAction.EXIT:
# Matches KeyboardInterrupt from bin/ansible
sys.exit(99)
else:
_processed_results.append(result)
return _processed_results
return inner
class StrategyBase:
'''
This is the base class for strategy plugins, which contains some common
code useful to all strategies like running handlers, cleanup actions, etc.
'''
# by default, strategies should support throttling but we allow individual
# strategies to disable this and either forego supporting it or managing
# the throttling internally (as `free` does)
ALLOW_BASE_THROTTLING = True
def __init__(self, tqm):
self._tqm = tqm
self._inventory = tqm.get_inventory()
self._workers = tqm._workers
self._variable_manager = tqm.get_variable_manager()
self._loader = tqm.get_loader()
self._final_q = tqm._final_q
self._step = context.CLIARGS.get('step', False)
self._diff = context.CLIARGS.get('diff', False)
# the task cache is a dictionary of tuples of (host.name, task._uuid)
# used to find the original task object of in-flight tasks and to store
# the task args/vars and play context info used to queue the task.
self._queued_task_cache = {}
# Backwards compat: self._display isn't really needed, just import the global display and use that.
self._display = display
# internal counters
self._pending_results = 0
self._cur_worker = 0
# this dictionary is used to keep track of hosts that have
# outstanding tasks still in queue
self._blocked_hosts = dict()
self._results = deque()
self._results_lock = threading.Condition(threading.Lock())
self._worker_queues = dict()
# create the result processing thread for reading results in the background
self._results_thread = threading.Thread(target=results_thread_main, args=(self,))
self._results_thread.daemon = True
self._results_thread.start()
# holds the list of active (persistent) connections to be shutdown at
# play completion
self._active_connections = dict()
# Caches for get_host calls, to avoid calling excessively
# These values should be set at the top of the ``run`` method of each
# strategy plugin. Use ``_set_hosts_cache`` to set these values
self._hosts_cache = []
self._hosts_cache_all = []
self.debugger_active = C.ENABLE_TASK_DEBUGGER
def _set_hosts_cache(self, play, refresh=True):
"""Responsible for setting _hosts_cache and _hosts_cache_all
See comment in ``__init__`` for the purpose of these caches
"""
if not refresh and all((self._hosts_cache, self._hosts_cache_all)):
return
if not play.finalized and Templar(None).is_template(play.hosts):
_pattern = 'all'
else:
_pattern = play.hosts or 'all'
self._hosts_cache_all = [h.name for h in self._inventory.get_hosts(pattern=_pattern, ignore_restrictions=True)]
self._hosts_cache = [h.name for h in self._inventory.get_hosts(play.hosts, order=play.order)]
def cleanup(self):
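        # Close any active persistent connections, then stop the background
        # results thread by sending the sentinel through the final queue.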
# close active persistent connections
for sock in self._active_connections.values():
try:
conn = Connection(sock)
conn.reset()
except ConnectionError as e:
# most likely socket is already closed
display.debug("got an error while closing persistent connection: %s" % e)
self._final_q.put(_sentinel)
self._results_thread.join()
def run(self, iterator, play_context, result=0):
# execute one more pass through the iterator without peeking, to
# make sure that all of the hosts are advanced to their final task.
# This should be safe, as everything should be IteratingStates.COMPLETE by
# this point, though the strategy may not advance the hosts itself.
for host in self._hosts_cache:
if host not in self._tqm._unreachable_hosts:
try:
iterator.get_next_task_for_host(self._inventory.hosts[host])
except KeyError:
iterator.get_next_task_for_host(self._inventory.get_host(host))
# return the appropriate code, depending on the status hosts after the run
if not isinstance(result, bool) and result != self._tqm.RUN_OK:
return result
elif len(self._tqm._unreachable_hosts.keys()) > 0:
return self._tqm.RUN_UNREACHABLE_HOSTS
elif len(iterator.get_failed_hosts()) > 0:
return self._tqm.RUN_FAILED_HOSTS
else:
return self._tqm.RUN_OK
def get_hosts_remaining(self, play):
self._set_hosts_cache(play, refresh=False)
ignore = set(self._tqm._failed_hosts).union(self._tqm._unreachable_hosts)
return [host for host in self._hosts_cache if host not in ignore]
def get_failed_hosts(self, play):
self._set_hosts_cache(play, refresh=False)
return [host for host in self._hosts_cache if host in self._tqm._failed_hosts]
def add_tqm_variables(self, vars, play):
'''
Base class method to add extra variables/information to the list of task
vars sent through the executor engine regarding the task queue manager state.
'''
vars['ansible_current_hosts'] = self.get_hosts_remaining(play)
vars['ansible_failed_hosts'] = self.get_failed_hosts(play)
def _queue_task(self, host, task, task_vars, play_context):
''' handles queueing the task up to be sent to a worker '''
display.debug("entering _queue_task() for %s/%s" % (host.name, task.action))
# Add a write lock for tasks.
# Maybe this should be added somewhere further up the call stack but
# this is the earliest in the code where we have task (1) extracted
# into its own variable and (2) there's only a single code path
# leading to the module being run. This is called by two
# functions: linear.py::run(), and
# free.py::run() so we'd have to add to both to do it there.
# The next common higher level is __init__.py::run() and that has
# tasks inside of play_iterator so we'd have to extract them to do it
# there.
if task.action not in action_write_locks.action_write_locks:
display.debug('Creating lock for %s' % task.action)
action_write_locks.action_write_locks[task.action] = Lock()
# create a templar and template things we need later for the queuing process
templar = Templar(loader=self._loader, variables=task_vars)
try:
throttle = int(templar.template(task.throttle))
except Exception as e:
raise AnsibleError("Failed to convert the throttle value to an integer.", obj=task._ds, orig_exc=e)
# and then queue the new task
try:
# Determine the "rewind point" of the worker list. This means we start
# iterating over the list of workers until the end of the list is found.
# Normally, that is simply the length of the workers list (as determined
# by the forks or serial setting), however a task/block/play may "throttle"
# that limit down.
rewind_point = len(self._workers)
if throttle > 0 and self.ALLOW_BASE_THROTTLING:
if task.run_once:
display.debug("Ignoring 'throttle' as 'run_once' is also set for '%s'" % task.get_name())
else:
if throttle <= rewind_point:
display.debug("task: %s, throttle: %d" % (task.get_name(), throttle))
rewind_point = throttle
queued = False
starting_worker = self._cur_worker
while True:
if self._cur_worker >= rewind_point:
self._cur_worker = 0
worker_prc = self._workers[self._cur_worker]
if worker_prc is None or not worker_prc.is_alive():
self._queued_task_cache[(host.name, task._uuid)] = {
'host': host,
'task': task,
'task_vars': task_vars,
'play_context': play_context
}
# Pass WorkerProcess its strategy worker number so it can send an identifier along with intra-task requests
worker_prc = WorkerProcess(
self._final_q, task_vars, host, task, play_context, self._loader, self._variable_manager, plugin_loader, self._cur_worker,
)
self._workers[self._cur_worker] = worker_prc
self._tqm.send_callback('v2_runner_on_start', host, task)
worker_prc.start()
display.debug("worker is %d (out of %d available)" % (self._cur_worker + 1, len(self._workers)))
queued = True
self._cur_worker += 1
if self._cur_worker >= rewind_point:
self._cur_worker = 0
if queued:
break
elif self._cur_worker == starting_worker:
time.sleep(0.0001)
self._pending_results += 1
except (EOFError, IOError, AssertionError) as e:
# most likely an abort
display.debug("got an error while queuing: %s" % e)
return
display.debug("exiting _queue_task() for %s/%s" % (host.name, task.action))
def get_task_hosts(self, iterator, task_host, task):
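        # For ``run_once`` tasks the result applies to every host in the play
        # that is still reachable; otherwise only to the host that ran the task.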
if task.run_once:
host_list = [host for host in self._hosts_cache if host not in self._tqm._unreachable_hosts]
else:
host_list = [task_host.name]
return host_list
def get_delegated_hosts(self, result, task):
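        # Prefer the delegated host name recorded in the task result; fall back
        # to the task's raw ``delegate_to`` value.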
host_name = result.get('_ansible_delegated_vars', {}).get('ansible_delegated_host', None)
return [host_name or task.delegate_to]
def _set_always_delegated_facts(self, result, task):
"""Sets host facts for ``delegate_to`` hosts for facts that should
always be delegated
This operation mutates ``result`` to remove the always delegated facts
See ``ALWAYS_DELEGATE_FACT_PREFIXES``
"""
if task.delegate_to is None:
return
facts = result['ansible_facts']
always_keys = set()
_add = always_keys.add
for fact_key in facts:
for always_key in ALWAYS_DELEGATE_FACT_PREFIXES:
if fact_key.startswith(always_key):
_add(fact_key)
if always_keys:
_pop = facts.pop
always_facts = {
'ansible_facts': dict((k, _pop(k)) for k in list(facts) if k in always_keys)
}
host_list = self.get_delegated_hosts(result, task)
_set_host_facts = self._variable_manager.set_host_facts
for target_host in host_list:
_set_host_facts(target_host, always_facts)
def normalize_task_result(self, task_result):
"""Normalize a TaskResult to reference actual Host and Task objects
when only given the ``Host.name``, or the ``Task._uuid``
Only the ``Host.name`` and ``Task._uuid`` are commonly sent back from
the ``TaskExecutor`` or ``WorkerProcess`` due to performance concerns
Mutates the original object
"""
if isinstance(task_result._host, string_types):
# If the value is a string, it is ``Host.name``
task_result._host = self._inventory.get_host(to_text(task_result._host))
if isinstance(task_result._task, string_types):
# If the value is a string, it is ``Task._uuid``
queue_cache_entry = (task_result._host.name, task_result._task)
try:
found_task = self._queued_task_cache[queue_cache_entry]['task']
except KeyError:
# This should only happen due to an implicit task created by the
# TaskExecutor, restrict this behavior to the explicit use case
# of an implicit async_status task
if task_result._task_fields.get('action') != 'async_status':
raise
original_task = Task()
else:
original_task = found_task.copy(exclude_parent=True, exclude_tasks=True)
original_task._parent = found_task._parent
original_task.from_attrs(task_result._task_fields)
task_result._task = original_task
return task_result
def search_handlers_by_notification(self, notification: str, iterator: PlayIterator) -> t.Generator[Handler, None, None]:
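        """Yield handlers of the play matching the given notification.

        Handlers are matched first by name (iterating in reversed order, as
        the last handler loaded with a given name wins), then by their
        ``listen`` topics, skipping listeners whose name has already been seen.
        """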
templar = Templar(None)
handlers = [h for b in reversed(iterator._play.handlers) for h in b.block]
# iterate in reversed order since last handler loaded with the same name wins
for handler in handlers:
if not handler.name:
continue
if not handler.cached_name:
if templar.is_template(handler.name):
templar.available_variables = self._variable_manager.get_vars(
play=iterator._play,
task=handler,
_hosts=self._hosts_cache,
_hosts_all=self._hosts_cache_all
)
try:
handler.name = templar.template(handler.name)
except (UndefinedError, AnsibleUndefinedVariable) as e:
# We skip this handler due to the fact that it may be using
# a variable in the name that was conditionally included via
# set_fact or some other method, and we don't want to error
# out unnecessarily
if not handler.listen:
display.warning(
"Handler '%s' is unusable because it has no listen topics and "
"the name could not be templated (host-specific variables are "
"not supported in handler names). The error: %s" % (handler.name, to_text(e))
)
continue
handler.cached_name = True
# first we check with the full result of get_name(), which may
# include the role name (if the handler is from a role). If that
# is not found, we resort to the simple name field, which doesn't
# have anything extra added to it.
if notification in {
handler.name,
handler.get_name(include_role_fqcn=False),
handler.get_name(include_role_fqcn=True),
}:
yield handler
break
templar.available_variables = {}
seen = []
for handler in handlers:
if listeners := handler.listen:
if notification in handler.get_validated_value(
'listen',
handler.fattributes.get('listen'),
listeners,
templar,
):
if handler.name and handler.name in seen:
continue
seen.append(handler.name)
yield handler
@debug_closure
def _process_pending_results(self, iterator, one_pass=False, max_passes=None):
'''
Reads results off the final queue and takes appropriate action
based on the result (executing callbacks, updating state, etc.).
'''
ret_results = []
cur_pass = 0
while True:
try:
self._results_lock.acquire()
task_result = self._results.popleft()
except IndexError:
break
finally:
self._results_lock.release()
original_host = task_result._host
original_task = task_result._task
# all host status messages contain 2 entries: (msg, task_result)
role_ran = False
if task_result.is_failed():
role_ran = True
ignore_errors = original_task.ignore_errors
if not ignore_errors:
# save the current state before failing it for later inspection
state_when_failed = iterator.get_state_for_host(original_host.name)
display.debug("marking %s as failed" % original_host.name)
if original_task.run_once:
# if we're using run_once, we have to fail every host here
for h in self._inventory.get_hosts(iterator._play.hosts):
if h.name not in self._tqm._unreachable_hosts:
iterator.mark_host_failed(h)
else:
iterator.mark_host_failed(original_host)
state, dummy = iterator.get_next_task_for_host(original_host, peek=True)
if iterator.is_failed(original_host) and state and state.run_state == IteratingStates.COMPLETE:
self._tqm._failed_hosts[original_host.name] = True
# if we're iterating on the rescue portion of a block then
# we save the failed task in a special var for use
# within the rescue/always
if iterator.is_any_block_rescuing(state_when_failed):
self._tqm._stats.increment('rescued', original_host.name)
iterator._play._removed_hosts.remove(original_host.name)
self._variable_manager.set_nonpersistent_facts(
original_host.name,
dict(
ansible_failed_task=wrap_var(original_task.serialize()),
ansible_failed_result=task_result._result,
),
)
else:
self._tqm._stats.increment('failures', original_host.name)
else:
self._tqm._stats.increment('ok', original_host.name)
self._tqm._stats.increment('ignored', original_host.name)
if 'changed' in task_result._result and task_result._result['changed']:
self._tqm._stats.increment('changed', original_host.name)
self._tqm.send_callback('v2_runner_on_failed', task_result, ignore_errors=ignore_errors)
elif task_result.is_unreachable():
ignore_unreachable = original_task.ignore_unreachable
if not ignore_unreachable:
self._tqm._unreachable_hosts[original_host.name] = True
iterator._play._removed_hosts.append(original_host.name)
self._tqm._stats.increment('dark', original_host.name)
else:
self._tqm._stats.increment('ok', original_host.name)
self._tqm._stats.increment('ignored', original_host.name)
self._tqm.send_callback('v2_runner_on_unreachable', task_result)
elif task_result.is_skipped():
self._tqm._stats.increment('skipped', original_host.name)
self._tqm.send_callback('v2_runner_on_skipped', task_result)
else:
role_ran = True
if original_task.loop:
# this task had a loop, and has more than one result, so
# loop over all of them instead of a single result
result_items = task_result._result.get('results', [])
else:
result_items = [task_result._result]
for result_item in result_items:
if '_ansible_notify' in result_item and task_result.is_changed():
                        # Only ensure that notified handlers exist; if so, save the notifications for when
                        # handlers are actually flushed so the last defined handlers are executed,
                        # otherwise either error or warn depending on the setting.
host_state = iterator.get_state_for_host(original_host.name)
for notification in result_item['_ansible_notify']:
for handler in self.search_handlers_by_notification(notification, iterator):
if host_state.run_state == IteratingStates.HANDLERS:
# we're currently iterating handlers, so we need to expand this now
if handler.notify_host(original_host):
# NOTE even with notifications deduplicated this can still happen in case of handlers being
# notified multiple times using different names, like role name or fqcn
self._tqm.send_callback('v2_playbook_on_notify', handler, original_host)
else:
iterator.add_notification(original_host.name, notification)
display.vv(f"Notification for handler {notification} has been saved.")
break
else:
msg = (
f"The requested handler '{notification}' was not found in either the main handlers"
" list nor in the listening handlers list"
)
if C.ERROR_ON_MISSING_HANDLER:
raise AnsibleError(msg)
else:
display.warning(msg)
if 'add_host' in result_item:
# this task added a new host (add_host module)
new_host_info = result_item.get('add_host', dict())
self._inventory.add_dynamic_host(new_host_info, result_item)
# ensure host is available for subsequent plays
if result_item.get('changed') and new_host_info['host_name'] not in self._hosts_cache_all:
self._hosts_cache_all.append(new_host_info['host_name'])
elif 'add_group' in result_item:
# this task added a new group (group_by module)
self._inventory.add_dynamic_group(original_host, result_item)
if 'add_host' in result_item or 'add_group' in result_item:
item_vars = _get_item_vars(result_item, original_task)
found_task_vars = self._queued_task_cache.get((original_host.name, task_result._task._uuid))['task_vars']
if item_vars:
all_task_vars = combine_vars(found_task_vars, item_vars)
else:
all_task_vars = found_task_vars
all_task_vars[original_task.register] = wrap_var(result_item)
post_process_whens(result_item, original_task, Templar(self._loader), all_task_vars)
if original_task.loop or original_task.loop_with:
new_item_result = TaskResult(
task_result._host,
task_result._task,
result_item,
task_result._task_fields,
)
self._tqm.send_callback('v2_runner_item_on_ok', new_item_result)
if result_item.get('changed', False):
task_result._result['changed'] = True
if result_item.get('failed', False):
task_result._result['failed'] = True
if 'ansible_facts' in result_item and original_task.action not in C._ACTION_DEBUG:
# if delegated fact and we are delegating facts, we need to change target host for them
if original_task.delegate_to is not None and original_task.delegate_facts:
host_list = self.get_delegated_hosts(result_item, original_task)
else:
# Set facts that should always be on the delegated hosts
self._set_always_delegated_facts(result_item, original_task)
host_list = self.get_task_hosts(iterator, original_host, original_task)
if original_task.action in C._ACTION_INCLUDE_VARS:
for (var_name, var_value) in result_item['ansible_facts'].items():
# find the host we're actually referring too here, which may
# be a host that is not really in inventory at all
for target_host in host_list:
self._variable_manager.set_host_variable(target_host, var_name, var_value)
else:
cacheable = result_item.pop('_ansible_facts_cacheable', False)
for target_host in host_list:
# so set_fact is a misnomer but 'cacheable = true' was meant to create an 'actual fact'
# to avoid issues with precedence and confusion with set_fact normal operation,
# we set BOTH fact and nonpersistent_facts (aka hostvar)
# when fact is retrieved from cache in subsequent operations it will have the lower precedence,
# but for playbook setting it the 'higher' precedence is kept
is_set_fact = original_task.action in C._ACTION_SET_FACT
if not is_set_fact or cacheable:
self._variable_manager.set_host_facts(target_host, result_item['ansible_facts'].copy())
if is_set_fact:
self._variable_manager.set_nonpersistent_facts(target_host, result_item['ansible_facts'].copy())
if 'ansible_stats' in result_item and 'data' in result_item['ansible_stats'] and result_item['ansible_stats']['data']:
if 'per_host' not in result_item['ansible_stats'] or result_item['ansible_stats']['per_host']:
host_list = self.get_task_hosts(iterator, original_host, original_task)
else:
host_list = [None]
data = result_item['ansible_stats']['data']
aggregate = 'aggregate' in result_item['ansible_stats'] and result_item['ansible_stats']['aggregate']
for myhost in host_list:
for k in data.keys():
if aggregate:
self._tqm._stats.update_custom_stats(k, data[k], myhost)
else:
self._tqm._stats.set_custom_stats(k, data[k], myhost)
if 'diff' in task_result._result:
if self._diff or getattr(original_task, 'diff', False):
self._tqm.send_callback('v2_on_file_diff', task_result)
if not isinstance(original_task, TaskInclude):
self._tqm._stats.increment('ok', original_host.name)
if 'changed' in task_result._result and task_result._result['changed']:
self._tqm._stats.increment('changed', original_host.name)
# finally, send the ok for this task
self._tqm.send_callback('v2_runner_on_ok', task_result)
# register final results
if original_task.register:
if not isidentifier(original_task.register):
raise AnsibleError("Invalid variable name in 'register' specified: '%s'" % original_task.register)
host_list = self.get_task_hosts(iterator, original_host, original_task)
clean_copy = strip_internal_keys(module_response_deepcopy(task_result._result))
if 'invocation' in clean_copy:
del clean_copy['invocation']
for target_host in host_list:
self._variable_manager.set_nonpersistent_facts(target_host, {original_task.register: clean_copy})
self._pending_results -= 1
if original_host.name in self._blocked_hosts:
del self._blocked_hosts[original_host.name]
# If this is a role task, mark the parent role as being run (if
# the task was ok or failed, but not skipped or unreachable)
if original_task._role is not None and role_ran: # TODO: and original_task.action not in C._ACTION_INCLUDE_ROLE:?
# lookup the role in the role cache to make sure we're dealing
# with the correct object and mark it as executed
role_obj = self._get_cached_role(original_task, iterator._play)
role_obj._had_task_run[original_host.name] = True
ret_results.append(task_result)
if isinstance(original_task, Handler):
for handler in (h for b in iterator._play.handlers for h in b.block if h._uuid == original_task._uuid):
handler.remove_host(original_host)
if one_pass or max_passes is not None and (cur_pass + 1) >= max_passes:
break
cur_pass += 1
return ret_results
def _wait_on_pending_results(self, iterator):
'''
Wait for the shared counter to drop to zero, using a short sleep
between checks to ensure we don't spin lock
'''
ret_results = []
display.debug("waiting for pending results...")
while self._pending_results > 0 and not self._tqm._terminated:
if self._tqm.has_dead_workers():
raise AnsibleError("A worker was found in a dead state")
results = self._process_pending_results(iterator)
ret_results.extend(results)
if self._pending_results > 0:
time.sleep(C.DEFAULT_INTERNAL_POLL_INTERVAL)
display.debug("no more pending results, returning what we have")
return ret_results
def _copy_included_file(self, included_file):
'''
A proven safe and performant way to create a copy of an included file
'''
ti_copy = included_file._task.copy(exclude_parent=True)
ti_copy._parent = included_file._task._parent
temp_vars = ti_copy.vars | included_file._vars
ti_copy.vars = temp_vars
return ti_copy
def _load_included_file(self, included_file, iterator, is_handler=False):
'''
Loads an included YAML file of tasks, applying the optional set of variables.
Raises AnsibleError exception in case of a failure during including a file,
in such case the caller is responsible for marking the host(s) as failed
using PlayIterator.mark_host_failed().
'''
display.debug("loading included file: %s" % included_file._filename)
try:
data = self._loader.load_from_file(included_file._filename)
if data is None:
return []
elif not isinstance(data, list):
raise AnsibleError("included task files must contain a list of tasks")
ti_copy = self._copy_included_file(included_file)
block_list = load_list_of_blocks(
data,
play=iterator._play,
parent_block=ti_copy.build_parent_block(),
role=included_file._task._role,
use_handlers=is_handler,
loader=self._loader,
variable_manager=self._variable_manager,
)
# since we skip incrementing the stats when the task result is
# first processed, we do so now for each host in the list
for host in included_file._hosts:
self._tqm._stats.increment('ok', host.name)
except AnsibleParserError:
raise
except AnsibleError as e:
if isinstance(e, AnsibleFileNotFound):
reason = "Could not find or access '%s' on the Ansible Controller." % to_text(e.file_name)
else:
reason = to_text(e)
for r in included_file._results:
r._result['failed'] = True
for host in included_file._hosts:
tr = TaskResult(host=host, task=included_file._task, return_data=dict(failed=True, reason=reason))
self._tqm._stats.increment('failures', host.name)
self._tqm.send_callback('v2_runner_on_failed', tr)
raise AnsibleError(reason) from e
# finally, send the callback and return the list of blocks loaded
self._tqm.send_callback('v2_playbook_on_include', included_file)
display.debug("done processing included file")
return block_list
def _take_step(self, task, host=None):
ret = False
msg = u'Perform task: %s ' % task
if host:
msg += u'on %s ' % host
msg += u'(N)o/(y)es/(c)ontinue: '
resp = display.prompt(msg)
if resp.lower() in ['y', 'yes']:
display.debug("User ran task")
ret = True
elif resp.lower() in ['c', 'continue']:
display.debug("User ran task and canceled step mode")
self._step = False
ret = True
else:
display.debug("User skipped task")
display.banner(msg)
return ret
def _cond_not_supported_warn(self, task_name):
display.warning("%s task does not support when conditional" % task_name)
def _execute_meta(self, task, play_context, iterator, target_host):
# meta tasks store their args in the _raw_params field of args,
# since they do not use k=v pairs, so get that
meta_action = task.args.get('_raw_params')
def _evaluate_conditional(h):
all_vars = self._variable_manager.get_vars(play=iterator._play, host=h, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
templar = Templar(loader=self._loader, variables=all_vars)
return task.evaluate_conditional(templar, all_vars)
skipped = False
msg = meta_action
skip_reason = '%s conditional evaluated to False' % meta_action
if isinstance(task, Handler):
self._tqm.send_callback('v2_playbook_on_handler_task_start', task)
else:
self._tqm.send_callback('v2_playbook_on_task_start', task, is_conditional=False)
# These don't support "when" conditionals
if meta_action in ('noop', 'refresh_inventory', 'reset_connection') and task.when:
self._cond_not_supported_warn(meta_action)
if meta_action == 'noop':
msg = "noop"
elif meta_action == 'flush_handlers':
if _evaluate_conditional(target_host):
host_state = iterator.get_state_for_host(target_host.name)
# actually notify proper handlers based on all notifications up to this point
for notification in list(host_state.handler_notifications):
for handler in self.search_handlers_by_notification(notification, iterator):
if handler.notify_host(target_host):
# NOTE even with notifications deduplicated this can still happen in case of handlers being
# notified multiple times using different names, like role name or fqcn
self._tqm.send_callback('v2_playbook_on_notify', handler, target_host)
iterator.clear_notification(target_host.name, notification)
if host_state.run_state == IteratingStates.HANDLERS:
raise AnsibleError('flush_handlers cannot be used as a handler')
if target_host.name not in self._tqm._unreachable_hosts:
host_state.pre_flushing_run_state = host_state.run_state
host_state.run_state = IteratingStates.HANDLERS
msg = "triggered running handlers for %s" % target_host.name
else:
skipped = True
skip_reason += ', not running handlers for %s' % target_host.name
elif meta_action == 'refresh_inventory':
self._inventory.refresh_inventory()
self._set_hosts_cache(iterator._play)
msg = "inventory successfully refreshed"
elif meta_action == 'clear_facts':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
hostname = host.get_name()
self._variable_manager.clear_facts(hostname)
msg = "facts cleared"
else:
skipped = True
skip_reason += ', not clearing facts and fact cache for %s' % target_host.name
elif meta_action == 'clear_host_errors':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
self._tqm._failed_hosts.pop(host.name, False)
self._tqm._unreachable_hosts.pop(host.name, False)
iterator.clear_host_errors(host)
msg = "cleared host errors"
else:
skipped = True
skip_reason += ', not clearing host error state for %s' % target_host.name
elif meta_action == 'end_batch':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
if host.name not in self._tqm._unreachable_hosts:
iterator.set_run_state_for_host(host.name, IteratingStates.COMPLETE)
msg = "ending batch"
else:
skipped = True
skip_reason += ', continuing current batch'
elif meta_action == 'end_play':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
if host.name not in self._tqm._unreachable_hosts:
iterator.set_run_state_for_host(host.name, IteratingStates.COMPLETE)
# end_play is used in PlaybookExecutor/TQM to indicate that
# the whole play is supposed to be ended as opposed to just a batch
iterator.end_play = True
msg = "ending play"
else:
skipped = True
skip_reason += ', continuing play'
elif meta_action == 'end_host':
if _evaluate_conditional(target_host):
iterator.set_run_state_for_host(target_host.name, IteratingStates.COMPLETE)
iterator._play._removed_hosts.append(target_host.name)
msg = "ending play for %s" % target_host.name
else:
skipped = True
skip_reason += ", continuing execution for %s" % target_host.name
# TODO: Nix msg here? Left for historical reasons, but skip_reason exists now.
msg = "end_host conditional evaluated to false, continuing execution for %s" % target_host.name
elif meta_action == 'role_complete':
# Allow users to use this in a play as reported in https://github.com/ansible/ansible/issues/22286?
# How would this work with allow_duplicates??
if task.implicit:
role_obj = self._get_cached_role(task, iterator._play)
role_obj._completed[target_host.name] = True
msg = 'role_complete for %s' % target_host.name
elif meta_action == 'reset_connection':
all_vars = self._variable_manager.get_vars(play=iterator._play, host=target_host, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
templar = Templar(loader=self._loader, variables=all_vars)
# apply the given task's information to the connection info,
# which may override some fields already set by the play or
# the options specified on the command line
play_context = play_context.set_task_and_variable_override(task=task, variables=all_vars, templar=templar)
# fields set from the play/task may be based on variables, so we have to
# do the same kind of post validation step on it here before we use it.
play_context.post_validate(templar=templar)
# now that the play context is finalized, if the remote_addr is not set
# default to using the host's address field as the remote address
if not play_context.remote_addr:
play_context.remote_addr = target_host.address
# We also add "magic" variables back into the variables dict to make sure
            # a certain subset of variables exist. This 'mostly' works here because meta
# disregards the loop, but should not really use play_context at all
play_context.update_vars(all_vars)
if target_host in self._active_connections:
connection = Connection(self._active_connections[target_host])
del self._active_connections[target_host]
else:
connection = plugin_loader.connection_loader.get(play_context.connection, play_context, os.devnull)
connection.set_options(task_keys=task.dump_attrs(), var_options=all_vars)
play_context.set_attributes_from_plugin(connection)
if connection:
try:
connection.reset()
msg = 'reset connection'
except ConnectionError as e:
# most likely socket is already closed
display.debug("got an error while closing persistent connection: %s" % e)
else:
msg = 'no connection, nothing to reset'
else:
raise AnsibleError("invalid meta action requested: %s" % meta_action, obj=task._ds)
result = {'msg': msg}
if skipped:
result['skipped'] = True
result['skip_reason'] = skip_reason
else:
result['changed'] = False
if not task.implicit:
header = skip_reason if skipped else msg
display.vv(f"META: {header}")
if isinstance(task, Handler):
task.remove_host(target_host)
res = TaskResult(target_host, task, result)
if skipped:
self._tqm.send_callback('v2_runner_on_skipped', res)
return [res]
def _get_cached_role(self, task, play):
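        # Look up the role in the play's role cache (keyed by the role's
        # on-disk path) so that per-host state such as ``_had_task_run`` and
        # ``_completed`` is recorded on the shared cached role object.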
role_path = task._role.get_role_path()
role_cache = play.role_cache[role_path]
try:
idx = role_cache.index(task._role)
return role_cache[idx]
except ValueError:
raise AnsibleError(f'Cannot locate {task._role.get_name()} in role cache')
def get_hosts_left(self, iterator):
''' returns list of available hosts for this iterator by filtering out unreachables '''
hosts_left = []
for host in self._hosts_cache:
if host not in self._tqm._unreachable_hosts:
try:
hosts_left.append(self._inventory.hosts[host])
except KeyError:
hosts_left.append(self._inventory.get_host(host))
return hosts_left
def update_active_connections(self, results):
''' updates the current active persistent connections '''
for r in results:
if 'args' in r._task_fields:
socket_path = r._task_fields['args'].get('_ansible_socket')
if socket_path:
if r._host not in self._active_connections:
self._active_connections[r._host] = socket_path
class NextAction(object):
""" The next action after an interpreter's exit. """
REDO = 1
CONTINUE = 2
EXIT = 3
def __init__(self, result=EXIT):
self.result = result
class Debugger(cmd.Cmd):
prompt_continuous = '> ' # multiple lines
def __init__(self, task, host, task_vars, play_context, result, next_action):
# cmd.Cmd is old-style class
cmd.Cmd.__init__(self)
self.prompt = '[%s] %s (debug)> ' % (host, task)
self.intro = None
self.scope = {}
self.scope['task'] = task
self.scope['task_vars'] = task_vars
self.scope['host'] = host
self.scope['play_context'] = play_context
self.scope['result'] = result
self.next_action = next_action
def cmdloop(self):
try:
cmd.Cmd.cmdloop(self)
except KeyboardInterrupt:
pass
do_h = cmd.Cmd.do_help
def do_EOF(self, args):
"""Quit"""
return self.do_quit(args)
def do_quit(self, args):
"""Quit"""
display.display('User interrupted execution')
self.next_action.result = NextAction.EXIT
return True
do_q = do_quit
def do_continue(self, args):
"""Continue to next result"""
self.next_action.result = NextAction.CONTINUE
return True
do_c = do_continue
def do_redo(self, args):
"""Schedule task for re-execution. The re-execution may not be the next result"""
self.next_action.result = NextAction.REDO
return True
do_r = do_redo
def do_update_task(self, args):
"""Recreate the task from ``task._ds``, and template with updated ``task_vars``"""
templar = Templar(None, variables=self.scope['task_vars'])
task = self.scope['task']
task = task.load_data(task._ds)
task.post_validate(templar)
self.scope['task'] = task
do_u = do_update_task
def evaluate(self, args):
try:
return eval(args, globals(), self.scope)
except Exception:
t, v = sys.exc_info()[:2]
if isinstance(t, str):
exc_type_name = t
else:
exc_type_name = t.__name__
display.display('***%s:%s' % (exc_type_name, repr(v)))
raise
def do_pprint(self, args):
"""Pretty Print"""
try:
result = self.evaluate(args)
display.display(pprint.pformat(result))
except Exception:
pass
do_p = do_pprint
def execute(self, args):
try:
code = compile(args + '\n', '<stdin>', 'single')
exec(code, globals(), self.scope)
except Exception:
t, v = sys.exc_info()[:2]
if isinstance(t, str):
exc_type_name = t
else:
exc_type_name = t.__name__
display.display('***%s:%s' % (exc_type_name, repr(v)))
raise
def default(self, line):
try:
self.execute(line)
except Exception:
pass
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,486 |
Role dependencies: change in behavior when a role runs conditionally
|
### Summary
Hi,
Since Ansible 8, when multiple roles include the same dependency and the first role does not run because of a `when` condition, the dependency is not included at all.
### Issue Type
Bug Report
### Component Name
role
### Ansible Version
```console
ansible [core 2.15.2]
config file = /home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg
configured module search path = ['/home/courgette/Documents/Bimdata/dev/deployment/library', '/home/courgette/Documents/Bimdata/dev/deployment/kubespray/library']
ansible python module location = /home/courgette/.virtualenvs/deploy/lib/python3.11/site-packages/ansible
ansible collection location = /home/courgette/.ansible/collections:/usr/share/ansible/collections
executable location = /home/courgette/.virtualenvs/deploy/bin/ansible
python version = 3.11.3 (main, Jun 5 2023, 09:32:32) [GCC 13.1.1 20230429] (/home/courgette/.virtualenvs/deploy/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
CONFIG_FILE() = /home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg
DEFAULT_FILTER_PLUGIN_PATH(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = ['/home/courgette/Documents/Bimdata/dev/deployment/ansible_plugins/filter_plugins']
DEFAULT_TIMEOUT(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = 60
DEFAULT_VAULT_PASSWORD_FILE(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = /home/courgette/Documents/Bimdata/dev/deployment/.get-vault-pass.sh
EDITOR(env: EDITOR) = vim
PAGER(env: PAGER) = less
CONNECTION:
==========
paramiko_ssh:
____________
timeout(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = 60
ssh:
___
pipelining(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = True
timeout(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = 60
```
### OS / Environment
Archlinux
### Steps to Reproduce
- Create a file `roles/test_a/tasks/main.yml` with:
```
---
- name: "Debug."
ansible.builtin.debug:
msg: "test_a"
```
- Create a file `roles/test_b/meta/main.yml` with:
```
---
dependencies:
- role: test_a
```
- Create a file `roles/test_b/tasks/main.yml` with:
```
---
- name: "Debug."
ansible.builtin.debug:
msg: "test_b
```
- Duplicate test_b into test_c: `cp -r roles/test_b roles/test_c`
- Modify the debug message of the test_c role: `sed -i 's/test_b/test_c/g' roles/test_c/tasks/main.yml`
- Create the file `test.yml` with:
```
---
- name: Test
hosts: localhost
gather_facts: false
become: false
vars:
skip_b: true
skip_c: false
roles:
- role: test_b
when: not skip_b
- role: test_c
when: not skip_c
```
- Run: `ansible-playbook test.yml`
### Expected Results
I expected test_a to be included by test_c, but I was shocked to find that it was not.
It works fine with Ansible 7. For reference, here is the complete version of Ansible 7 that I use, and the corresponding output.
```
ansible [core 2.14.8]
config file = /home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg
configured module search path = ['/home/courgette/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/courgette/.virtualenvs/deploy/lib/python3.11/site-packages/ansible
ansible collection location = /home/courgette/.ansible/collections:/usr/share/ansible/collections
executable location = /home/courgette/.virtualenvs/deploy/bin/ansible
python version = 3.11.3 (main, Jun 5 2023, 09:32:32) [GCC 13.1.1 20230429] (/home/courgette/.virtualenvs/deploy/bin/python)
jinja version = 3.1.2
libyaml = True
```
```
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Test] *********************************************************************************************************************************************************************************************************
TASK [test_a : Debug.] **********************************************************************************************************************************************************************************************
skipping: [localhost]
TASK [test_b : Debug.] **********************************************************************************************************************************************************************************************
skipping: [localhost]
TASK [test_a : Debug.] **********************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "test_a"
}
TASK [test_c : Debug.] **********************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "test_c"
}
PLAY RECAP **********************************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
```
### Actual Results
```console
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Test] *********************************************************************************************************************************************************************************************************
TASK [test_a : Debug.] **********************************************************************************************************************************************************************************************
skipping: [localhost]
TASK [test_b : Debug.] **********************************************************************************************************************************************************************************************
skipping: [localhost]
TASK [test_c : Debug.] **********************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "test_c"
}
PLAY RECAP **********************************************************************************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81486
|
https://github.com/ansible/ansible/pull/81565
|
3eb96f2c68826718cda83c42cb519c78c0a7a8a8
|
8034651cd2626e0d634b2b52eeafc81852d8110d
| 2023-08-10T17:14:55Z |
python
| 2023-09-08T16:11:48Z |
test/integration/targets/roles/role_complete.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,486 |
Role dependencies: change in behavior when a role runs conditionally
|
### Summary
Hi,
Since Ansible 8, when multiple roles include the same dependency and the first role does not run because of a `when` condition, the dependency is not included at all.
### Issue Type
Bug Report
### Component Name
role
### Ansible Version
```console
ansible [core 2.15.2]
config file = /home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg
configured module search path = ['/home/courgette/Documents/Bimdata/dev/deployment/library', '/home/courgette/Documents/Bimdata/dev/deployment/kubespray/library']
ansible python module location = /home/courgette/.virtualenvs/deploy/lib/python3.11/site-packages/ansible
ansible collection location = /home/courgette/.ansible/collections:/usr/share/ansible/collections
executable location = /home/courgette/.virtualenvs/deploy/bin/ansible
python version = 3.11.3 (main, Jun 5 2023, 09:32:32) [GCC 13.1.1 20230429] (/home/courgette/.virtualenvs/deploy/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
CONFIG_FILE() = /home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg
DEFAULT_FILTER_PLUGIN_PATH(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = ['/home/courgette/Documents/Bimdata/dev/deployment/ansible_plugins/filter_plugins']
DEFAULT_TIMEOUT(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = 60
DEFAULT_VAULT_PASSWORD_FILE(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = /home/courgette/Documents/Bimdata/dev/deployment/.get-vault-pass.sh
EDITOR(env: EDITOR) = vim
PAGER(env: PAGER) = less
CONNECTION:
==========
paramiko_ssh:
____________
timeout(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = 60
ssh:
___
pipelining(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = True
timeout(/home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg) = 60
```
### OS / Environment
Archlinux
### Steps to Reproduce
- Create a file `roles/test_a/tasks/main.yml` with:
```
---
- name: "Debug."
ansible.builtin.debug:
msg: "test_a"
```
- Create a file `roles/test_b/meta/main.yml` with:
```
---
dependencies:
- role: test_a
```
- Create a file `roles/test_b/tasks/main.yml` with:
```
---
- name: "Debug."
ansible.builtin.debug:
msg: "test_b
```
- Duplicate test_b into test_c: `cp -r roles/test_b roles/test_c`
- Modify the debug message of the test_c role: `sed -i 's/test_b/test_c/g' roles/test_c/tasks/main.yml`
- Create the file `test.yml` with:
```
---
- name: Test
hosts: localhost
gather_facts: false
become: false
vars:
skip_b: true
skip_c: false
roles:
- role: test_b
when: not skip_b
- role: test_c
when: not skip_c
```
- Run: `ansible-playbook test.yml`
### Expected Results
I expected test_a to be included by test_c, but I was shocked to find that it was not.
It works fine with Ansible 7. For reference, here is the complete version of Ansible 7 that I use, and the corresponding output.
```
ansible [core 2.14.8]
config file = /home/courgette/Documents/Bimdata/dev/deployment/ansible.cfg
configured module search path = ['/home/courgette/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/courgette/.virtualenvs/deploy/lib/python3.11/site-packages/ansible
ansible collection location = /home/courgette/.ansible/collections:/usr/share/ansible/collections
executable location = /home/courgette/.virtualenvs/deploy/bin/ansible
python version = 3.11.3 (main, Jun 5 2023, 09:32:32) [GCC 13.1.1 20230429] (/home/courgette/.virtualenvs/deploy/bin/python)
jinja version = 3.1.2
libyaml = True
```
```
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Test] *********************************************************************************************************************************************************************************************************
TASK [test_a : Debug.] **********************************************************************************************************************************************************************************************
skipping: [localhost]
TASK [test_b : Debug.] **********************************************************************************************************************************************************************************************
skipping: [localhost]
TASK [test_a : Debug.] **********************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "test_a"
}
TASK [test_c : Debug.] **********************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "test_c"
}
PLAY RECAP **********************************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
```
### Actual Results
```console
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Test] *********************************************************************************************************************************************************************************************************
TASK [test_a : Debug.] **********************************************************************************************************************************************************************************************
skipping: [localhost]
TASK [test_b : Debug.] **********************************************************************************************************************************************************************************************
skipping: [localhost]
TASK [test_c : Debug.] **********************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "test_c"
}
PLAY RECAP **********************************************************************************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81486
|
https://github.com/ansible/ansible/pull/81565
|
3eb96f2c68826718cda83c42cb519c78c0a7a8a8
|
8034651cd2626e0d634b2b52eeafc81852d8110d
| 2023-08-10T17:14:55Z |
python
| 2023-09-08T16:11:48Z |
test/integration/targets/roles/roles/failed_when/tasks/main.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,486 |
Role dependencies: change in behavior when a role runs conditionally
|
|
https://github.com/ansible/ansible/issues/81486
|
https://github.com/ansible/ansible/pull/81565
|
3eb96f2c68826718cda83c42cb519c78c0a7a8a8
|
8034651cd2626e0d634b2b52eeafc81852d8110d
| 2023-08-10T17:14:55Z |
python
| 2023-09-08T16:11:48Z |
test/integration/targets/roles/roles/recover/tasks/main.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,486 |
Role dependencies: change in behavior when a role runs conditionally
|
|
https://github.com/ansible/ansible/issues/81486
|
https://github.com/ansible/ansible/pull/81565
|
3eb96f2c68826718cda83c42cb519c78c0a7a8a8
|
8034651cd2626e0d634b2b52eeafc81852d8110d
| 2023-08-10T17:14:55Z |
python
| 2023-09-08T16:11:48Z |
test/integration/targets/roles/roles/set_var/tasks/main.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,486 |
Role dependencies: change in behavior when a role runs conditionally
|
|
https://github.com/ansible/ansible/issues/81486
|
https://github.com/ansible/ansible/pull/81565
|
3eb96f2c68826718cda83c42cb519c78c0a7a8a8
|
8034651cd2626e0d634b2b52eeafc81852d8110d
| 2023-08-10T17:14:55Z |
python
| 2023-09-08T16:11:48Z |
test/integration/targets/roles/roles/test_connectivity/tasks/main.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,486 |
Role dependencies: change in behavior when a role runs conditionally
|
|
https://github.com/ansible/ansible/issues/81486
|
https://github.com/ansible/ansible/pull/81565
|
3eb96f2c68826718cda83c42cb519c78c0a7a8a8
|
8034651cd2626e0d634b2b52eeafc81852d8110d
| 2023-08-10T17:14:55Z |
python
| 2023-09-08T16:11:48Z |
test/integration/targets/roles/runme.sh
|
#!/usr/bin/env bash
set -eux
# test no dupes when dependencies in b and c point to a in roles:
[ "$(ansible-playbook no_dupes.yml -i ../../inventory --tags inroles "$@" | grep -c '"msg": "A"')" = "1" ]
[ "$(ansible-playbook no_dupes.yml -i ../../inventory --tags acrossroles "$@" | grep -c '"msg": "A"')" = "1" ]
[ "$(ansible-playbook no_dupes.yml -i ../../inventory --tags intasks "$@" | grep -c '"msg": "A"')" = "1" ]
# but still dupe across plays
[ "$(ansible-playbook no_dupes.yml -i ../../inventory "$@" | grep -c '"msg": "A"')" = "3" ]
# include/import can execute another instance of role
[ "$(ansible-playbook allowed_dupes.yml -i ../../inventory --tags importrole "$@" | grep -c '"msg": "A"')" = "2" ]
[ "$(ansible-playbook allowed_dupes.yml -i ../../inventory --tags includerole "$@" | grep -c '"msg": "A"')" = "2" ]
[ "$(ansible-playbook dupe_inheritance.yml -i ../../inventory "$@" | grep -c '"msg": "abc"')" = "3" ]
# ensure role data is merged correctly
ansible-playbook data_integrity.yml -i ../../inventory "$@"
# ensure role fails when trying to load 'non role' in _from
ansible-playbook no_outside.yml -i ../../inventory "$@" > role_outside_output.log 2>&1 || true
if grep "as it is not inside the expected role path" role_outside_output.log >/dev/null; then
echo "Test passed (playbook failed with expected output, output not shown)."
else
echo "Test failed, expected output from playbook failure is missing, output not shown)."
exit 1
fi
# ensure vars scope is correct
ansible-playbook vars_scope.yml -i ../../inventory "$@"
# test nested includes get parent roles greater than a depth of 3
[ "$(ansible-playbook 47023.yml -i ../../inventory "$@" | grep '\<\(Default\|Var\)\>' | grep -c 'is defined')" = "2" ]
# ensure import_role called from include_role has the include_role in the dep chain
ansible-playbook role_dep_chain.yml -i ../../inventory "$@"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,749 |
copy with content and diff enabled prints the wrong modified file
|
### Summary
When using the `copy` module together with the `content` argument, the wrong file name is printed when `ansible-playbook` is run with the `--diff` switch. The file listed as modified is a temporary file in `/tmp`, not the real destination file.
The `+++ after` line in the diff output lists a temporary file.
### Issue Type
Bug Report
### Component Name
copy
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.1]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/ansible/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/ansible-venv/lib64/python3.9/site-packages/ansible
ansible collection location = /home/ansible/.ansible/collections:/usr/share/ansible/collections
executable location = /home/ansible-venv/bin/ansible
python version = 3.9.14 (main, Nov 7 2022, 00:00:00) [GCC 11.3.1 20220421 (Red Hat 11.3.1-2)] (/home/ansible-venv/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
irrelevant
```
### OS / Environment
EL9
### Steps to Reproduce
```yaml
---
- name: Test copy with content
hosts:
- all
gather_facts: false
tasks:
- name: Copy with content shows wrong file modified
ansible.builtin.copy:
dest: /tmp/test
content: aaa
```
### Expected Results
The file name passed to `copy` in the `dest` argument should be listed in the diff header, not a temporary file.
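For comparison, a hedged sketch of what the diff would be expected to look like (same content as the actual output below, but with the real `dest` in the header):
```console
TASK [Copy with content shows wrong file modified] *************************************************************************************************************
--- before
+++ after: /tmp/test
@@ -0,0 +1 @@
+aaa
\ No newline at end of file
```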
### Actual Results
```console
TASK [Copy with content shows wrong file modified] *************************************************************************************************************
--- before
+++ after: /tmp/.ansible.ansible/ansible-local-332829qqzxldi3/tmpz_qauy37
@@ -0,0 +1 @@
+aaa
\ No newline at end of file
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79749
|
https://github.com/ansible/ansible/pull/81678
|
243197f2d466d17082cb5334db7dcb2025f47bc7
|
e4468dc9442a5c1a3a52dda49586cd9ce41b7959
| 2023-01-16T14:22:43Z |
python
| 2023-09-12T15:32:41Z |
changelogs/fragments/copy_diff.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,749 |
copy with content and diff enabled prints the wrong modified file
|
|
https://github.com/ansible/ansible/issues/79749
|
https://github.com/ansible/ansible/pull/81678
|
243197f2d466d17082cb5334db7dcb2025f47bc7
|
e4468dc9442a5c1a3a52dda49586cd9ce41b7959
| 2023-01-16T14:22:43Z |
python
| 2023-09-12T15:32:41Z |
lib/ansible/plugins/action/__init__.py
|
# coding: utf-8
# Copyright: (c) 2012-2014, Michael DeHaan <[email protected]>
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import base64
import json
import os
import random
import re
import shlex
import stat
import tempfile
from abc import ABC, abstractmethod
from collections.abc import Sequence
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleConnectionFailure, AnsibleActionSkip, AnsibleActionFail, AnsibleAuthenticationFailure
from ansible.executor.module_common import modify_module
from ansible.executor.interpreter_discovery import discover_interpreter, InterpreterDiscoveryRequiredError
from ansible.module_utils.common.arg_spec import ArgumentSpecValidator
from ansible.module_utils.errors import UnsupportedError
from ansible.module_utils.json_utils import _filter_non_json_lines
from ansible.module_utils.six import binary_type, string_types, text_type
from ansible.module_utils.common.text.converters import to_bytes, to_native, to_text
from ansible.parsing.utils.jsonify import jsonify
from ansible.release import __version__
from ansible.utils.collection_loader import resource_from_fqcr
from ansible.utils.display import Display
from ansible.utils.unsafe_proxy import wrap_var, AnsibleUnsafeText
from ansible.vars.clean import remove_internal_keys
from ansible.utils.plugin_docs import get_versioned_doclink
display = Display()
def _validate_utf8_json(d):
if isinstance(d, text_type):
# Purposefully not using to_bytes here for performance reasons
d.encode(encoding='utf-8', errors='strict')
elif isinstance(d, dict):
for o in d.items():
_validate_utf8_json(o)
elif isinstance(d, (list, tuple)):
for o in d:
_validate_utf8_json(o)
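# Hedged usage sketch (not part of the original file): on well-formed data
# this walker returns None quietly, e.g. _validate_utf8_json({'msg': 'ok'});
# a text value that cannot be encoded as strict UTF-8 (such as one holding a
# lone surrogate) raises UnicodeEncodeError instead.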
class ActionBase(ABC):
'''
This class is the base class for all action plugins, and defines
code common to all actions. The base class handles the connection
by putting/getting files and executing commands based on the current
action in use.
'''
# A set of valid arguments
_VALID_ARGS = frozenset([]) # type: frozenset[str]
# behavioral attributes
BYPASS_HOST_LOOP = False
TRANSFERS_FILES = False
_requires_connection = True
_supports_check_mode = True
_supports_async = False
def __init__(self, task, connection, play_context, loader, templar, shared_loader_obj):
self._task = task
self._connection = connection
self._play_context = play_context
self._loader = loader
self._templar = templar
self._shared_loader_obj = shared_loader_obj
self._cleanup_remote_tmp = False
# interpreter discovery state
self._discovered_interpreter_key = None
self._discovered_interpreter = False
self._discovery_deprecation_warnings = []
self._discovery_warnings = []
self._used_interpreter = None
# Backwards compat: self._display isn't really needed, just import the global display and use that.
self._display = display
@abstractmethod
def run(self, tmp=None, task_vars=None):
""" Action Plugins should implement this method to perform their
tasks. Everything else in this base class is a helper method for the
action plugin to do that.
:kwarg tmp: Deprecated parameter. This is no longer used. An action plugin that calls
another one and wants to use the same remote tmp for both should set
self._connection._shell.tmpdir rather than this parameter.
:kwarg task_vars: The variables (host vars, group vars, config vars,
etc) associated with this task.
:returns: dictionary of results from the module
Implementors of action modules may find the following variables especially useful:
* Module parameters. These are stored in self._task.args
"""
# does not default to {'changed': False, 'failed': False}, as it breaks async
result = {}
if tmp is not None:
result['warning'] = ['ActionModule.run() no longer honors the tmp parameter. Action'
' plugins should set self._connection._shell.tmpdir to share'
' the tmpdir']
del tmp
if self._task.async_val and not self._supports_async:
raise AnsibleActionFail('async is not supported for this task.')
elif self._task.check_mode and not self._supports_check_mode:
raise AnsibleActionSkip('check mode is not supported for this task.')
elif self._task.async_val and self._task.check_mode:
raise AnsibleActionFail('check mode and async cannot be used on same task.')
# Error if invalid argument is passed
if self._VALID_ARGS:
task_opts = frozenset(self._task.args.keys())
bad_opts = task_opts.difference(self._VALID_ARGS)
if bad_opts:
raise AnsibleActionFail('Invalid options for %s: %s' % (self._task.action, ','.join(list(bad_opts))))
if self._connection._shell.tmpdir is None and self._early_needs_tmp_path():
self._make_tmp_path()
return result
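# Illustrative sketch only (not from this file): a minimal concrete action
# plugin typically chains this base run() into module execution, assuming the
# _execute_module() helper defined later in this class:
#
#   class ActionModule(ActionBase):
#       def run(self, tmp=None, task_vars=None):
#           result = super(ActionModule, self).run(tmp, task_vars)
#           result.update(self._execute_module(task_vars=task_vars))
#           return result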
def validate_argument_spec(self, argument_spec=None,
mutually_exclusive=None,
required_together=None,
required_one_of=None,
required_if=None,
required_by=None,
):
"""Validate an argument spec against the task args
This will return a tuple of (ValidationResult, dict) where the dict
is the validated, coerced, and normalized task args.
Be cautious when passing ``new_module_args`` directly to a
module invocation, as it will contain the defaults, and not only
the args supplied from the task. If you do this, the module
should not define ``mutually_exclusive`` or similar.
This code is roughly copied from the ``validate_argument_spec``
action plugin for use by other action plugins.
"""
new_module_args = self._task.args.copy()
validator = ArgumentSpecValidator(
argument_spec,
mutually_exclusive=mutually_exclusive,
required_together=required_together,
required_one_of=required_one_of,
required_if=required_if,
required_by=required_by,
)
validation_result = validator.validate(new_module_args)
new_module_args.update(validation_result.validated_parameters)
try:
error = validation_result.errors[0]
except IndexError:
error = None
# Fail for validation errors, even in check mode
if error:
msg = validation_result.errors.msg
if isinstance(error, UnsupportedError):
msg = f"Unsupported parameters for ({self._load_name}) module: {msg}"
raise AnsibleActionFail(msg)
return validation_result, new_module_args
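# Hedged usage sketch (the argument_spec below is hypothetical, not from this
# file): an action plugin would typically call this near the top of its run():
#
#   argument_spec = dict(path=dict(type='path', required=True))
#   validation_result, new_module_args = self.validate_argument_spec(argument_spec)
#
# new_module_args then holds the validated, coerced task args plus defaults.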
def cleanup(self, force=False):
"""Method to perform a clean up at the end of an action plugin execution
By default this is designed to clean up the shell tmpdir, and is toggled based on whether
async is in use
Action plugins may override this if they deem necessary, but should still call this method
via super
"""
if force or not self._task.async_val:
self._remove_tmp_path(self._connection._shell.tmpdir)
def get_plugin_option(self, plugin, option, default=None):
"""Helper to get an option from a plugin without having to use
the try/except dance everywhere to set a default
"""
try:
return plugin.get_option(option)
except (AttributeError, KeyError):
return default
def get_become_option(self, option, default=None):
return self.get_plugin_option(self._connection.become, option, default=default)
def get_connection_option(self, option, default=None):
return self.get_plugin_option(self._connection, option, default=default)
def get_shell_option(self, option, default=None):
return self.get_plugin_option(self._connection._shell, option, default=default)
def _remote_file_exists(self, path):
cmd = self._connection._shell.exists(path)
result = self._low_level_execute_command(cmd=cmd, sudoable=True)
if result['rc'] == 0:
return True
return False
def _configure_module(self, module_name, module_args, task_vars):
'''
Handles the loading and templating of the module code through the
modify_module() function.
'''
if self._task.delegate_to:
use_vars = task_vars.get('ansible_delegated_vars')[self._task.delegate_to]
else:
use_vars = task_vars
split_module_name = module_name.split('.')
collection_name = '.'.join(split_module_name[0:2]) if len(split_module_name) > 2 else ''
leaf_module_name = resource_from_fqcr(module_name)
# Search module path(s) for named module.
for mod_type in self._connection.module_implementation_preferences:
# Check to determine if PowerShell modules are supported, and apply
# some fixes (hacks) to module name + args.
if mod_type == '.ps1':
# FIXME: This should be temporary and moved to an exec subsystem plugin where we can define the mapping
# for each subsystem.
win_collection = 'ansible.windows'
rewrite_collection_names = ['ansible.builtin', 'ansible.legacy', '']
# async_status, win_stat, win_file, win_copy, and win_ping are not identical to their
# python counterparts, but they are compatible enough for our
# internal usage
# NB: we only rewrite the module if it's not being called by the user (eg, an action calling something else)
# and if it's unqualified or FQ to a builtin
if leaf_module_name in ('stat', 'file', 'copy', 'ping') and \
collection_name in rewrite_collection_names and self._task.action != module_name:
module_name = '%s.win_%s' % (win_collection, leaf_module_name)
elif leaf_module_name == 'async_status' and collection_name in rewrite_collection_names:
module_name = '%s.%s' % (win_collection, leaf_module_name)
# TODO: move this tweak down to the modules, not extensible here
# Remove extra quotes surrounding path parameters before sending to module.
if leaf_module_name in ['win_stat', 'win_file', 'win_copy', 'slurp'] and module_args and \
hasattr(self._connection._shell, '_unquote'):
for key in ('src', 'dest', 'path'):
if key in module_args:
module_args[key] = self._connection._shell._unquote(module_args[key])
result = self._shared_loader_obj.module_loader.find_plugin_with_context(module_name, mod_type, collection_list=self._task.collections)
if not result.resolved:
if result.redirect_list and len(result.redirect_list) > 1:
# take the last one in the redirect list, we may have successfully jumped through N other redirects
target_module_name = result.redirect_list[-1]
raise AnsibleError("The module {0} was redirected to {1}, which could not be loaded.".format(module_name, target_module_name))
module_path = result.plugin_resolved_path
if module_path:
break
else: # This is a for-else: http://bit.ly/1ElPkyg
raise AnsibleError("The module %s was not found in configured module paths" % (module_name))
# insert shared code and arguments into the module
final_environment = dict()
self._compute_environment_string(final_environment)
become_kwargs = {}
if self._connection.become:
become_kwargs['become'] = True
become_kwargs['become_method'] = self._connection.become.name
become_kwargs['become_user'] = self._connection.become.get_option('become_user',
playcontext=self._play_context)
become_kwargs['become_password'] = self._connection.become.get_option('become_pass',
playcontext=self._play_context)
become_kwargs['become_flags'] = self._connection.become.get_option('become_flags',
playcontext=self._play_context)
# modify_module will exit early if interpreter discovery is required; re-run after if necessary
for dummy in (1, 2):
try:
(module_data, module_style, module_shebang) = modify_module(module_name, module_path, module_args, self._templar,
task_vars=use_vars,
module_compression=C.config.get_config_value('DEFAULT_MODULE_COMPRESSION',
variables=task_vars),
async_timeout=self._task.async_val,
environment=final_environment,
remote_is_local=bool(getattr(self._connection, '_remote_is_local', False)),
**become_kwargs)
break
except InterpreterDiscoveryRequiredError as idre:
self._discovered_interpreter = AnsibleUnsafeText(discover_interpreter(
action=self,
interpreter_name=idre.interpreter_name,
discovery_mode=idre.discovery_mode,
task_vars=use_vars))
# update the local task_vars with the discovered interpreter (which might be None);
# we'll propagate back to the controller in the task result
discovered_key = 'discovered_interpreter_%s' % idre.interpreter_name
# update the local vars copy for the retry
use_vars['ansible_facts'][discovered_key] = self._discovered_interpreter
# TODO: this condition prevents 'wrong host' from being updated
# but in future we would want to be able to update 'delegated host facts'
# irrespective of task settings
if not self._task.delegate_to or self._task.delegate_facts:
# store in local task_vars facts collection for the retry and any other usages in this worker
task_vars['ansible_facts'][discovered_key] = self._discovered_interpreter
# preserve this so _execute_module can propagate back to controller as a fact
self._discovered_interpreter_key = discovered_key
else:
task_vars['ansible_delegated_vars'][self._task.delegate_to]['ansible_facts'][discovered_key] = self._discovered_interpreter
return (module_style, module_shebang, module_data, module_path)
def _compute_environment_string(self, raw_environment_out=None):
'''
Builds the environment string to be used when executing the remote task.
'''
final_environment = dict()
if self._task.environment is not None:
environments = self._task.environment
if not isinstance(environments, list):
environments = [environments]
# The order of environments matters to make sure we merge
# in the parent's values first so those in the block then
# task 'win' in precedence
for environment in environments:
if environment is None or len(environment) == 0:
continue
temp_environment = self._templar.template(environment)
if not isinstance(temp_environment, dict):
raise AnsibleError("environment must be a dictionary, received %s (%s)" % (temp_environment, type(temp_environment)))
# very deliberately using update here instead of combine_vars, as
# these environment settings should not need to merge sub-dicts
final_environment.update(temp_environment)
if len(final_environment) > 0:
final_environment = self._templar.template(final_environment)
if isinstance(raw_environment_out, dict):
raw_environment_out.clear()
raw_environment_out.update(final_environment)
return self._connection._shell.env_prefix(**final_environment)
def _early_needs_tmp_path(self):
'''
Determines if a tmp path should be created before the action is executed.
'''
return getattr(self, 'TRANSFERS_FILES', False)
def _is_pipelining_enabled(self, module_style, wrap_async=False):
'''
Determines if we are required and can do pipelining
'''
try:
is_enabled = self._connection.get_option('pipelining')
except (KeyError, AttributeError, ValueError):
is_enabled = self._play_context.pipelining
# winrm supports async pipeline
# TODO: make other class property 'has_async_pipelining' to separate cases
always_pipeline = self._connection.always_pipeline_modules
# su does not work with pipelining
# TODO: add has_pipelining class prop to become plugins
become_exception = (self._connection.become.name if self._connection.become else '') != 'su'
# all of these conditions must be true for pipelining to be used
conditions = [
self._connection.has_pipelining, # connection class supports it
is_enabled or always_pipeline, # enabled via config or forced via connection (eg winrm)
module_style == "new", # old style modules do not support pipelining
not C.DEFAULT_KEEP_REMOTE_FILES, # user wants remote files
not wrap_async or always_pipeline, # async does not normally support pipelining unless it does (eg winrm)
become_exception,
]
return all(conditions)
def _get_admin_users(self):
'''
Returns a list of admin users that are configured for the current shell
plugin
'''
return self.get_shell_option('admin_users', ['root'])
def _get_remote_addr(self, tvars):
''' consistently get the 'remote_address' for the action plugin '''
remote_addr = tvars.get('delegated_vars', {}).get('ansible_host', tvars.get('ansible_host', tvars.get('inventory_hostname', None)))
for variation in ('remote_addr', 'host'):
try:
remote_addr = self._connection.get_option(variation)
except KeyError:
continue
break
else:
# plugin does not have, fallback to play_context
remote_addr = self._play_context.remote_addr
return remote_addr
def _get_remote_user(self):
''' consistently get the 'remote_user' for the action plugin '''
# TODO: use 'current user running ansible' as fallback when moving away from play_context
# pwd.getpwuid(os.getuid()).pw_name
remote_user = None
try:
remote_user = self._connection.get_option('remote_user')
except KeyError:
# plugin does not have remote_user option, fall back to default and/or play_context
remote_user = getattr(self._connection, 'default_user', None) or self._play_context.remote_user
except AttributeError:
# plugin does not use config system, fallback to old play_context
remote_user = self._play_context.remote_user
return remote_user
def _is_become_unprivileged(self):
'''
The user is not the same as the connection user and is not part of the
shell configured admin users
'''
# if we don't use become then we know we aren't switching to a
# different unprivileged user
if not self._connection.become:
return False
# if we use become and the user is not an admin (or same user) then
# we need to return become_unprivileged as True
admin_users = self._get_admin_users()
remote_user = self._get_remote_user()
become_user = self.get_become_option('become_user')
return bool(become_user and become_user not in admin_users + [remote_user])
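# Worked example (values hypothetical): remote_user='deploy',
# become_user='postgres', admin_users=['root'] -> True (an unprivileged
# switch); with become_user='root' or become_user='deploy' -> False.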
def _make_tmp_path(self, remote_user=None):
'''
Create and return a temporary path on a remote box.
'''
# Network connection plugins (network_cli, netconf, etc.) execute on the controller, rather than the remote host.
# As such, we want to avoid using remote_user for paths as remote_user may not line up with the local user
# This is a hack and should be solved by more intelligent handling of remote_tmp in 2.7
if getattr(self._connection, '_remote_is_local', False):
tmpdir = C.DEFAULT_LOCAL_TMP
else:
# NOTE: shell plugins should populate this setting anyway, but they don't do remote expansion, which
# we need for 'non posix' systems like cloud-init and solaris
tmpdir = self._remote_expand_user(self.get_shell_option('remote_tmp', default='~/.ansible/tmp'), sudoable=False)
become_unprivileged = self._is_become_unprivileged()
basefile = self._connection._shell._generate_temp_dir_name()
cmd = self._connection._shell.mkdtemp(basefile=basefile, system=become_unprivileged, tmpdir=tmpdir)
result = self._low_level_execute_command(cmd, sudoable=False)
# error handling on this seems a little aggressive?
if result['rc'] != 0:
if result['rc'] == 5:
output = 'Authentication failure.'
elif result['rc'] == 255 and self._connection.transport in ('ssh',):
if display.verbosity > 3:
output = u'SSH encountered an unknown error. The output was:\n%s%s' % (result['stdout'], result['stderr'])
else:
output = (u'SSH encountered an unknown error during the connection. '
'We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue')
elif u'No space left on device' in result['stderr']:
output = result['stderr']
else:
output = ('Failed to create temporary directory. '
'In some cases, you may have been able to authenticate and did not have permissions on the target directory. '
'Consider changing the remote tmp path in ansible.cfg to a path rooted in "/tmp", for more error information use -vvv. '
'Failed command was: %s, exited with result %d' % (cmd, result['rc']))
if 'stdout' in result and result['stdout'] != u'':
output = output + u", stdout output: %s" % result['stdout']
if display.verbosity > 3 and 'stderr' in result and result['stderr'] != u'':
output += u", stderr output: %s" % result['stderr']
raise AnsibleConnectionFailure(output)
else:
self._cleanup_remote_tmp = True
try:
stdout_parts = result['stdout'].strip().split('%s=' % basefile, 1)
rc = self._connection._shell.join_path(stdout_parts[-1], u'').splitlines()[-1]
except IndexError:
# stdout was empty or just space, set to / to trigger error in next if
rc = '/'
# Catch failure conditions, files should never be
# written to locations in /.
if rc == '/':
raise AnsibleError('failed to resolve remote temporary directory from %s: `%s` returned empty string' % (basefile, cmd))
self._connection._shell.tmpdir = rc
return rc
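# Illustrative sketch (comment only, not part of the original module): the
# mkdtemp() command is built so the shell echoes the created path behind a
# '<basefile>=' marker. With basefile='ansible-tmp-1700000000-42' and stdout
#   'ansible-tmp-1700000000-42=/home/alice/.ansible/tmp/ansible-tmp-1700000000-42'
# the split above yields the created directory path, which is then stored on
# the shell plugin as its tmpdir.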
def _should_remove_tmp_path(self, tmp_path):
'''Determine if temporary path should be deleted or kept by user request/config'''
return tmp_path and self._cleanup_remote_tmp and not C.DEFAULT_KEEP_REMOTE_FILES and "-tmp-" in tmp_path
def _remove_tmp_path(self, tmp_path, force=False):
'''Remove a temporary path we created. '''
if tmp_path is None and self._connection._shell.tmpdir:
tmp_path = self._connection._shell.tmpdir
if force or self._should_remove_tmp_path(tmp_path):
cmd = self._connection._shell.remove(tmp_path, recurse=True)
# If we have gotten here we have a working connection configuration.
# If the connection breaks we could leave tmp directories out on the remote system.
tmp_rm_res = self._low_level_execute_command(cmd, sudoable=False)
if tmp_rm_res.get('rc', 0) != 0:
display.warning('Error deleting remote temporary files (rc: %s, stderr: %s)'
% (tmp_rm_res.get('rc'), tmp_rm_res.get('stderr', 'No error string available.')))
else:
self._connection._shell.tmpdir = None
def _transfer_file(self, local_path, remote_path):
"""
Copy a file from the controller to a remote path
:arg local_path: Path on controller to transfer
:arg remote_path: Path on the remote system to transfer into
.. warning::
* When you use this function you likely want to use fixup_perms2() on the
remote_path to make sure that the remote file is readable when the user becomes
a non-privileged user.
* If you use fixup_perms2() on the file and copy or move the file into place, you will
need to then remove filesystem acls on the file once it has been copied into place by
the module. See how the copy module implements this for help.
"""
self._connection.put_file(local_path, remote_path)
return remote_path
def _transfer_data(self, remote_path, data):
'''
Copies the module data out to the temporary module path.
'''
if isinstance(data, dict):
data = jsonify(data)
afd, afile = tempfile.mkstemp(dir=C.DEFAULT_LOCAL_TMP)
afo = os.fdopen(afd, 'wb')
try:
data = to_bytes(data, errors='surrogate_or_strict')
afo.write(data)
except Exception as e:
raise AnsibleError("failure writing module data to temporary file for transfer: %s" % to_native(e))
afo.flush()
afo.close()
try:
self._transfer_file(afile, remote_path)
finally:
os.unlink(afile)
return remote_path
def _fixup_perms2(self, remote_paths, remote_user=None, execute=True):
"""
We need the files we upload to be readable (and sometimes executable)
by the user being sudo'd to but we want to limit other people's access
(because the files could contain passwords or other private
information). We achieve this in one of these ways:
* If no sudo is performed or the remote_user is sudo'ing to
themselves, we don't have to change permissions.
* If the remote_user sudo's to a privileged user (for instance, root),
we don't have to change permissions
* If the remote_user sudo's to an unprivileged user then we attempt to
grant the unprivileged user access via file system acls.
* If granting file system acls fails we try to change the owner of the
file with chown which only works in case the remote_user is
privileged or the remote systems allows chown calls by unprivileged
users (e.g. HP-UX)
* If the above fails, we next try 'chmod +a' which is a macOS way of
setting ACLs on files.
* If the above fails, we check if ansible_common_remote_group is set.
If it is, we attempt to chgrp the file to its value. This is useful
if the remote_user has a group in common with the become_user. As the
remote_user, we can chgrp the file to that group and allow the
become_user to read it.
* If (the chown fails AND ansible_common_remote_group is not set) OR
(ansible_common_remote_group is set AND the chgrp (or following chmod)
returned non-zero), we can set the file to be world readable so that
the second unprivileged user can read the file.
Since this could allow other users to get access to private
information we only do this if ansible is configured with
"allow_world_readable_tmpfiles" in the ansible.cfg. Also note that
when ansible_common_remote_group is set this final fallback is very
unlikely to ever be triggered, so long as chgrp was successful. But
just because the chgrp was successful, does not mean Ansible can
necessarily access the files (if, for example, the variable was set
to a group that remote_user is in, and can chgrp to, but does not have
in common with become_user).
"""
if remote_user is None:
remote_user = self._get_remote_user()
# Step 1: Are we on windows?
if getattr(self._connection._shell, "_IS_WINDOWS", False):
# This won't work on Powershell as-is, so we'll just completely
# skip until we have a need for it, at which point we'll have to do
# something different.
return remote_paths
# Step 2: If we're not becoming an unprivileged user, we are roughly
# done. Make the files +x if we're asked to, and return.
if not self._is_become_unprivileged():
if execute:
# Can't depend on the file being transferred with execute permissions.
# Only need user perms because no become was used here
res = self._remote_chmod(remote_paths, 'u+x')
if res['rc'] != 0:
raise AnsibleError(
'Failed to set execute bit on remote files '
'(rc: {0}, err: {1})'.format(
res['rc'],
to_native(res['stderr'])))
return remote_paths
# If we're still here, we have an unprivileged user that's different
# than the ssh user.
become_user = self.get_become_option('become_user')
# Try to use file system acls to make the files readable for sudo'd
# user
if execute:
chmod_mode = 'rx'
setfacl_mode = 'r-x'
# Apple patches their "file_cmds" chmod with ACL support
chmod_acl_mode = '{0} allow read,execute'.format(become_user)
# POSIX-draft ACL specification. Solaris, maybe others.
# See chmod(1) on something Solaris-based for syntax details.
posix_acl_mode = 'A+user:{0}:rx:allow'.format(become_user)
else:
chmod_mode = 'rX'
# TODO: this form fails silently on freebsd. We currently
# never call _fixup_perms2() with execute=False but if we
# start to we'll have to fix this.
setfacl_mode = 'r-X'
# Apple
chmod_acl_mode = '{0} allow read'.format(become_user)
# POSIX-draft
posix_acl_mode = 'A+user:{0}:r:allow'.format(become_user)
# Step 3a: Are we able to use setfacl to add user ACLs to the file?
res = self._remote_set_user_facl(
remote_paths,
become_user,
setfacl_mode)
if res['rc'] == 0:
return remote_paths
# Step 3b: Set execute if we need to. We do this before anything else
# because some of the methods below might work but not let us set +x
# as part of them.
if execute:
res = self._remote_chmod(remote_paths, 'u+x')
if res['rc'] != 0:
raise AnsibleError(
'Failed to set file mode or acl on remote temporary files '
'(rc: {0}, err: {1})'.format(
res['rc'],
to_native(res['stderr'])))
# Step 3c: File system ACLs failed above; try falling back to chown.
res = self._remote_chown(remote_paths, become_user)
if res['rc'] == 0:
return remote_paths
# Check if we are an admin/root user. If we are and got here, it means
# we failed to chown as root and something weird has happened.
if remote_user in self._get_admin_users():
raise AnsibleError(
'Failed to change ownership of the temporary files Ansible '
'(via chmod or setfacl) needs to create despite connecting as a '
'privileged user. Unprivileged become user would be unable to read'
' the file.')
# Step 3d: Try macOS's special chmod + ACL
# macOS chmod's +a flag takes its own argument. As a slight hack, we
# pass that argument as the first element of remote_paths. So we end
# up running `chmod +a [that argument] [file 1] [file 2] ...`
try:
res = self._remote_chmod([chmod_acl_mode] + list(remote_paths), '+a')
except AnsibleAuthenticationFailure as e:
# Solaris-based chmod will return 5 when it sees an invalid mode,
# and +a is invalid there. Because it returns 5, which is the same
# thing sshpass returns on auth failure, our sshpass code will
# assume that auth failed. If we don't handle that case here, none
# of the other logic below will get run. This is fairly hacky and a
# corner case, but probably one that shows up pretty often in
# Solaris-based environments (and possibly others).
pass
else:
if res['rc'] == 0:
return remote_paths
# Step 3e: Try Solaris/OpenSolaris/OpenIndiana-sans-setfacl chmod
# Similar to macOS above, Solaris 11.4 drops setfacl and takes file ACLs
# via chmod instead. OpenSolaris and illumos-based distros allow for
# using either setfacl or chmod, and compatibility depends on filesystem.
# It should be possible to debug this branch by installing OpenIndiana
# (use ZFS) and going unpriv -> unpriv.
res = self._remote_chmod(remote_paths, posix_acl_mode)
if res['rc'] == 0:
return remote_paths
# we'll need this down here
become_link = get_versioned_doclink('playbook_guide/playbooks_privilege_escalation.html')
# Step 3f: Common group
# Otherwise, we're a normal user. We failed to chown the paths to the
# unprivileged user, but if we have a common group with them, we should
# be able to chown it to that.
#
# Note that we have no way of knowing if this will actually work... just
# because chgrp exits successfully does not mean that Ansible will work.
# We could check if the become user is in the group, but this would
# create an extra round trip.
#
# Also note that due to the above, this can prevent the
# world_readable_temp logic below from ever getting called. We
# leave this up to the user to rectify if they have both of these
# features enabled.
group = self.get_shell_option('common_remote_group')
if group is not None:
res = self._remote_chgrp(remote_paths, group)
if res['rc'] == 0:
# warn user that something might go weirdly here.
if self.get_shell_option('world_readable_temp'):
display.warning(
'Both common_remote_group and '
'allow_world_readable_tmpfiles are set. chgrp was '
'successful, but there is no guarantee that Ansible '
'will be able to read the files after this operation, '
'particularly if common_remote_group was set to a '
'group of which the unprivileged become user is not a '
'member. In this situation, '
'allow_world_readable_tmpfiles is a no-op. See this '
'URL for more details: %s'
'#risks-of-becoming-an-unprivileged-user' % become_link)
if execute:
group_mode = 'g+rwx'
else:
group_mode = 'g+rw'
res = self._remote_chmod(remote_paths, group_mode)
if res['rc'] == 0:
return remote_paths
# Step 4: World-readable temp directory
if self.get_shell_option('world_readable_temp'):
# chown and fs acls failed -- do things this insecure way only if
# the user opted in in the config file
display.warning(
'Using world-readable permissions for temporary files Ansible '
'needs to create when becoming an unprivileged user. This may '
'be insecure. For information on securing this, see %s'
'#risks-of-becoming-an-unprivileged-user' % become_link)
res = self._remote_chmod(remote_paths, 'a+%s' % chmod_mode)
if res['rc'] == 0:
return remote_paths
raise AnsibleError(
'Failed to set file mode on remote files '
'(rc: {0}, err: {1})'.format(
res['rc'],
to_native(res['stderr'])))
raise AnsibleError(
'Failed to set permissions on the temporary files Ansible needs '
'to create when becoming an unprivileged user '
'(rc: %s, err: %s). For information on working around this, see %s'
'#risks-of-becoming-an-unprivileged-user' % (
res['rc'],
to_native(res['stderr']), become_link))
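# Illustrative summary (comment only, not part of the original module): the
# fallback ladder in _fixup_perms2() is, in order:
#   1. setfacl for the become_user
#   2. chmod u+x (when execute is needed)
#   3. chown to the become_user
#   4. macOS 'chmod +a' ACLs
#   5. POSIX-draft 'chmod A+user:...' ACLs (Solaris and friends)
#   6. chgrp to ansible_common_remote_group plus g+rw(x)
#   7. world-readable chmod, only if explicitly allowed in configuration
# The first step that succeeds returns; if all fail, an AnsibleError is raised.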
def _remote_chmod(self, paths, mode, sudoable=False):
'''
Issue a remote chmod command
'''
cmd = self._connection._shell.chmod(paths, mode)
res = self._low_level_execute_command(cmd, sudoable=sudoable)
return res
def _remote_chown(self, paths, user, sudoable=False):
'''
Issue a remote chown command
'''
cmd = self._connection._shell.chown(paths, user)
res = self._low_level_execute_command(cmd, sudoable=sudoable)
return res
def _remote_chgrp(self, paths, group, sudoable=False):
'''
Issue a remote chgrp command
'''
cmd = self._connection._shell.chgrp(paths, group)
res = self._low_level_execute_command(cmd, sudoable=sudoable)
return res
def _remote_set_user_facl(self, paths, user, mode, sudoable=False):
'''
Issue a remote call to setfacl
'''
cmd = self._connection._shell.set_user_facl(paths, user, mode)
res = self._low_level_execute_command(cmd, sudoable=sudoable)
return res
def _execute_remote_stat(self, path, all_vars, follow, tmp=None, checksum=True):
'''
Get information from remote file.
'''
if tmp is not None:
display.warning('_execute_remote_stat no longer honors the tmp parameter. Action'
' plugins should set self._connection._shell.tmpdir to share'
' the tmpdir')
del tmp # No longer used
module_args = dict(
path=path,
follow=follow,
get_checksum=checksum,
checksum_algorithm='sha1',
)
mystat = self._execute_module(module_name='ansible.legacy.stat', module_args=module_args, task_vars=all_vars,
wrap_async=False)
if mystat.get('failed'):
msg = mystat.get('module_stderr')
if not msg:
msg = mystat.get('module_stdout')
if not msg:
msg = mystat.get('msg')
raise AnsibleError('Failed to get information on remote file (%s): %s' % (path, msg))
if not mystat['stat']['exists']:
# empty might be matched, 1 should never match, also backwards compatible
mystat['stat']['checksum'] = '1'
# happens sometimes when it is a dir and not on bsd
if 'checksum' not in mystat['stat']:
mystat['stat']['checksum'] = ''
elif not isinstance(mystat['stat']['checksum'], string_types):
raise AnsibleError("Invalid checksum returned by stat: expected a string type but got %s" % type(mystat['stat']['checksum']))
return mystat['stat']
def _remote_expand_user(self, path, sudoable=True, pathsep=None):
''' takes a remote path and performs tilde/$HOME expansion on the remote host '''
# We only expand ~/path and ~username/path
if not path.startswith('~'):
return path
# Per Jborean, we don't have to worry about Windows as we don't have a notion of user's home
# dir there.
split_path = path.split(os.path.sep, 1)
expand_path = split_path[0]
if expand_path == '~':
# Network connection plugins (network_cli, netconf, etc.) execute on the controller, rather than the remote host.
# As such, we want to avoid using remote_user for paths as remote_user may not line up with the local user
# This is a hack and should be solved by more intelligent handling of remote_tmp in 2.7
become_user = self.get_become_option('become_user')
if getattr(self._connection, '_remote_is_local', False):
pass
elif sudoable and self._connection.become and become_user:
expand_path = '~%s' % become_user
else:
# use remote user instead, if none set default to current user
expand_path = '~%s' % (self._get_remote_user() or '')
# use shell to construct appropriate command and execute
cmd = self._connection._shell.expand_user(expand_path)
data = self._low_level_execute_command(cmd, sudoable=False)
try:
initial_fragment = data['stdout'].strip().splitlines()[-1]
except IndexError:
initial_fragment = None
if not initial_fragment:
# Something went wrong trying to expand the path remotely. Try using pwd, if not, return
# the original string
cmd = self._connection._shell.pwd()
pwd = self._low_level_execute_command(cmd, sudoable=False).get('stdout', '').strip()
if pwd:
expanded = pwd
else:
expanded = path
elif len(split_path) > 1:
expanded = self._connection._shell.join_path(initial_fragment, *split_path[1:])
else:
expanded = initial_fragment
if '..' in os.path.dirname(expanded).split('/'):
raise AnsibleError("'%s' returned an invalid relative home directory path containing '..'" % self._get_remote_addr({}))
return expanded
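# Illustrative sketch (comment only, not part of the original module): with
# sudoable=True, become enabled and become_user='deploy', a path of '~/keys'
# is expanded remotely as '~deploy' and re-joined, e.g.
#   '~/keys' -> '/home/deploy/keys'
# Without become it expands against the remote_user's home instead, and paths
# that do not start with '~' are returned unchanged.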
def _strip_success_message(self, data):
'''
Removes the BECOME-SUCCESS message from the data.
'''
if data.strip().startswith('BECOME-SUCCESS-'):
data = re.sub(r'^((\r)?\n)?BECOME-SUCCESS.*(\r)?\n', '', data)
return data
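# Illustrative sketch (comment only, not part of the original module):
#   _strip_success_message(u'BECOME-SUCCESS-abc123\nhello\n')  # -> u'hello\n'
# Data that does not start with the marker is returned untouched.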
def _update_module_args(self, module_name, module_args, task_vars):
# set check mode in the module arguments, if required
if self._task.check_mode:
if not self._supports_check_mode:
raise AnsibleError("check mode is not supported for this operation")
module_args['_ansible_check_mode'] = True
else:
module_args['_ansible_check_mode'] = False
# set no log in the module arguments, if required
no_target_syslog = C.config.get_config_value('DEFAULT_NO_TARGET_SYSLOG', variables=task_vars)
module_args['_ansible_no_log'] = self._task.no_log or no_target_syslog
# set debug in the module arguments, if required
module_args['_ansible_debug'] = C.DEFAULT_DEBUG
# let module know we are in diff mode
module_args['_ansible_diff'] = self._task.diff
# let module know our verbosity
module_args['_ansible_verbosity'] = display.verbosity
# give the module information about the ansible version
module_args['_ansible_version'] = __version__
# give the module information about its name
module_args['_ansible_module_name'] = module_name
# set the syslog facility to be used in the module
module_args['_ansible_syslog_facility'] = task_vars.get('ansible_syslog_facility', C.DEFAULT_SYSLOG_FACILITY)
# let module know about filesystems that selinux treats specially
module_args['_ansible_selinux_special_fs'] = C.DEFAULT_SELINUX_SPECIAL_FS
# what to do when parameter values are converted to strings
module_args['_ansible_string_conversion_action'] = C.STRING_CONVERSION_ACTION
# give the module the socket for persistent connections
module_args['_ansible_socket'] = getattr(self._connection, 'socket_path')
if not module_args['_ansible_socket']:
module_args['_ansible_socket'] = task_vars.get('ansible_socket')
# make sure all commands use the designated shell executable
module_args['_ansible_shell_executable'] = self._play_context.executable
# make sure modules are aware if they need to keep the remote files
module_args['_ansible_keep_remote_files'] = C.DEFAULT_KEEP_REMOTE_FILES
# make sure all commands use the designated temporary directory if created
if self._is_become_unprivileged(): # force fallback on remote_tmp as user cannot normally write to dir
module_args['_ansible_tmpdir'] = None
else:
module_args['_ansible_tmpdir'] = self._connection._shell.tmpdir
# make sure the remote_tmp value is sent through in case modules needs to create their own
module_args['_ansible_remote_tmp'] = self.get_shell_option('remote_tmp', default='~/.ansible/tmp')
def _execute_module(self, module_name=None, module_args=None, tmp=None, task_vars=None, persist_files=False, delete_remote_tmp=None, wrap_async=False):
'''
Transfer and run a module along with its arguments.
'''
if tmp is not None:
display.warning('_execute_module no longer honors the tmp parameter. Action plugins'
' should set self._connection._shell.tmpdir to share the tmpdir')
del tmp # No longer used
if delete_remote_tmp is not None:
display.warning('_execute_module no longer honors the delete_remote_tmp parameter.'
' Action plugins should check self._connection._shell.tmpdir to'
' see if a tmpdir existed before they were called to determine'
' if they are responsible for removing it.')
del delete_remote_tmp # No longer used
tmpdir = self._connection._shell.tmpdir
# We set the module_style to new here so the remote_tmp is created
# before the module args are built if remote_tmp is needed (async).
# If the module_style turns out to not be new and we didn't create the
# remote tmp here, it will still be created. This must be done before
# calling self._update_module_args() so the module wrapper has the
# correct remote_tmp value set
if not self._is_pipelining_enabled("new", wrap_async) and tmpdir is None:
self._make_tmp_path()
tmpdir = self._connection._shell.tmpdir
if task_vars is None:
task_vars = dict()
# if a module name was not specified for this execution, use the action from the task
if module_name is None:
module_name = self._task.action
if module_args is None:
module_args = self._task.args
self._update_module_args(module_name, module_args, task_vars)
remove_async_dir = None
if wrap_async or self._task.async_val:
async_dir = self.get_shell_option('async_dir', default="~/.ansible_async")
remove_async_dir = len(self._task.environment)
self._task.environment.append({"ANSIBLE_ASYNC_DIR": async_dir})
# FUTURE: refactor this along with module build process to better encapsulate "smart wrapper" functionality
(module_style, shebang, module_data, module_path) = self._configure_module(module_name=module_name, module_args=module_args, task_vars=task_vars)
display.vvv("Using module file %s" % module_path)
if not shebang and module_style != 'binary':
raise AnsibleError("module (%s) is missing interpreter line" % module_name)
self._used_interpreter = shebang
remote_module_path = None
if not self._is_pipelining_enabled(module_style, wrap_async):
# we might need remote tmp dir
if tmpdir is None:
self._make_tmp_path()
tmpdir = self._connection._shell.tmpdir
remote_module_filename = self._connection._shell.get_remote_filename(module_path)
remote_module_path = self._connection._shell.join_path(tmpdir, 'AnsiballZ_%s' % remote_module_filename)
args_file_path = None
if module_style in ('old', 'non_native_want_json', 'binary'):
# we'll also need a tmp file to hold our module arguments
args_file_path = self._connection._shell.join_path(tmpdir, 'args')
if remote_module_path or module_style != 'new':
display.debug("transferring module to remote %s" % remote_module_path)
if module_style == 'binary':
self._transfer_file(module_path, remote_module_path)
else:
self._transfer_data(remote_module_path, module_data)
if module_style == 'old':
# we need to dump the module args to a k=v string in a file on
# the remote system, which can be read and parsed by the module
args_data = ""
for k, v in module_args.items():
args_data += '%s=%s ' % (k, shlex.quote(text_type(v)))
self._transfer_data(args_file_path, args_data)
elif module_style in ('non_native_want_json', 'binary'):
self._transfer_data(args_file_path, json.dumps(module_args))
display.debug("done transferring module to remote")
environment_string = self._compute_environment_string()
# remove the ANSIBLE_ASYNC_DIR env entry if we added a temporary one for
# the async_wrapper task.
if remove_async_dir is not None:
del self._task.environment[remove_async_dir]
remote_files = []
if tmpdir and remote_module_path:
remote_files = [tmpdir, remote_module_path]
if args_file_path:
remote_files.append(args_file_path)
sudoable = True
in_data = None
cmd = ""
if wrap_async and not self._connection.always_pipeline_modules:
# configure, upload, and chmod the async_wrapper module
(async_module_style, shebang, async_module_data, async_module_path) = self._configure_module(
module_name='ansible.legacy.async_wrapper', module_args=dict(), task_vars=task_vars)
async_module_remote_filename = self._connection._shell.get_remote_filename(async_module_path)
remote_async_module_path = self._connection._shell.join_path(tmpdir, async_module_remote_filename)
self._transfer_data(remote_async_module_path, async_module_data)
remote_files.append(remote_async_module_path)
async_limit = self._task.async_val
async_jid = f'j{random.randint(0, 999999999999)}'
# call the interpreter for async_wrapper directly
# this permits use of a script for an interpreter on non-Linux platforms
interpreter = shebang.replace('#!', '').strip()
async_cmd = [interpreter, remote_async_module_path, async_jid, async_limit, remote_module_path]
if environment_string:
async_cmd.insert(0, environment_string)
if args_file_path:
async_cmd.append(args_file_path)
else:
# maintain a fixed number of positional parameters for async_wrapper
async_cmd.append('_')
if not self._should_remove_tmp_path(tmpdir):
async_cmd.append("-preserve_tmp")
cmd = " ".join(to_text(x) for x in async_cmd)
else:
if self._is_pipelining_enabled(module_style):
in_data = module_data
display.vvv("Pipelining is enabled.")
else:
cmd = remote_module_path
cmd = self._connection._shell.build_module_command(environment_string, shebang, cmd, arg_path=args_file_path).strip()
# Fix permissions of the tmpdir path and tmpdir files. This should be called after all
# files have been transferred.
if remote_files:
# remove none/empty
remote_files = [x for x in remote_files if x]
self._fixup_perms2(remote_files, self._get_remote_user())
# actually execute
res = self._low_level_execute_command(cmd, sudoable=sudoable, in_data=in_data)
# parse the main result
data = self._parse_returned_data(res)
# NOTE: INTERNAL KEYS ONLY ACCESSIBLE HERE
# get internal info before cleaning
if data.pop("_ansible_suppress_tmpdir_delete", False):
self._cleanup_remote_tmp = False
# NOTE: yum returns results .. but that made it 'compatible' with squashing, so we allow mappings, for now
if 'results' in data and (not isinstance(data['results'], Sequence) or isinstance(data['results'], string_types)):
data['ansible_module_results'] = data['results']
del data['results']
display.warning("Found internal 'results' key in module return, renamed to 'ansible_module_results'.")
# remove internal keys
remove_internal_keys(data)
if wrap_async:
# async_wrapper will clean up its tmpdir on its own so we want the controller side to
# forget about it now
self._connection._shell.tmpdir = None
# FIXME: for backwards compat, figure out if still makes sense
data['changed'] = True
# pre-split stdout/stderr into lines if needed
if 'stdout' in data and 'stdout_lines' not in data:
# if the value is 'False', a default won't catch it.
txt = data.get('stdout', None) or u''
data['stdout_lines'] = txt.splitlines()
if 'stderr' in data and 'stderr_lines' not in data:
# if the value is 'False', a default won't catch it.
txt = data.get('stderr', None) or u''
data['stderr_lines'] = txt.splitlines()
# propagate interpreter discovery results back to the controller
if self._discovered_interpreter_key:
if data.get('ansible_facts') is None:
data['ansible_facts'] = {}
data['ansible_facts'][self._discovered_interpreter_key] = self._discovered_interpreter
if self._discovery_warnings:
if data.get('warnings') is None:
data['warnings'] = []
data['warnings'].extend(self._discovery_warnings)
if self._discovery_deprecation_warnings:
if data.get('deprecations') is None:
data['deprecations'] = []
data['deprecations'].extend(self._discovery_deprecation_warnings)
# mark the entire module results untrusted as a template right here, since the current action could
# possibly template one of these values.
data = wrap_var(data)
display.debug("done with _execute_module (%s, %s)" % (module_name, module_args))
return data
def _parse_returned_data(self, res):
try:
filtered_output, warnings = _filter_non_json_lines(res.get('stdout', u''), objects_only=True)
for w in warnings:
display.warning(w)
data = json.loads(filtered_output)
if C.MODULE_STRICT_UTF8_RESPONSE and not data.pop('_ansible_trusted_utf8', None):
try:
_validate_utf8_json(data)
except UnicodeEncodeError:
# When removing this, also remove the loop and latin-1 from ansible.module_utils.common.text.converters.jsonify
display.deprecated(
f'Module "{self._task.resolved_action or self._task.action}" returned non UTF-8 data in '
'the JSON response. This will become an error in the future',
version='2.18',
)
data['_ansible_parsed'] = True
except ValueError:
# not valid json, lets try to capture error
data = dict(failed=True, _ansible_parsed=False)
data['module_stdout'] = res.get('stdout', u'')
if 'stderr' in res:
data['module_stderr'] = res['stderr']
if res['stderr'].startswith(u'Traceback'):
data['exception'] = res['stderr']
# in some cases a traceback will arrive on stdout instead of stderr, such as when using ssh with -tt
if 'exception' not in data and data['module_stdout'].startswith(u'Traceback'):
data['exception'] = data['module_stdout']
# The default
data['msg'] = "MODULE FAILURE"
# try to figure out if we are missing interpreter
if self._used_interpreter is not None:
interpreter = re.escape(self._used_interpreter.lstrip('!#'))
match = re.compile('%s: (?:No such file or directory|not found)' % interpreter)
if match.search(data['module_stderr']) or match.search(data['module_stdout']):
data['msg'] = "The module failed to execute correctly, you probably need to set the interpreter."
# always append hint
data['msg'] += '\nSee stdout/stderr for the exact error'
if 'rc' in res:
data['rc'] = res['rc']
return data
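# Illustrative sketch (comment only, not part of the original module): when
# the wrapped command prints something that is not JSON, e.g.
#   res = {'stdout': u'', 'stderr': u'Traceback (most recent call last): ...', 'rc': 1}
# the ValueError branch above yields roughly
#   {'failed': True, '_ansible_parsed': False, 'module_stdout': u'',
#    'module_stderr': u'Traceback ...', 'exception': u'Traceback ...',
#    'msg': u'MODULE FAILURE\nSee stdout/stderr for the exact error', 'rc': 1}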
# FIXME: move to connection base
def _low_level_execute_command(self, cmd, sudoable=True, in_data=None, executable=None, encoding_errors='surrogate_then_replace', chdir=None):
'''
This is the function which executes the low level shell command, which
may be commands to create/remove directories for temporary files, or to
run the module code or python directly when pipelining.
:kwarg encoding_errors: If the value returned by the command isn't
utf-8 then we have to figure out how to transform it to unicode.
If the value is just going to be displayed to the user (or
discarded) then the default of 'replace' is fine. If the data is
used as a key or is going to be written back out to a file
verbatim, then this won't work. May have to use some sort of
replacement strategy (python3 could use surrogateescape)
:kwarg chdir: cd into this directory before executing the command.
'''
display.debug("_low_level_execute_command(): starting")
# if not cmd:
# # this can happen with powershell modules when there is no analog to a Windows command (like chmod)
# display.debug("_low_level_execute_command(): no command, exiting")
# return dict(stdout='', stderr='', rc=254)
if chdir:
display.debug("_low_level_execute_command(): changing cwd to %s for this command" % chdir)
cmd = self._connection._shell.append_command('cd %s' % chdir, cmd)
# https://github.com/ansible/ansible/issues/68054
if executable:
self._connection._shell.executable = executable
ruser = self._get_remote_user()
buser = self.get_become_option('become_user')
if (sudoable and self._connection.become and # if sudoable and have become
resource_from_fqcr(self._connection.transport) != 'network_cli' and # if not using network_cli
(C.BECOME_ALLOW_SAME_USER or (buser != ruser or not any((ruser, buser))))): # if we allow same user PE or users are different and either is set
display.debug("_low_level_execute_command(): using become for this command")
cmd = self._connection.become.build_become_command(cmd, self._connection._shell)
if self._connection.allow_executable:
if executable is None:
executable = self._play_context.executable
# mitigation for SSH race which can drop stdout (https://github.com/ansible/ansible/issues/13876)
# only applied for the default executable to avoid interfering with the raw action
cmd = self._connection._shell.append_command(cmd, 'sleep 0')
if executable:
cmd = executable + ' -c ' + shlex.quote(cmd)
display.debug("_low_level_execute_command(): executing: %s" % (cmd,))
# Change directory to basedir of task for command execution when connection is local
if self._connection.transport == 'local':
self._connection.cwd = to_bytes(self._loader.get_basedir(), errors='surrogate_or_strict')
rc, stdout, stderr = self._connection.exec_command(cmd, in_data=in_data, sudoable=sudoable)
# stdout and stderr may be either a file-like or a bytes object.
# Convert either one to a text type
if isinstance(stdout, binary_type):
out = to_text(stdout, errors=encoding_errors)
elif not isinstance(stdout, text_type):
out = to_text(b''.join(stdout.readlines()), errors=encoding_errors)
else:
out = stdout
if isinstance(stderr, binary_type):
err = to_text(stderr, errors=encoding_errors)
elif not isinstance(stderr, text_type):
err = to_text(b''.join(stderr.readlines()), errors=encoding_errors)
else:
err = stderr
if rc is None:
rc = 0
# be sure to remove the BECOME-SUCCESS message now
out = self._strip_success_message(out)
display.debug(u"_low_level_execute_command() done: rc=%d, stdout=%s, stderr=%s" % (rc, out, err))
return dict(rc=rc, stdout=out, stdout_lines=out.splitlines(), stderr=err, stderr_lines=err.splitlines())
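# Illustrative sketch (comment only, not part of the original module): a
# successful call returns a plain dict, e.g. for cmd='echo hi' something like
#   {'rc': 0, 'stdout': u'hi\n', 'stdout_lines': [u'hi'],
#    'stderr': u'', 'stderr_lines': []}
# Any BECOME-SUCCESS marker has already been stripped from stdout by then.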
def _get_diff_data(self, destination, source, task_vars, source_file=True):
# Note: Since we do not diff the source and destination before we transform from bytes into
# text the diff between source and destination may not be accurate. To fix this, we'd need
# to move the diffing from the callback plugins into here.
#
# Example of data which would cause trouble is src_content == b'\xff' and dest_content ==
# b'\xfe'. Neither of those are valid utf-8 so both get turned into the replacement
# character: diff['before'] = u'οΏ½' ; diff['after'] = u'οΏ½' When the callback plugin later
# diffs before and after it shows an empty diff.
diff = {}
display.debug("Going to peek to see if file has changed permissions")
peek_result = self._execute_module(
module_name='ansible.legacy.file', module_args=dict(path=destination, _diff_peek=True),
task_vars=task_vars, persist_files=True)
if peek_result.get('failed', False):
display.warning(u"Failed to get diff between '%s' and '%s': %s" % (os.path.basename(source), destination, to_text(peek_result.get(u'msg', u''))))
return diff
if peek_result.get('rc', 0) == 0:
if peek_result.get('state') in (None, 'absent'):
diff['before'] = u''
elif peek_result.get('appears_binary'):
diff['dst_binary'] = 1
elif peek_result.get('size') and C.MAX_FILE_SIZE_FOR_DIFF > 0 and peek_result['size'] > C.MAX_FILE_SIZE_FOR_DIFF:
diff['dst_larger'] = C.MAX_FILE_SIZE_FOR_DIFF
else:
display.debug(u"Slurping the file %s" % source)
dest_result = self._execute_module(
module_name='ansible.legacy.slurp', module_args=dict(path=destination),
task_vars=task_vars, persist_files=True)
if 'content' in dest_result:
dest_contents = dest_result['content']
if dest_result['encoding'] == u'base64':
dest_contents = base64.b64decode(dest_contents)
else:
raise AnsibleError("unknown encoding in content option, failed: %s" % to_native(dest_result))
diff['before_header'] = destination
diff['before'] = to_text(dest_contents)
if source_file:
st = os.stat(source)
if C.MAX_FILE_SIZE_FOR_DIFF > 0 and st[stat.ST_SIZE] > C.MAX_FILE_SIZE_FOR_DIFF:
diff['src_larger'] = C.MAX_FILE_SIZE_FOR_DIFF
else:
display.debug("Reading local copy of the file %s" % source)
try:
with open(source, 'rb') as src:
src_contents = src.read()
except Exception as e:
raise AnsibleError("Unexpected error while reading source (%s) for diff: %s " % (source, to_native(e)))
if b"\x00" in src_contents:
diff['src_binary'] = 1
else:
diff['after_header'] = source
diff['after'] = to_text(src_contents)
else:
display.debug(u"source of file passed in")
diff['after_header'] = u'dynamically generated'
diff['after'] = source
if self._task.no_log:
if 'before' in diff:
diff["before"] = u""
if 'after' in diff:
diff["after"] = u" [[ Diff output has been hidden because 'no_log: true' was specified for this result ]]\n"
return diff
def _find_needle(self, dirname, needle):
'''
find a needle in haystack of paths, optionally using 'dirname' as a subdir.
This will build the ordered list of paths to search and pass them to dwim
to get back the first existing file found.
'''
# dwim already deals with playbook basedirs
path_stack = self._task.get_search_path()
# if missing it will return a file not found exception
return self._loader.path_dwim_relative_stack(path_stack, dirname, needle)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,749 |
copy with content and diff enabled prints wrong file modified
|
### Summary
When using the `copy` module together with the `content` arg, the wrong file name is printed when `ansible-playbook` is run with the `--diff` switch. The file listed as modified is the temporary file in `/tmp` and not the real file name.
The `+++ after` line in the diff output lists a temporary file.
### Issue Type
Bug Report
### Component Name
copy
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.1]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/ansible/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/ansible-venv/lib64/python3.9/site-packages/ansible
ansible collection location = /home/ansible/.ansible/collections:/usr/share/ansible/collections
executable location = /home/ansible-venv/bin/ansible
python version = 3.9.14 (main, Nov 7 2022, 00:00:00) [GCC 11.3.1 20220421 (Red Hat 11.3.1-2)] (/home/ansible-venv/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed -t all
irrelevant
```
### OS / Environment
EL9
### Steps to Reproduce
```yaml
---
- name: Test copy with content
hosts:
- all
gather_facts: false
tasks:
- name: Copy with content shows wrong file modified
ansible.builtin.copy:
dest: /tmp/test
content: aaa
```
### Expected Results
The file name passed to `copy` in the `dest` arg should be listed in the diff line, not a temporary file.
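For comparison, a correct diff header for the reproducer above should reference the destination file, roughly like this (illustrative output, not captured from a real run):
```console
--- before
+++ after: /tmp/test
@@ -0,0 +1 @@
+aaa
\ No newline at end of file
```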
### Actual Results
```console
TASK [Copy with content shows wrong file modified] *************************************************************************************************************
--- before
+++ after: /tmp/.ansible.ansible/ansible-local-332829qqzxldi3/tmpz_qauy37
@@ -0,0 +1 @@
+aaa
\ No newline at end of file
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79749
|
https://github.com/ansible/ansible/pull/81678
|
243197f2d466d17082cb5334db7dcb2025f47bc7
|
e4468dc9442a5c1a3a52dda49586cd9ce41b7959
| 2023-01-16T14:22:43Z |
python
| 2023-09-12T15:32:41Z |
lib/ansible/plugins/action/copy.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
# (c) 2017 Toshio Kuratomi <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import json
import os
import os.path
import stat
import tempfile
import traceback
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleFileNotFound
from ansible.module_utils.basic import FILE_COMMON_ARGUMENTS
from ansible.module_utils.common.text.converters import to_bytes, to_native, to_text
from ansible.module_utils.parsing.convert_bool import boolean
from ansible.plugins.action import ActionBase
from ansible.utils.hashing import checksum
# Supplement the FILE_COMMON_ARGUMENTS with arguments that are specific to file
REAL_FILE_ARGS = frozenset(FILE_COMMON_ARGUMENTS.keys()).union(
('state', 'path', '_original_basename', 'recurse', 'force',
'_diff_peek', 'src'))
def _create_remote_file_args(module_args):
"""remove keys that are not relevant to file"""
return dict((k, v) for k, v in module_args.items() if k in REAL_FILE_ARGS)
def _create_remote_copy_args(module_args):
"""remove action plugin only keys"""
return dict((k, v) for k, v in module_args.items() if k not in ('content', 'decrypt'))
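# Illustrative sketch (comment only, not part of the original module): given
# task args such as
#   {'dest': '/tmp/test', 'content': 'aaa', 'mode': '0644', 'decrypt': True}
# _create_remote_copy_args() drops the action-plugin-only keys, keeping
#   {'dest': '/tmp/test', 'mode': '0644'}
# while _create_remote_file_args() keeps only keys the file module accepts
# (here just {'mode': '0644'}, since the file module takes 'path', not 'dest').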
def _walk_dirs(topdir, base_path=None, local_follow=False, trailing_slash_detector=None):
"""
Walk a filesystem tree returning enough information to copy the files
:arg topdir: The directory that the filesystem tree is rooted at
:kwarg base_path: The initial directory structure to strip off of the
files for the destination directory. If this is None (the default),
the base_path is set to ``top_dir``.
:kwarg local_follow: Whether to follow symlinks on the source. When set
to False, no symlinks are dereferenced. When set to True (the
default), the code will dereference most symlinks. However, symlinks
can still be present if needed to break a circular link.
:kwarg trailing_slash_detector: Function to determine if a path has
a trailing directory separator. Only needed when dealing with paths on
a remote machine (in which case, pass in a function that is aware of the
directory separator conventions on the remote machine).
:returns: dictionary of tuples. All of the path elements in the structure are text strings.
This separates all the files, directories, and symlinks along with
important information about each::
{ 'files': [('/absolute/path/to/copy/from', 'relative/path/to/copy/to'), ...],
'directories': [('/absolute/path/to/copy/from', 'relative/path/to/copy/to'), ...],
'symlinks': [('/symlink/target/path', 'relative/path/to/copy/to'), ...],
}
The ``symlinks`` field is only populated if ``local_follow`` is set to False
*or* a circular symlink cannot be dereferenced.
"""
# Convert the path segments into byte strings
r_files = {'files': [], 'directories': [], 'symlinks': []}
def _recurse(topdir, rel_offset, parent_dirs, rel_base=u''):
"""
This is a closure (function utilizing variables from its parent
function's scope) so that we only need one copy of all the containers.
Note that this function uses side effects (See the Variables used from
outer scope).
:arg topdir: The directory we are walking for files
:arg rel_offset: Integer defining how many characters to strip off of
the beginning of a path
:arg parent_dirs: Directories that we're copying that this directory is in.
:kwarg rel_base: String to prepend to the path after ``rel_offset`` is
applied to form the relative path.
Variables used from the outer scope
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:r_files: Dictionary of files in the hierarchy. See the return value
for :func:`walk` for the structure of this dictionary.
:local_follow: Read-only inside of :func:`_recurse`. Whether to follow symlinks
"""
for base_path, sub_folders, files in os.walk(topdir):
for filename in files:
filepath = os.path.join(base_path, filename)
dest_filepath = os.path.join(rel_base, filepath[rel_offset:])
if os.path.islink(filepath):
# Dereference the symlink
real_file = os.path.realpath(filepath)
if local_follow and os.path.isfile(real_file):
# Add the file pointed to by the symlink
r_files['files'].append((real_file, dest_filepath))
else:
# Mark this file as a symlink to copy
r_files['symlinks'].append((os.readlink(filepath), dest_filepath))
else:
# Just a normal file
r_files['files'].append((filepath, dest_filepath))
for dirname in sub_folders:
dirpath = os.path.join(base_path, dirname)
dest_dirpath = os.path.join(rel_base, dirpath[rel_offset:])
real_dir = os.path.realpath(dirpath)
dir_stats = os.stat(real_dir)
if os.path.islink(dirpath):
if local_follow:
if (dir_stats.st_dev, dir_stats.st_ino) in parent_dirs:
# Just insert the symlink if the target directory
# exists inside of the copy already
r_files['symlinks'].append((os.readlink(dirpath), dest_dirpath))
else:
# Walk the dirpath to find all parent directories.
new_parents = set()
parent_dir_list = os.path.dirname(dirpath).split(os.path.sep)
for parent in range(len(parent_dir_list), 0, -1):
parent_stat = os.stat(u'/'.join(parent_dir_list[:parent]))
if (parent_stat.st_dev, parent_stat.st_ino) in parent_dirs:
# Reached the point at which the directory
# tree is already known. Don't add any
# more or we might go to an ancestor that
# isn't being copied.
break
new_parents.add((parent_stat.st_dev, parent_stat.st_ino))
if (dir_stats.st_dev, dir_stats.st_ino) in new_parents:
# This was a circular symlink. So add it as
# a symlink
r_files['symlinks'].append((os.readlink(dirpath), dest_dirpath))
else:
# Walk the directory pointed to by the symlink
r_files['directories'].append((real_dir, dest_dirpath))
offset = len(real_dir) + 1
_recurse(real_dir, offset, parent_dirs.union(new_parents), rel_base=dest_dirpath)
else:
# Add the symlink to the destination
r_files['symlinks'].append((os.readlink(dirpath), dest_dirpath))
else:
# Just a normal directory
r_files['directories'].append((dirpath, dest_dirpath))
# Check if the source ends with a "/" so that we know which directory
# level to work at (similar to rsync)
source_trailing_slash = False
if trailing_slash_detector:
source_trailing_slash = trailing_slash_detector(topdir)
else:
source_trailing_slash = topdir.endswith(os.path.sep)
# Calculate the offset needed to strip the base_path to make relative
# paths
if base_path is None:
base_path = topdir
if not source_trailing_slash:
base_path = os.path.dirname(base_path)
if topdir.startswith(base_path):
offset = len(base_path)
# Make sure we're making the new paths relative
if trailing_slash_detector and not trailing_slash_detector(base_path):
offset += 1
elif not base_path.endswith(os.path.sep):
offset += 1
if os.path.islink(topdir) and not local_follow:
r_files['symlinks'] = [(os.readlink(topdir), os.path.basename(topdir))]
return r_files
dir_stats = os.stat(topdir)
parents = frozenset(((dir_stats.st_dev, dir_stats.st_ino),))
# Actually walk the directory hierarchy
_recurse(topdir, offset, parents)
return r_files
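# Illustrative sketch (comment only, not part of the original module): for a
# local tree
#   /src/app/main.py
#   /src/app/conf/        (an empty directory)
# _walk_dirs('/src/app') returns roughly
#   {'files': [('/src/app/main.py', 'app/main.py')],
#    'directories': [('/src/app/conf', 'app/conf')],
#    'symlinks': []}
# Because '/src/app' has no trailing slash, the base directory name is kept in
# the relative destination paths, mirroring rsync's trailing-slash semantics.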
class ActionModule(ActionBase):
TRANSFERS_FILES = True
def _ensure_invocation(self, result):
# NOTE: adding invocation arguments here needs to be kept in sync with
# any no_log specified in the argument_spec in the module.
# This is not automatic.
# NOTE: do not add to this. This should be made a generic function for action plugins.
# This should also use the same argspec as the module instead of keeping it in sync.
if 'invocation' not in result:
if self._play_context.no_log:
result['invocation'] = "CENSORED: no_log is set"
else:
# NOTE: Should be removed in the future. For now keep this broken
# behaviour; see PR 51582
result['invocation'] = self._task.args.copy()
result['invocation']['module_args'] = self._task.args.copy()
if isinstance(result['invocation'], dict):
if 'content' in result['invocation']:
result['invocation']['content'] = 'CENSORED: content is a no_log parameter'
if result['invocation'].get('module_args', {}).get('content') is not None:
result['invocation']['module_args']['content'] = 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
return result
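# Illustrative sketch (comment only, not part of the original module): for a
# task that used the 'content' argument, the returned result carries a
# censored invocation, e.g.
#   result['invocation']['content'] == 'CENSORED: content is a no_log parameter'
#   result['invocation']['module_args']['content'] == 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
# so file contents never leak into logs or callbacks.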
def _copy_file(self, source_full, source_rel, content, content_tempfile,
dest, task_vars, follow):
decrypt = boolean(self._task.args.get('decrypt', True), strict=False)
force = boolean(self._task.args.get('force', 'yes'), strict=False)
raw = boolean(self._task.args.get('raw', 'no'), strict=False)
result = {}
result['diff'] = []
# If the local file does not exist, get_real_file() raises AnsibleFileNotFound
try:
source_full = self._loader.get_real_file(source_full, decrypt=decrypt)
except AnsibleFileNotFound as e:
result['failed'] = True
result['msg'] = "could not find src=%s, %s" % (source_full, to_text(e))
return result
# Get the local mode and set if user wanted it preserved
# https://github.com/ansible/ansible-modules-core/issues/1124
lmode = None
if self._task.args.get('mode', None) == 'preserve':
lmode = '0%03o' % stat.S_IMODE(os.stat(source_full).st_mode)
# This is a kind of optimization - if the user told us the destination is a
# dir, do the path manipulation right away; otherwise we still check
# for dest being a dir via remote call below.
if self._connection._shell.path_has_trailing_slash(dest):
dest_file = self._connection._shell.join_path(dest, source_rel)
else:
dest_file = dest
# Attempt to get remote file info
dest_status = self._execute_remote_stat(dest_file, all_vars=task_vars, follow=follow, checksum=force)
if dest_status['exists'] and dest_status['isdir']:
# The dest is a directory.
if content is not None:
# If source was defined as content remove the temporary file and fail out.
self._remove_tempfile_if_content_defined(content, content_tempfile)
result['failed'] = True
result['msg'] = "can not use content with a dir as dest"
return result
else:
# Append the relative source location to the destination and get remote stats again
dest_file = self._connection._shell.join_path(dest, source_rel)
dest_status = self._execute_remote_stat(dest_file, all_vars=task_vars, follow=follow, checksum=force)
if dest_status['exists'] and not force:
# remote_file exists so continue to next iteration.
return None
# Generate a hash of the local file.
local_checksum = checksum(source_full)
if local_checksum != dest_status['checksum']:
# The checksums don't match and we will change or error out.
if self._play_context.diff and not raw:
result['diff'].append(self._get_diff_data(dest_file, source_full, task_vars))
if self._play_context.check_mode:
self._remove_tempfile_if_content_defined(content, content_tempfile)
result['changed'] = True
return result
# Define a remote directory that we will copy the file to.
tmp_src = self._connection._shell.join_path(self._connection._shell.tmpdir, 'source')
remote_path = None
if not raw:
remote_path = self._transfer_file(source_full, tmp_src)
else:
self._transfer_file(source_full, dest_file)
# We have copied the file remotely and no longer require our content_tempfile
self._remove_tempfile_if_content_defined(content, content_tempfile)
self._loader.cleanup_tmp_file(source_full)
# FIXME: I don't think this is needed when PIPELINING=0 because the source is created
# world readable. Access to the directory itself is controlled via fixup_perms2() as
# part of executing the module. Check that umask with scp/sftp/piped doesn't cause
# a problem before acting on this idea. (This idea would save a round-trip)
# fix file permissions when the copy is done as a different user
if remote_path:
self._fixup_perms2((self._connection._shell.tmpdir, remote_path))
if raw:
# Continue to next iteration if raw is defined.
return None
# Run the copy module
# src and dest here come after original and override them
# we pass dest only to make sure it includes trailing slash in case of recursive copy
new_module_args = _create_remote_copy_args(self._task.args)
new_module_args.update(
dict(
src=tmp_src,
dest=dest,
_original_basename=source_rel,
follow=follow
)
)
if not self._task.args.get('checksum'):
new_module_args['checksum'] = local_checksum
if lmode:
new_module_args['mode'] = lmode
module_return = self._execute_module(module_name='ansible.legacy.copy', module_args=new_module_args, task_vars=task_vars)
else:
# no need to transfer the file, already correct hash, but still need to call
# the file module in case we want to change attributes
self._remove_tempfile_if_content_defined(content, content_tempfile)
self._loader.cleanup_tmp_file(source_full)
if raw:
return None
# Fix for https://github.com/ansible/ansible-modules-core/issues/1568.
# If checksums match, and follow = True, find out if 'dest' is a link. If so,
# change it to point to the source of the link.
if follow:
dest_status_nofollow = self._execute_remote_stat(dest_file, all_vars=task_vars, follow=False)
if dest_status_nofollow['islnk'] and 'lnk_source' in dest_status_nofollow.keys():
dest = dest_status_nofollow['lnk_source']
# Build temporary module_args.
new_module_args = _create_remote_file_args(self._task.args)
new_module_args.update(
dict(
dest=dest,
_original_basename=source_rel,
recurse=False,
state='file',
)
)
# src is sent to the file module in _original_basename, not in src
try:
del new_module_args['src']
except KeyError:
pass
if lmode:
new_module_args['mode'] = lmode
# Execute the file module.
module_return = self._execute_module(module_name='ansible.legacy.file', module_args=new_module_args, task_vars=task_vars)
if not module_return.get('checksum'):
module_return['checksum'] = local_checksum
result.update(module_return)
return result
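# Illustrative summary (comment only, not part of the original module): the
# method above takes one of two paths. If the local and remote checksums
# differ, the source is transferred to the remote tmpdir and the real 'copy'
# module finishes the job; if they match, only the 'file' module runs to fix
# up attributes. Note that in diff mode the diff is collected via
# _get_diff_data(dest_file, source_full, ...), where source_full is the local
# temporary file when 'content' was used -- which matches the temporary path
# shown in this issue's actual output.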
def _create_content_tempfile(self, content):
''' Create a tempfile containing defined content '''
fd, content_tempfile = tempfile.mkstemp(dir=C.DEFAULT_LOCAL_TMP)
f = os.fdopen(fd, 'wb')
content = to_bytes(content)
try:
f.write(content)
except Exception as err:
os.remove(content_tempfile)
raise Exception(err)
finally:
f.close()
return content_tempfile
def _remove_tempfile_if_content_defined(self, content, content_tempfile):
if content is not None:
os.remove(content_tempfile)
def run(self, tmp=None, task_vars=None):
''' handler for file transfer operations '''
if task_vars is None:
task_vars = dict()
result = super(ActionModule, self).run(tmp, task_vars)
del tmp # tmp no longer has any effect
source = self._task.args.get('src', None)
content = self._task.args.get('content', None)
dest = self._task.args.get('dest', None)
remote_src = boolean(self._task.args.get('remote_src', False), strict=False)
local_follow = boolean(self._task.args.get('local_follow', True), strict=False)
result['failed'] = True
if not source and content is None:
result['msg'] = 'src (or content) is required'
elif not dest:
result['msg'] = 'dest is required'
elif source and content is not None:
result['msg'] = 'src and content are mutually exclusive'
elif content is not None and dest is not None and dest.endswith("/"):
result['msg'] = "can not use content with a dir as dest"
else:
del result['failed']
if result.get('failed'):
return self._ensure_invocation(result)
# Define content_tempfile in case we set it after finding content populated.
content_tempfile = None
# If content is defined make a tmp file and write the content into it.
if content is not None:
try:
# If content comes to us as a dict it should be decoded json.
# We need to encode it back into a string to write it out.
if isinstance(content, dict) or isinstance(content, list):
content_tempfile = self._create_content_tempfile(json.dumps(content))
else:
content_tempfile = self._create_content_tempfile(content)
source = content_tempfile
except Exception as err:
result['failed'] = True
result['msg'] = "could not write content temp file: %s" % to_native(err)
return self._ensure_invocation(result)
# if we have first_available_file in our vars
# look up the files and use the first one we find as src
elif remote_src:
result.update(self._execute_module(module_name='ansible.legacy.copy', task_vars=task_vars))
return self._ensure_invocation(result)
else:
# find_needle returns a path that may not have a trailing slash on
# a directory so we need to determine that now (we use it just
# like rsync does to figure out whether to include the directory
# or only the files inside the directory)
trailing_slash = source.endswith(os.path.sep)
try:
# find in expected paths
source = self._find_needle('files', source)
except AnsibleError as e:
result['failed'] = True
result['msg'] = to_text(e)
result['exception'] = traceback.format_exc()
return self._ensure_invocation(result)
if trailing_slash != source.endswith(os.path.sep):
if source[-1] == os.path.sep:
source = source[:-1]
else:
source = source + os.path.sep
# A list of source file tuples (full_path, relative_path) which will try to copy to the destination
source_files = {'files': [], 'directories': [], 'symlinks': []}
# If source is a directory populate our list else source is a file and translate it to a tuple.
if os.path.isdir(to_bytes(source, errors='surrogate_or_strict')):
# Get a list of the files we want to replicate on the remote side
source_files = _walk_dirs(source, local_follow=local_follow,
trailing_slash_detector=self._connection._shell.path_has_trailing_slash)
# If it's a recursive copy, the destination is always a dir,
# explicitly mark it so (note - copy module relies on this).
if not self._connection._shell.path_has_trailing_slash(dest):
dest = self._connection._shell.join_path(dest, '')
# FIXME: Can we optimize cases where there's only one file, no
# symlinks and any number of directories? In the original code,
# empty directories are not copied....
else:
source_files['files'] = [(source, os.path.basename(source))]
changed = False
module_return = dict(changed=False)
# A register for if we executed a module.
# Used to cut down on command calls when not recursive.
module_executed = False
# expand any user home dir specifier
dest = self._remote_expand_user(dest)
implicit_directories = set()
for source_full, source_rel in source_files['files']:
# copy files over. This happens first as directories that have
# a file do not need to be created later
# We only follow symlinks for files in the non-recursive case
if source_files['directories']:
follow = False
else:
follow = boolean(self._task.args.get('follow', False), strict=False)
module_return = self._copy_file(source_full, source_rel, content, content_tempfile, dest, task_vars, follow)
if module_return is None:
continue
if module_return.get('failed'):
result.update(module_return)
return self._ensure_invocation(result)
paths = os.path.split(source_rel)
dir_path = ''
for dir_component in paths:
                dir_path = os.path.join(dir_path, dir_component)
implicit_directories.add(dir_path)
if 'diff' in result and not result['diff']:
del result['diff']
module_executed = True
changed = changed or module_return.get('changed', False)
for src, dest_path in source_files['directories']:
# Find directories that are leaves as they might not have been
# created yet.
if dest_path in implicit_directories:
continue
# Use file module to create these
new_module_args = _create_remote_file_args(self._task.args)
new_module_args['path'] = os.path.join(dest, dest_path)
new_module_args['state'] = 'directory'
new_module_args['mode'] = self._task.args.get('directory_mode', None)
new_module_args['recurse'] = False
del new_module_args['src']
module_return = self._execute_module(module_name='ansible.legacy.file', module_args=new_module_args, task_vars=task_vars)
if module_return.get('failed'):
result.update(module_return)
return self._ensure_invocation(result)
module_executed = True
changed = changed or module_return.get('changed', False)
for target_path, dest_path in source_files['symlinks']:
# Copy symlinks over
new_module_args = _create_remote_file_args(self._task.args)
new_module_args['path'] = os.path.join(dest, dest_path)
new_module_args['src'] = target_path
new_module_args['state'] = 'link'
new_module_args['force'] = True
# Only follow remote symlinks in the non-recursive case
if source_files['directories']:
new_module_args['follow'] = False
# file module cannot deal with 'preserve' mode and is meaningless
# for symlinks anyway, so just don't pass it.
if new_module_args.get('mode', None) == 'preserve':
new_module_args.pop('mode')
module_return = self._execute_module(module_name='ansible.legacy.file', module_args=new_module_args, task_vars=task_vars)
module_executed = True
if module_return.get('failed'):
result.update(module_return)
return self._ensure_invocation(result)
changed = changed or module_return.get('changed', False)
if module_executed and len(source_files['files']) == 1:
result.update(module_return)
# the file module returns the file path as 'path', but
# the copy module uses 'dest', so add it if it's not there
if 'path' in result and 'dest' not in result:
result['dest'] = result['path']
else:
result.update(dict(dest=dest, src=source, changed=changed))
# Delete tmp path
self._remove_tmp_path(self._connection._shell.tmpdir)
return self._ensure_invocation(result)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,749 |
copy with content and diff enabled prints wrong file modified
|
### Summary
When using the `copy` module together with the `content` arg, the wrong file name is printed when `ansible-playbook` is run with the `--diff` switch. The file listed as modified is a temporary file under `/tmp`, not the real destination file.
The `+++ after` line in the diff output lists a temporary file.
### Issue Type
Bug Report
### Component Name
copy
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.1]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/ansible/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/ansible-venv/lib64/python3.9/site-packages/ansible
ansible collection location = /home/ansible/.ansible/collections:/usr/share/ansible/collections
executable location = /home/ansible-venv/bin/ansible
python version = 3.9.14 (main, Nov 7 2022, 00:00:00) [GCC 11.3.1 20220421 (Red Hat 11.3.1-2)] (/home/ansible-venv/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
irrelevant
```
### OS / Environment
EL9
### Steps to Reproduce
```yaml
---
- name: Test copy with content
hosts:
- all
gather_facts: false
tasks:
- name: Copy with content shows wrong file modified
ansible.builtin.copy:
dest: /tmp/test
content: aaa
```
### Expected Results
The file name passed to `copy` in the `dest` arg should be listed in the diff header, not a temporary file.
### Actual Results
```console
TASK [Copy with content shows wrong file modified] *************************************************************************************************************
--- before
+++ after: /tmp/.ansible.ansible/ansible-local-332829qqzxldi3/tmpz_qauy37
@@ -0,0 +1 @@
+aaa
\ No newline at end of file
```
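A minimal sketch of one possible remedy (hypothetical and hedged; the merged change in the linked PR may differ) is to relabel the `after` side of the diff when `content` is used, so it names the destination instead of the local tempfile:

```python
# Hypothetical sketch, not the merged fix: 'diff' mirrors the
# before_header/after_header dict shape that Ansible callbacks consume.
def relabel_content_diff(diff: dict, dest: str, content) -> dict:
    if content is not None and diff.get('after_header'):
        diff['after_header'] = dest  # label the real destination, not the tempfile
    return diff
```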
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79749
|
https://github.com/ansible/ansible/pull/81678
|
243197f2d466d17082cb5334db7dcb2025f47bc7
|
e4468dc9442a5c1a3a52dda49586cd9ce41b7959
| 2023-01-16T14:22:43Z |
python
| 2023-09-12T15:32:41Z |
test/integration/targets/copy/tasks/main.yml
|
- block:
- name: Create a local temporary directory
shell: mktemp -d /tmp/ansible_test.XXXXXXXXX
register: tempfile_result
delegate_to: localhost
- set_fact:
local_temp_dir: '{{ tempfile_result.stdout }}'
remote_dir: '{{ remote_tmp_dir }}/copy'
symlinks:
ansible-test-abs-link: /tmp/ansible-test-abs-link
ansible-test-abs-link-dir: /tmp/ansible-test-abs-link-dir
circles: ../
invalid: invalid
invalid2: ../invalid
out_of_tree_circle: /tmp/ansible-test-link-dir/out_of_tree_circle
subdir3: ../subdir2/subdir3
bar.txt: ../bar.txt
- file: path={{local_temp_dir}} state=directory
name: ensure temp dir exists
# file cannot do this properly, use command instead
- name: Create symbolic link
command: "ln -s '{{ item.value }}' '{{ item.key }}'"
args:
chdir: '{{role_path}}/files/subdir/subdir1'
with_dict: "{{ symlinks }}"
delegate_to: localhost
- name: Create remote unprivileged remote user
user:
name: '{{ remote_unprivileged_user }}'
register: user
- name: Check sudoers dir
stat:
path: /etc/sudoers.d
register: etc_sudoers
- name: Set sudoers.d path fact
set_fact:
sudoers_d_file: "{{ '/etc/sudoers.d' if etc_sudoers.stat.exists else '/usr/local/etc/sudoers.d' }}/{{ remote_unprivileged_user }}"
- name: Create sudoers file
copy:
dest: "{{ sudoers_d_file }}"
content: "{{ remote_unprivileged_user }} ALL=(ALL) NOPASSWD: ALL"
- file:
path: "{{ user.home }}/.ssh"
owner: '{{ remote_unprivileged_user }}'
state: directory
mode: 0700
- name: Duplicate authorized_keys
copy:
src: $HOME/.ssh/authorized_keys
dest: '{{ user.home }}/.ssh/authorized_keys'
owner: '{{ remote_unprivileged_user }}'
mode: 0600
remote_src: yes
- file:
path: "{{ remote_dir }}"
state: directory
remote_user: '{{ remote_unprivileged_user }}'
    # execute test tasks using an unprivileged user; this is useful to avoid
    # local/remote ambiguity when controller and managed hosts are identical.
- import_tasks: tests.yml
remote_user: '{{ remote_unprivileged_user }}'
- import_tasks: acls.yml
when: ansible_system == 'Linux'
- import_tasks: selinux.yml
when: ansible_os_family == 'RedHat' and ansible_selinux.get('mode') == 'enforcing'
- import_tasks: no_log.yml
delegate_to: localhost
- import_tasks: check_mode.yml
# https://github.com/ansible/ansible/issues/57618
- name: Test diff contents
copy:
content: 'Ansible managed\n'
dest: "{{ local_temp_dir }}/file.txt"
diff: yes
register: diff_output
- assert:
that:
- 'diff_output.diff[0].before == ""'
- '"Ansible managed" in diff_output.diff[0].after'
- name: tests with remote_src and non files
import_tasks: src_remote_file_is_not_file.yml
always:
- name: Cleaning
file:
path: '{{ local_temp_dir }}'
state: absent
delegate_to: localhost
- name: Remove symbolic link
file:
path: '{{ role_path }}/files/subdir/subdir1/{{ item.key }}'
state: absent
delegate_to: localhost
with_dict: "{{ symlinks }}"
    - name: Remove unprivileged remote user
user:
name: '{{ remote_unprivileged_user }}'
state: absent
remove: yes
force: yes
- name: Remove sudoers.d file
file:
path: "{{ sudoers_d_file }}"
state: absent
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,574 |
reboot module times out without useful error when SSH connection fails with "permission denied"
|
### Summary
The reboot module waits until it can SSH to the server again, retrying until either it succeeds or the specified timeout expires.
When the SSH connection fails because the user does not have access to the server, it just keeps waiting until the timeout expires and eventually reports a generic "timeout expired" message, which is unhelpful. It would be helpful if the module:
1. Reported the actual error it got on the last attempt before it gave up.
2. Ideally, distinguished permission errors like "Permission denied (publickey)." or "Your account has expired; please contact your system administrator." from the server simply not being up yet, and gave up on permission errors without waiting out the timeout. (Or maybe those errors should have a separate timeout, lower by default?) A hedged sketch of this idea follows.
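A minimal sketch of idea 2 (illustrative only; `PERMISSION_MARKERS` and the helper are invented here, not Ansible API):

```python
# Sketch only: classify the last SSH error text so a retry loop could give
# up early on authentication failures instead of consuming the full timeout.
PERMISSION_MARKERS = ('Permission denied', 'account has expired')

def is_permission_error(error_text: str) -> bool:
    return any(marker in error_text for marker in PERMISSION_MARKERS)
```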
### Issue Type
Bug Report
### Component Name
ansible.builtin.reboot
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.9]
config file = /media/sf_work/lops/ansible/ansible.cfg
configured module search path = ['/home/em/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/em/.direnv/python-3.9.5/lib/python3.9/site-packages/ansible
ansible collection location = /home/em/.ansible/collections:/usr/share/ansible/collections
executable location = /home/em/.direnv/python-3.9.5/bin/ansible
python version = 3.9.5 (default, Nov 23 2021, 15:27:38) [GCC 9.3.0] (/home/em/.direnv/python-3.9.5/bin/python3.9)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /media/sf_work/lops/ansible/ansible.cfg
DEFAULT_BECOME(/media/sf_work/lops/ansible/ansible.cfg) = True
DEFAULT_FILTER_PLUGIN_PATH(/media/sf_work/lops/ansible/ansible.cfg) = ['/media/sf_work/lops/ansible/filter_plugins']
DEFAULT_HASH_BEHAVIOUR(/media/sf_work/lops/ansible/ansible.cfg) = merge
DEFAULT_LOG_PATH(/media/sf_work/lops/ansible/ansible.cfg) = /media/sf_work/lops/ansible/ansible.log
DEFAULT_LOOKUP_PLUGIN_PATH(/media/sf_work/lops/ansible/ansible.cfg) = ['/media/sf_work/lops/ansible/lookup_plugins']
DEFAULT_ROLES_PATH(/media/sf_work/lops/ansible/ansible.cfg) = ['/media/sf_work/lops/ansible/roles']
DEFAULT_STDOUT_CALLBACK(/media/sf_work/lops/ansible/ansible.cfg) = debug
DEFAULT_TRANSPORT(/media/sf_work/lops/ansible/ansible.cfg) = smart
DEFAULT_VAULT_PASSWORD_FILE(/media/sf_work/lops/ansible/ansible.cfg) = /media/sf_work/lops/ansible/tools/vault-keyring.sh
DIFF_ALWAYS(/media/sf_work/lops/ansible/ansible.cfg) = True
RETRY_FILES_ENABLED(/media/sf_work/lops/ansible/ansible.cfg) = False
TRANSFORM_INVALID_GROUP_CHARS(/media/sf_work/lops/ansible/ansible.cfg) = ignore
CONNECTION:
==========
ssh:
___
pipelining(/media/sf_work/lops/ansible/ansible.cfg) = True
```
### OS / Environment
Ubuntu 22.04 target, Linux Mint 20 control machine
### Steps to Reproduce
Run a playbook that somehow disables the user it connects as (in my case I added `AllowGroups` to /etc/ssh/sshd_config and the user it connected as was not a member of that group) and then runs the `reboot` module.
### Expected Results
The target machine reboots and the `reboot` module then fails with an error like "Connection failed due to SSH error: Permission denied (publickey)." - preferably without waiting 10 minutes (the default timeout).
### Actual Results
The target machine reboots as expected, but the Ansible play appears to hang in the reboot module.
With ANSIBLE_DEBUG=1 I can see the real problem:
```console
148477 1692889630.14999: reboot: last boot time check fail 'Failed to connect to the host via ssh: ssh: connect to host MYHOST port 22: Connection refused', retrying in 12.3 seconds...
148477 1692889643.07223: reboot: last boot time check fail 'Failed to connect to the host via ssh: ubuntu@MYHOST: Permission denied (publickey).', retrying in 12.84 seconds...
... then more of the same until the timeout expires
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81574
|
https://github.com/ansible/ansible/pull/81578
|
81c83c623cb78ca32d1a6ab7ff8a3e67bd62cc54
|
2793dfa594765d402f61d80128e916e0300a38fc
| 2023-08-24T15:55:11Z |
python
| 2023-09-15T17:50:26Z |
changelogs/fragments/reboot.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,574 |
reboot module times out without useful error when SSH connection fails with "permission denied"
|
### Summary
The reboot module waits until it can SSH to the server again, retrying until either it succeeds or the specified timeout expires.
When the SSH connection fails because the user does not have access to the server, it just keeps waiting until the timeout expires and eventually reports a generic "timeout expired" message, which is unhelpful. It would be helpful if the module:
1. Reported the actual error it got on the last attempt before it gave up.
2. Ideally, distinguished permission errors like "Permission denied (publickey)." or "Your account has expired; please contact your system administrator." from the server simply not being up yet, and gave up on permission errors without waiting out the timeout. (Or maybe those errors should have a separate timeout, lower by default?)
### Issue Type
Bug Report
### Component Name
ansible.builtin.reboot
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.9]
config file = /media/sf_work/lops/ansible/ansible.cfg
configured module search path = ['/home/em/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/em/.direnv/python-3.9.5/lib/python3.9/site-packages/ansible
ansible collection location = /home/em/.ansible/collections:/usr/share/ansible/collections
executable location = /home/em/.direnv/python-3.9.5/bin/ansible
python version = 3.9.5 (default, Nov 23 2021, 15:27:38) [GCC 9.3.0] (/home/em/.direnv/python-3.9.5/bin/python3.9)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /media/sf_work/lops/ansible/ansible.cfg
DEFAULT_BECOME(/media/sf_work/lops/ansible/ansible.cfg) = True
DEFAULT_FILTER_PLUGIN_PATH(/media/sf_work/lops/ansible/ansible.cfg) = ['/media/sf_work/lops/ansible/filter_plugins']
DEFAULT_HASH_BEHAVIOUR(/media/sf_work/lops/ansible/ansible.cfg) = merge
DEFAULT_LOG_PATH(/media/sf_work/lops/ansible/ansible.cfg) = /media/sf_work/lops/ansible/ansible.log
DEFAULT_LOOKUP_PLUGIN_PATH(/media/sf_work/lops/ansible/ansible.cfg) = ['/media/sf_work/lops/ansible/lookup_plugins']
DEFAULT_ROLES_PATH(/media/sf_work/lops/ansible/ansible.cfg) = ['/media/sf_work/lops/ansible/roles']
DEFAULT_STDOUT_CALLBACK(/media/sf_work/lops/ansible/ansible.cfg) = debug
DEFAULT_TRANSPORT(/media/sf_work/lops/ansible/ansible.cfg) = smart
DEFAULT_VAULT_PASSWORD_FILE(/media/sf_work/lops/ansible/ansible.cfg) = /media/sf_work/lops/ansible/tools/vault-keyring.sh
DIFF_ALWAYS(/media/sf_work/lops/ansible/ansible.cfg) = True
RETRY_FILES_ENABLED(/media/sf_work/lops/ansible/ansible.cfg) = False
TRANSFORM_INVALID_GROUP_CHARS(/media/sf_work/lops/ansible/ansible.cfg) = ignore
CONNECTION:
==========
ssh:
___
pipelining(/media/sf_work/lops/ansible/ansible.cfg) = True
```
### OS / Environment
Ubuntu 22.04 target, Linux Mint 20 control machine
### Steps to Reproduce
Run a playbook that somehow disables the user it connects as (in my case I added `AllowGroups` to /etc/ssh/sshd_config and the user it connected as was not a member of that group) and then runs the `reboot` module.
### Expected Results
The target machine reboots and the `reboot` module then fails with an error like "Connection failed due to SSH error: Permission denied (publickey)." - preferably without waiting 10 minutes (the default timeout).
### Actual Results
The target machine reboots as expected, but the Ansible play appears to hang in the reboot module.
With ANSIBLE_DEBUG=1 I can see the real problem:
```console
148477 1692889630.14999: reboot: last boot time check fail 'Failed to connect to the host via ssh: ssh: connect to host MYHOST port 22: Connection refused', retrying in 12.3 seconds...
148477 1692889643.07223: reboot: last boot time check fail 'Failed to connect to the host via ssh: ubuntu@MYHOST: Permission denied (publickey).', retrying in 12.84 seconds...
... then more of the same until the timeout expires
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81574
|
https://github.com/ansible/ansible/pull/81578
|
81c83c623cb78ca32d1a6ab7ff8a3e67bd62cc54
|
2793dfa594765d402f61d80128e916e0300a38fc
| 2023-08-24T15:55:11Z |
python
| 2023-09-15T17:50:26Z |
lib/ansible/plugins/action/reboot.py
|
# Copyright: (c) 2016-2018, Matt Davis <[email protected]>
# Copyright: (c) 2018, Sam Doran <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import random
import time
from datetime import datetime, timedelta, timezone
from ansible.errors import AnsibleError, AnsibleConnectionFailure
from ansible.module_utils.common.text.converters import to_native, to_text
from ansible.module_utils.common.validation import check_type_list, check_type_str
from ansible.plugins.action import ActionBase
from ansible.utils.display import Display
display = Display()
class TimedOutException(Exception):
pass
class ActionModule(ActionBase):
TRANSFERS_FILES = False
_VALID_ARGS = frozenset((
'boot_time_command',
'connect_timeout',
'msg',
'post_reboot_delay',
'pre_reboot_delay',
'reboot_command',
'reboot_timeout',
'search_paths',
'test_command',
))
DEFAULT_REBOOT_TIMEOUT = 600
DEFAULT_CONNECT_TIMEOUT = None
DEFAULT_PRE_REBOOT_DELAY = 0
DEFAULT_POST_REBOOT_DELAY = 0
DEFAULT_TEST_COMMAND = 'whoami'
DEFAULT_BOOT_TIME_COMMAND = 'cat /proc/sys/kernel/random/boot_id'
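    # boot_id is a UUID regenerated on every boot, so a changed value is a
    # reliable sign the reboot actually happened (descriptive note, added).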
DEFAULT_REBOOT_MESSAGE = 'Reboot initiated by Ansible'
DEFAULT_SHUTDOWN_COMMAND = 'shutdown'
DEFAULT_SHUTDOWN_COMMAND_ARGS = '-r {delay_min} "{message}"'
DEFAULT_SUDOABLE = True
DEPRECATED_ARGS = {} # type: dict[str, str]
BOOT_TIME_COMMANDS = {
'freebsd': '/sbin/sysctl kern.boottime',
'openbsd': '/sbin/sysctl kern.boottime',
'macosx': 'who -b',
'solaris': 'who -b',
'sunos': 'who -b',
'vmkernel': 'grep booted /var/log/vmksummary.log | tail -n 1',
'aix': 'who -b',
}
SHUTDOWN_COMMANDS = {
'alpine': 'reboot',
'vmkernel': 'reboot',
}
SHUTDOWN_COMMAND_ARGS = {
'alpine': '',
'void': '-r +{delay_min} "{message}"',
'freebsd': '-r +{delay_sec}s "{message}"',
'linux': DEFAULT_SHUTDOWN_COMMAND_ARGS,
'macosx': '-r +{delay_min} "{message}"',
'openbsd': '-r +{delay_min} "{message}"',
'solaris': '-y -g {delay_sec} -i 6 "{message}"',
'sunos': '-y -g {delay_sec} -i 6 "{message}"',
'vmkernel': '-d {delay_sec}',
'aix': '-Fr',
}
TEST_COMMANDS = {
'solaris': 'who',
'vmkernel': 'who',
}
def __init__(self, *args, **kwargs):
super(ActionModule, self).__init__(*args, **kwargs)
@property
def pre_reboot_delay(self):
return self._check_delay('pre_reboot_delay', self.DEFAULT_PRE_REBOOT_DELAY)
@property
def post_reboot_delay(self):
return self._check_delay('post_reboot_delay', self.DEFAULT_POST_REBOOT_DELAY)
def _check_delay(self, key, default):
"""Ensure that the value is positive or zero"""
value = int(self._task.args.get(key, self._task.args.get(key + '_sec', default)))
if value < 0:
value = 0
return value
def _get_value_from_facts(self, variable_name, distribution, default_value):
"""Get dist+version specific args first, then distribution, then family, lastly use default"""
attr = getattr(self, variable_name)
value = attr.get(
distribution['name'] + distribution['version'],
attr.get(
distribution['name'],
attr.get(
distribution['family'],
getattr(self, default_value))))
return value
def get_shutdown_command_args(self, distribution):
reboot_command = self._task.args.get('reboot_command')
if reboot_command is not None:
try:
reboot_command = check_type_str(reboot_command, allow_conversion=False)
except TypeError as e:
raise AnsibleError("Invalid value given for 'reboot_command': %s." % to_native(e))
# No args were provided
try:
return reboot_command.split(' ', 1)[1]
except IndexError:
return ''
else:
args = self._get_value_from_facts('SHUTDOWN_COMMAND_ARGS', distribution, 'DEFAULT_SHUTDOWN_COMMAND_ARGS')
            # Convert seconds to minutes. If less than 60, it becomes 0.
delay_min = self.pre_reboot_delay // 60
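            # e.g. pre_reboot_delay=90 gives delay_min=1, while 45 gives 0 (illustrative).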
reboot_message = self._task.args.get('msg', self.DEFAULT_REBOOT_MESSAGE)
return args.format(delay_sec=self.pre_reboot_delay, delay_min=delay_min, message=reboot_message)
def get_distribution(self, task_vars):
# FIXME: only execute the module if we don't already have the facts we need
distribution = {}
display.debug('{action}: running setup module to get distribution'.format(action=self._task.action))
module_output = self._execute_module(
task_vars=task_vars,
module_name='ansible.legacy.setup',
module_args={'gather_subset': 'min'})
try:
if module_output.get('failed', False):
raise AnsibleError('Failed to determine system distribution. {0}, {1}'.format(
to_native(module_output['module_stdout']).strip(),
to_native(module_output['module_stderr']).strip()))
distribution['name'] = module_output['ansible_facts']['ansible_distribution'].lower()
distribution['version'] = to_text(module_output['ansible_facts']['ansible_distribution_version'].split('.')[0])
distribution['family'] = to_text(module_output['ansible_facts']['ansible_os_family'].lower())
display.debug("{action}: distribution: {dist}".format(action=self._task.action, dist=distribution))
return distribution
except KeyError as ke:
raise AnsibleError('Failed to get distribution information. Missing "{0}" in output.'.format(ke.args[0]))
def get_shutdown_command(self, task_vars, distribution):
reboot_command = self._task.args.get('reboot_command')
if reboot_command is not None:
try:
reboot_command = check_type_str(reboot_command, allow_conversion=False)
except TypeError as e:
raise AnsibleError("Invalid value given for 'reboot_command': %s." % to_native(e))
shutdown_bin = reboot_command.split(' ', 1)[0]
else:
shutdown_bin = self._get_value_from_facts('SHUTDOWN_COMMANDS', distribution, 'DEFAULT_SHUTDOWN_COMMAND')
if shutdown_bin[0] == '/':
return shutdown_bin
else:
default_search_paths = ['/sbin', '/bin', '/usr/sbin', '/usr/bin', '/usr/local/sbin']
search_paths = self._task.args.get('search_paths', default_search_paths)
try:
# Convert bare strings to a list
search_paths = check_type_list(search_paths)
except TypeError:
err_msg = "'search_paths' must be a string or flat list of strings, got {0}"
raise AnsibleError(err_msg.format(search_paths))
display.debug('{action}: running find module looking in {paths} to get path for "{command}"'.format(
action=self._task.action,
command=shutdown_bin,
paths=search_paths))
find_result = self._execute_module(
task_vars=task_vars,
# prevent collection search by calling with ansible.legacy (still allows library/ override of find)
module_name='ansible.legacy.find',
module_args={
'paths': search_paths,
'patterns': [shutdown_bin],
'file_type': 'any'
}
)
full_path = [x['path'] for x in find_result['files']]
if not full_path:
raise AnsibleError('Unable to find command "{0}" in search paths: {1}'.format(shutdown_bin, search_paths))
return full_path[0]
def deprecated_args(self):
for arg, version in self.DEPRECATED_ARGS.items():
if self._task.args.get(arg) is not None:
display.warning("Since Ansible {version}, {arg} is no longer a valid option for {action}".format(
version=version,
arg=arg,
action=self._task.action))
def get_system_boot_time(self, distribution):
boot_time_command = self._get_value_from_facts('BOOT_TIME_COMMANDS', distribution, 'DEFAULT_BOOT_TIME_COMMAND')
if self._task.args.get('boot_time_command'):
boot_time_command = self._task.args.get('boot_time_command')
try:
check_type_str(boot_time_command, allow_conversion=False)
except TypeError as e:
raise AnsibleError("Invalid value given for 'boot_time_command': %s." % to_native(e))
display.debug("{action}: getting boot time with command: '{command}'".format(action=self._task.action, command=boot_time_command))
command_result = self._low_level_execute_command(boot_time_command, sudoable=self.DEFAULT_SUDOABLE)
if command_result['rc'] != 0:
stdout = command_result['stdout']
stderr = command_result['stderr']
raise AnsibleError("{action}: failed to get host boot time info, rc: {rc}, stdout: {out}, stderr: {err}".format(
action=self._task.action,
rc=command_result['rc'],
out=to_native(stdout),
err=to_native(stderr)))
display.debug("{action}: last boot time: {boot}".format(action=self._task.action, boot=command_result['stdout'].strip()))
return command_result['stdout'].strip()
def check_boot_time(self, distribution, previous_boot_time):
display.vvv("{action}: attempting to get system boot time".format(action=self._task.action))
connect_timeout = self._task.args.get('connect_timeout', self._task.args.get('connect_timeout_sec', self.DEFAULT_CONNECT_TIMEOUT))
# override connection timeout from defaults to custom value
if connect_timeout:
try:
display.debug("{action}: setting connect_timeout to {value}".format(action=self._task.action, value=connect_timeout))
self._connection.set_option("connection_timeout", connect_timeout)
self._connection.reset()
except AttributeError:
display.warning("Connection plugin does not allow the connection timeout to be overridden")
# try and get boot time
try:
current_boot_time = self.get_system_boot_time(distribution)
except Exception as e:
raise e
# FreeBSD returns an empty string immediately before reboot so adding a length
# check to prevent prematurely assuming system has rebooted
if len(current_boot_time) == 0 or current_boot_time == previous_boot_time:
raise ValueError("boot time has not changed")
def run_test_command(self, distribution, **kwargs):
test_command = self._task.args.get('test_command', self._get_value_from_facts('TEST_COMMANDS', distribution, 'DEFAULT_TEST_COMMAND'))
display.vvv("{action}: attempting post-reboot test command".format(action=self._task.action))
display.debug("{action}: attempting post-reboot test command '{command}'".format(action=self._task.action, command=test_command))
try:
command_result = self._low_level_execute_command(test_command, sudoable=self.DEFAULT_SUDOABLE)
except Exception:
# may need to reset the connection in case another reboot occurred
# which has invalidated our connection
try:
self._connection.reset()
except AttributeError:
pass
raise
if command_result['rc'] != 0:
msg = 'Test command failed: {err} {out}'.format(
err=to_native(command_result['stderr']),
out=to_native(command_result['stdout']))
raise RuntimeError(msg)
display.vvv("{action}: system successfully rebooted".format(action=self._task.action))
def do_until_success_or_timeout(self, action, reboot_timeout, action_desc, distribution, action_kwargs=None):
max_end_time = datetime.now(timezone.utc) + timedelta(seconds=reboot_timeout)
if action_kwargs is None:
action_kwargs = {}
fail_count = 0
max_fail_sleep = 12
while datetime.now(timezone.utc) < max_end_time:
try:
action(distribution=distribution, **action_kwargs)
if action_desc:
display.debug('{action}: {desc} success'.format(action=self._task.action, desc=action_desc))
return
except Exception as e:
if isinstance(e, AnsibleConnectionFailure):
try:
self._connection.reset()
except AnsibleConnectionFailure:
pass
                # Use exponential backoff with a max timeout, plus a little bit of randomness
random_int = random.randint(0, 1000) / 1000
fail_sleep = 2 ** fail_count + random_int
if fail_sleep > max_fail_sleep:
fail_sleep = max_fail_sleep + random_int
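                # Illustrative schedule (random_int in [0, 1)): roughly 1s, 2s,
                # 4s, 8s, then capped near max_fail_sleep (~12s) for every later
                # retry, consistent with the ~12.x second intervals seen in the
                # issue's debug output (added note).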
if action_desc:
try:
error = to_text(e).splitlines()[-1]
except IndexError as e:
error = to_text(e)
display.debug("{action}: {desc} fail '{err}', retrying in {sleep:.4} seconds...".format(
action=self._task.action,
desc=action_desc,
err=error,
sleep=fail_sleep))
fail_count += 1
time.sleep(fail_sleep)
raise TimedOutException('Timed out waiting for {desc} (timeout={timeout})'.format(desc=action_desc, timeout=reboot_timeout))
def perform_reboot(self, task_vars, distribution):
result = {}
reboot_result = {}
shutdown_command = self.get_shutdown_command(task_vars, distribution)
shutdown_command_args = self.get_shutdown_command_args(distribution)
reboot_command = '{0} {1}'.format(shutdown_command, shutdown_command_args)
try:
display.vvv("{action}: rebooting server...".format(action=self._task.action))
display.debug("{action}: rebooting server with command '{command}'".format(action=self._task.action, command=reboot_command))
reboot_result = self._low_level_execute_command(reboot_command, sudoable=self.DEFAULT_SUDOABLE)
except AnsibleConnectionFailure as e:
# If the connection is closed too quickly due to the system being shutdown, carry on
display.debug('{action}: AnsibleConnectionFailure caught and handled: {error}'.format(action=self._task.action, error=to_text(e)))
reboot_result['rc'] = 0
result['start'] = datetime.now(timezone.utc)
if reboot_result['rc'] != 0:
result['failed'] = True
result['rebooted'] = False
result['msg'] = "Reboot command failed. Error was: '{stdout}, {stderr}'".format(
stdout=to_native(reboot_result['stdout'].strip()),
stderr=to_native(reboot_result['stderr'].strip()))
return result
result['failed'] = False
return result
def validate_reboot(self, distribution, original_connection_timeout=None, action_kwargs=None):
display.vvv('{action}: validating reboot'.format(action=self._task.action))
result = {}
try:
# keep on checking system boot_time with short connection responses
reboot_timeout = int(self._task.args.get('reboot_timeout', self._task.args.get('reboot_timeout_sec', self.DEFAULT_REBOOT_TIMEOUT)))
self.do_until_success_or_timeout(
action=self.check_boot_time,
action_desc="last boot time check",
reboot_timeout=reboot_timeout,
distribution=distribution,
action_kwargs=action_kwargs)
# Get the connect_timeout set on the connection to compare to the original
try:
connect_timeout = self._connection.get_option('connection_timeout')
except KeyError:
pass
else:
if original_connection_timeout != connect_timeout:
try:
display.debug("{action}: setting connect_timeout back to original value of {value}".format(
action=self._task.action,
value=original_connection_timeout))
self._connection.set_option("connection_timeout", original_connection_timeout)
self._connection.reset()
except (AnsibleError, AttributeError) as e:
# reset the connection to clear the custom connection timeout
display.debug("{action}: failed to reset connection_timeout back to default: {error}".format(action=self._task.action,
error=to_text(e)))
# finally run test command to ensure everything is working
# FUTURE: add a stability check (system must remain up for N seconds) to deal with self-multi-reboot updates
self.do_until_success_or_timeout(
action=self.run_test_command,
action_desc="post-reboot test command",
reboot_timeout=reboot_timeout,
distribution=distribution,
action_kwargs=action_kwargs)
result['rebooted'] = True
result['changed'] = True
except TimedOutException as toex:
result['failed'] = True
result['rebooted'] = True
result['msg'] = to_text(toex)
return result
return result
def run(self, tmp=None, task_vars=None):
self._supports_check_mode = True
self._supports_async = True
# If running with local connection, fail so we don't reboot ourself
if self._connection.transport == 'local':
msg = 'Running {0} with local connection would reboot the control node.'.format(self._task.action)
return {'changed': False, 'elapsed': 0, 'rebooted': False, 'failed': True, 'msg': msg}
if self._play_context.check_mode:
return {'changed': True, 'elapsed': 0, 'rebooted': True}
if task_vars is None:
task_vars = {}
self.deprecated_args()
result = super(ActionModule, self).run(tmp, task_vars)
if result.get('skipped', False) or result.get('failed', False):
return result
distribution = self.get_distribution(task_vars)
# Get current boot time
try:
previous_boot_time = self.get_system_boot_time(distribution)
except Exception as e:
result['failed'] = True
result['reboot'] = False
result['msg'] = to_text(e)
return result
# Get the original connection_timeout option var so it can be reset after
original_connection_timeout = None
try:
original_connection_timeout = self._connection.get_option('connection_timeout')
display.debug("{action}: saving original connect_timeout of {timeout}".format(action=self._task.action, timeout=original_connection_timeout))
except KeyError:
display.debug("{action}: connect_timeout connection option has not been set".format(action=self._task.action))
# Initiate reboot
reboot_result = self.perform_reboot(task_vars, distribution)
if reboot_result['failed']:
result = reboot_result
elapsed = datetime.now(timezone.utc) - reboot_result['start']
result['elapsed'] = elapsed.seconds
return result
if self.post_reboot_delay != 0:
display.debug("{action}: waiting an additional {delay} seconds".format(action=self._task.action, delay=self.post_reboot_delay))
display.vvv("{action}: waiting an additional {delay} seconds".format(action=self._task.action, delay=self.post_reboot_delay))
time.sleep(self.post_reboot_delay)
# Make sure reboot was successful
result = self.validate_reboot(distribution, original_connection_timeout, action_kwargs={'previous_boot_time': previous_boot_time})
elapsed = datetime.now(timezone.utc) - reboot_result['start']
result['elapsed'] = elapsed.seconds
return result
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,457 |
INI format inventory throws "invalid decimal literal"
|
### Summary
The following minimal inventory
```ini
gitlab-runner-01 ansible_host=gitlab-runner-01.internal.pcfe.net ansible_user=root
zimaboard-01 ansible_host=zimaboard-01.internal.pcfe.net ansible_user=ansible
```
Makes ansible complain
```text
<unknown>:1: SyntaxWarning: invalid decimal literal
<unknown>:1: SyntaxWarning: invalid decimal literal
```
But I fail to understand why.
As soon as I adjust the entries as follows (a non-numeric character right before the `.`), it stops complaining, but of course that does not help me as the DNS names are then wrong.
```ini
gitlab-runner-01 ansible_host=gitlab-runner-01foo.internal.pcfe.net ansible_user=root
zimaboard-01 ansible_host=zimaboard-01bar.internal.pcfe.net ansible_user=ansible
```
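A plausible explanation (hedged): the ini plugin feeds each value to Python's `ast.literal_eval`, and a hostname segment like `01.internal` tokenizes as a number glued straight onto an identifier, which recent CPython flags with a SyntaxWarning before the parse fails:

```python
# Approximate reproduction; exact warning text and behaviour vary by Python version.
import ast

try:
    ast.literal_eval("gitlab-runner-01.internal.pcfe.net")
except (ValueError, SyntaxError):
    pass  # the plugin swallows the exception, but the SyntaxWarning still prints
```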
### Issue Type
Bug Report
### Component Name
ini plugin
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.8]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/pcfe/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.11/site-packages/ansible
ansible collection location = /home/pcfe/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.11.4 (main, Jun 7 2023, 00:00:00) [GCC 13.1.1 20230511 (Red Hat 13.1.1-2)] (/usr/bin/python3)
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /etc/ansible/ansible.cfg
```
### OS / Environment
pcfe@t3600 ~ $ cat /etc/os-release
NAME="Fedora Linux"
VERSION="38 (KDE Plasma)"
ID=fedora
VERSION_ID=38
VERSION_CODENAME=""
PLATFORM_ID="platform:f38"
PRETTY_NAME="Fedora Linux 38 (KDE Plasma)"
ANSI_COLOR="0;38;2;60;110;180"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:38"
DEFAULT_HOSTNAME="fedora"
HOME_URL="https://fedoraproject.org/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/"
SUPPORT_URL="https://ask.fedoraproject.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=38
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=38
SUPPORT_END=2024-05-14
VARIANT="KDE Plasma"
VARIANT_ID=kde
pcfe@t3600 ~ $ rpm -qf $(which ansible)
ansible-core-2.14.8-1.fc38.noarch
pcfe@t3600 ~ $ rpm -V ansible-core-2.14.8-1.fc38.noarch
pcfe@t3600 ~ $
### Steps to Reproduce
```bash
pcfe@t3600 ~ $ pwd
/home/pcfe
pcfe@t3600 ~ $ ansible-config dump --only-changed -t all
CONFIG_FILE() = /etc/ansible/ansible.cfg
pcfe@t3600 ~ $ rpm -qf /etc/ansible/ansible.cfg
ansible-core-2.14.8-1.fc38.noarch
pcfe@t3600 ~ $ rpm -V ansible-core-2.14.8-1.fc38.noarch
pcfe@t3600 ~ $ cat ~/tmp/inventory.ini
gitlab-runner-01 ansible_host=gitlab-runner-01.internal.pcfe.net ansible_user=root
zimaboard-01 ansible_host=zimaboard-01.internal.pcfe.net ansible_user=ansible
pcfe@t3600 ~ $ ansible -i ~/tmp/inventory.ini -m ping all
<unknown>:1: SyntaxWarning: invalid decimal literal
<unknown>:1: SyntaxWarning: invalid decimal literal
gitlab-runner-01 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"ping": "pong"
}
zimaboard-01 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"ping": "pong"
}
pcfe@t3600 ~ $
```
### Expected Results
I would like to understand what makes Ansible complain about this minimal INI type inventory.
If there is nothing I did wrong in the inventory, then I would like Ansible to please not complain about invalid decimal literals.
Additionally, if there is something invalid in my INI format inventory, it would be nice if Ansible were more specific with the user (affected line number, pointer to which part of the spec I violate).
Not every user might be comfortable zeroing in on such inventory lines with for example `git bisect` (which is how I found the two offending lines in my full inventory).
### Actual Results
```console
pcfe@t3600 ~ $ ls -l /home/pcfe/.ansible/plugins /home/pcfe/.ansible/collections
ls: cannot access '/home/pcfe/.ansible/plugins': No such file or directory
ls: cannot access '/home/pcfe/.ansible/collections': No such file or directory
pcfe@t3600 ~ $ cat ~/tmp/inventory.ini
gitlab-runner-01 ansible_host=gitlab-runner-01.internal.pcfe.net ansible_user=root
zimaboard-01 ansible_host=zimaboard-01.internal.pcfe.net ansible_user=ansible
pcfe@t3600 ~ $ ansible -i ~/tmp/inventory.ini -m ping all -vvvv
ansible [core 2.14.8]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/pcfe/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.11/site-packages/ansible
ansible collection location = /home/pcfe/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.11.4 (main, Jun 7 2023, 00:00:00) [GCC 13.1.1 20230511 (Red Hat 13.1.1-2)] (/usr/bin/python3)
jinja version = 3.0.3
libyaml = True
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /home/pcfe/tmp/inventory.ini as it did not pass its verify_file() method
script declined parsing /home/pcfe/tmp/inventory.ini as it did not pass its verify_file() method
auto declined parsing /home/pcfe/tmp/inventory.ini as it did not pass its verify_file() method
yaml declined parsing /home/pcfe/tmp/inventory.ini as it did not pass its verify_file() method
<unknown>:1: SyntaxWarning: invalid decimal literal
<unknown>:1: SyntaxWarning: invalid decimal literal
Parsed /home/pcfe/tmp/inventory.ini inventory source with ini plugin
[β¦]
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81457
|
https://github.com/ansible/ansible/pull/81707
|
2793dfa594765d402f61d80128e916e0300a38fc
|
a1a6550daf305ec9815a7b12db42c68b63426878
| 2023-08-07T15:22:39Z |
python
| 2023-09-18T14:50:50Z |
changelogs/fragments/inventory_ini.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,457 |
INI format inventory throws "invalid decimal literal"
|
### Summary
The following minimal inventory
```ini
gitlab-runner-01 ansible_host=gitlab-runner-01.internal.pcfe.net ansible_user=root
zimaboard-01 ansible_host=zimaboard-01.internal.pcfe.net ansible_user=ansible
```
Makes ansible complain
```text
<unknown>:1: SyntaxWarning: invalid decimal literal
<unknown>:1: SyntaxWarning: invalid decimal literal
```
But I fail to understand why.
As soon as I adjust the entries as follows (a non-numeric character right before the `.`), it stops complaining, but of course that does not help me as the DNS names are then wrong.
```ini
gitlab-runner-01 ansible_host=gitlab-runner-01foo.internal.pcfe.net ansible_user=root
zimaboard-01 ansible_host=zimaboard-01bar.internal.pcfe.net ansible_user=ansible
```
### Issue Type
Bug Report
### Component Name
ini plugin
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.8]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/pcfe/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.11/site-packages/ansible
ansible collection location = /home/pcfe/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.11.4 (main, Jun 7 2023, 00:00:00) [GCC 13.1.1 20230511 (Red Hat 13.1.1-2)] (/usr/bin/python3)
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /etc/ansible/ansible.cfg
```
### OS / Environment
pcfe@t3600 ~ $ cat /etc/os-release
NAME="Fedora Linux"
VERSION="38 (KDE Plasma)"
ID=fedora
VERSION_ID=38
VERSION_CODENAME=""
PLATFORM_ID="platform:f38"
PRETTY_NAME="Fedora Linux 38 (KDE Plasma)"
ANSI_COLOR="0;38;2;60;110;180"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:38"
DEFAULT_HOSTNAME="fedora"
HOME_URL="https://fedoraproject.org/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/"
SUPPORT_URL="https://ask.fedoraproject.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=38
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=38
SUPPORT_END=2024-05-14
VARIANT="KDE Plasma"
VARIANT_ID=kde
pcfe@t3600 ~ $ rpm -qf $(which ansible)
ansible-core-2.14.8-1.fc38.noarch
pcfe@t3600 ~ $ rpm -V ansible-core-2.14.8-1.fc38.noarch
pcfe@t3600 ~ $
### Steps to Reproduce
```bash
pcfe@t3600 ~ $ pwd
/home/pcfe
pcfe@t3600 ~ $ ansible-config dump --only-changed -t all
CONFIG_FILE() = /etc/ansible/ansible.cfg
pcfe@t3600 ~ $ rpm -qf /etc/ansible/ansible.cfg
ansible-core-2.14.8-1.fc38.noarch
pcfe@t3600 ~ $ rpm -V ansible-core-2.14.8-1.fc38.noarch
pcfe@t3600 ~ $ cat ~/tmp/inventory.ini
gitlab-runner-01 ansible_host=gitlab-runner-01.internal.pcfe.net ansible_user=root
zimaboard-01 ansible_host=zimaboard-01.internal.pcfe.net ansible_user=ansible
pcfe@t3600 ~ $ ansible -i ~/tmp/inventory.ini -m ping all
<unknown>:1: SyntaxWarning: invalid decimal literal
<unknown>:1: SyntaxWarning: invalid decimal literal
gitlab-runner-01 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"ping": "pong"
}
zimaboard-01 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"ping": "pong"
}
pcfe@t3600 ~ $
```
### Expected Results
I would like to understand what makes Ansible complain about this minimal INI type inventory.
If there is nothing I did wrong in the inventory, then I would like Ansible to please not complain about invalid decimal literals.
Additionally, if there is something invalid in my INI format inventory, it would be nice if Ansible were more specific with the user (affected line number, pointer to which part of the spec I violate).
Not every user might be comfortable zeroing in on such inventory lines with for example `git bisect` (which is how I found the two offending lines in my full inventory).
### Actual Results
```console
pcfe@t3600 ~ $ ls -l /home/pcfe/.ansible/plugins /home/pcfe/.ansible/collections
ls: cannot access '/home/pcfe/.ansible/plugins': No such file or directory
ls: cannot access '/home/pcfe/.ansible/collections': No such file or directory
pcfe@t3600 ~ $ cat ~/tmp/inventory.ini
gitlab-runner-01 ansible_host=gitlab-runner-01.internal.pcfe.net ansible_user=root
zimaboard-01 ansible_host=zimaboard-01.internal.pcfe.net ansible_user=ansible
pcfe@t3600 ~ $ ansible -i ~/tmp/inventory.ini -m ping all -vvvv
ansible [core 2.14.8]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/pcfe/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.11/site-packages/ansible
ansible collection location = /home/pcfe/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.11.4 (main, Jun 7 2023, 00:00:00) [GCC 13.1.1 20230511 (Red Hat 13.1.1-2)] (/usr/bin/python3)
jinja version = 3.0.3
libyaml = True
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /home/pcfe/tmp/inventory.ini as it did not pass its verify_file() method
script declined parsing /home/pcfe/tmp/inventory.ini as it did not pass its verify_file() method
auto declined parsing /home/pcfe/tmp/inventory.ini as it did not pass its verify_file() method
yaml declined parsing /home/pcfe/tmp/inventory.ini as it did not pass its verify_file() method
<unknown>:1: SyntaxWarning: invalid decimal literal
<unknown>:1: SyntaxWarning: invalid decimal literal
Parsed /home/pcfe/tmp/inventory.ini inventory source with ini plugin
[β¦]
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81457
|
https://github.com/ansible/ansible/pull/81707
|
2793dfa594765d402f61d80128e916e0300a38fc
|
a1a6550daf305ec9815a7b12db42c68b63426878
| 2023-08-07T15:22:39Z |
python
| 2023-09-18T14:50:50Z |
lib/ansible/plugins/inventory/ini.py
|
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
name: ini
version_added: "2.4"
short_description: Uses an Ansible INI file as inventory source.
description:
- INI file based inventory, sections are groups or group related with special C(:modifiers).
- Entries in sections C([group_1]) are hosts, members of the group.
- Hosts can have variables defined inline as key/value pairs separated by C(=).
- The C(children) modifier indicates that the section contains groups.
- The C(vars) modifier indicates that the section contains variables assigned to members of the group.
- Anything found outside a section is considered an 'ungrouped' host.
- Values passed in the INI format using the C(key=value) syntax are interpreted differently depending on where they are declared within your inventory.
- When declared inline with the host, INI values are processed by Python's ast.literal_eval function
(U(https://docs.python.org/3/library/ast.html#ast.literal_eval)) and interpreted as Python literal structures
(strings, numbers, tuples, lists, dicts, booleans, None). If you want a number to be treated as a string, you must quote it.
Host lines accept multiple C(key=value) parameters per line.
Therefore they need a way to indicate that a space is part of a value rather than a separator.
- When declared in a C(:vars) section, INI values are interpreted as strings. For example C(var=FALSE) would create a string equal to C(FALSE).
Unlike host lines, C(:vars) sections accept only a single entry per line, so everything after the C(=) must be the value for the entry.
- Do not rely on types set during definition, always make sure you specify type with a filter when needed when consuming the variable.
- See the Examples for proper quoting to prevent changes to variable type.
notes:
- Enabled in configuration by default.
- Consider switching to YAML format for inventory sources to avoid confusion on the actual type of a variable.
The YAML inventory plugin processes variable values consistently and correctly.
'''
EXAMPLES = '''# fmt: ini
# Example 1
[web]
host1
host2 ansible_port=222 # defined inline, interpreted as an integer
[web:vars]
http_port=8080 # all members of 'web' will inherit these
myvar=23 # defined in a :vars section, interpreted as a string
[web:children] # child groups will automatically add their hosts to parent group
apache
nginx
[apache]
tomcat1
tomcat2 myvar=34 # host specific vars override group vars
tomcat3 mysecret="'03#pa33w0rd'" # proper quoting to prevent value changes
[nginx]
jenkins1
[nginx:vars]
has_java = True # vars in child groups override same in parent
[all:vars]
has_java = False # 'all' is 'top' parent
# Example 2
host1 # this is 'ungrouped'
# both hosts have same IP but diff ports, also 'ungrouped'
host2 ansible_host=127.0.0.1 ansible_port=44
host3 ansible_host=127.0.0.1 ansible_port=45
[g1]
host4
[g2]
host4 # same host as above, but member of 2 groups, will inherit vars from both
# inventory hostnames are unique
'''
import ast
import re
from ansible.inventory.group import to_safe_group_name
from ansible.plugins.inventory import BaseFileInventoryPlugin
from ansible.errors import AnsibleError, AnsibleParserError
from ansible.module_utils.common.text.converters import to_bytes, to_text
from ansible.utils.shlex import shlex_split
class InventoryModule(BaseFileInventoryPlugin):
"""
Takes an INI-format inventory file and builds a list of groups and subgroups
with their associated hosts and variable settings.
"""
NAME = 'ini'
_COMMENT_MARKERS = frozenset((u';', u'#'))
b_COMMENT_MARKERS = frozenset((b';', b'#'))
def __init__(self):
super(InventoryModule, self).__init__()
self.patterns = {}
self._filename = None
def parse(self, inventory, loader, path, cache=True):
super(InventoryModule, self).parse(inventory, loader, path)
self._filename = path
try:
# Read in the hosts, groups, and variables defined in the inventory file.
if self.loader:
(b_data, private) = self.loader._get_file_contents(path)
else:
b_path = to_bytes(path, errors='surrogate_or_strict')
with open(b_path, 'rb') as fh:
b_data = fh.read()
try:
# Faster to do to_text once on a long string than many
# times on smaller strings
data = to_text(b_data, errors='surrogate_or_strict').splitlines()
except UnicodeError:
# Handle non-utf8 in comment lines: https://github.com/ansible/ansible/issues/17593
data = []
for line in b_data.splitlines():
if line and line[0] in self.b_COMMENT_MARKERS:
# Replace is okay for comment lines
# data.append(to_text(line, errors='surrogate_then_replace'))
# Currently we only need these lines for accurate lineno in errors
data.append(u'')
else:
                        # Non-comment lines still have to be valid utf-8
data.append(to_text(line, errors='surrogate_or_strict'))
self._parse(path, data)
except Exception as e:
raise AnsibleParserError(e)
def _raise_error(self, message):
raise AnsibleError("%s:%d: " % (self._filename, self.lineno) + message)
def _parse(self, path, lines):
'''
Populates self.groups from the given array of lines. Raises an error on
any parse failure.
'''
self._compile_patterns()
# We behave as though the first line of the inventory is '[ungrouped]',
# and begin to look for host definitions. We make a single pass through
# each line of the inventory, building up self.groups and adding hosts,
# subgroups, and setting variables as we go.
pending_declarations = {}
groupname = 'ungrouped'
state = 'hosts'
self.lineno = 0
for line in lines:
self.lineno += 1
line = line.strip()
# Skip empty lines and comments
if not line or line[0] in self._COMMENT_MARKERS:
continue
# Is this a [section] header? That tells us what group we're parsing
# definitions for, and what kind of definitions to expect.
m = self.patterns['section'].match(line)
if m:
(groupname, state) = m.groups()
groupname = to_safe_group_name(groupname)
state = state or 'hosts'
if state not in ['hosts', 'children', 'vars']:
title = ":".join(m.groups())
self._raise_error("Section [%s] has unknown type: %s" % (title, state))
# If we haven't seen this group before, we add a new Group.
if groupname not in self.inventory.groups:
# Either [groupname] or [groupname:children] is sufficient to declare a group,
                    # but [groupname:vars] is allowed only if the group is declared elsewhere.
# We add the group anyway, but make a note in pending_declarations to check at the end.
#
# It's possible that a group is previously pending due to being defined as a child
# group, in that case we simply pass so that the logic below to process pending
# declarations will take the appropriate action for a pending child group instead of
# incorrectly handling it as a var state pending declaration
if state == 'vars' and groupname not in pending_declarations:
pending_declarations[groupname] = dict(line=self.lineno, state=state, name=groupname)
self.inventory.add_group(groupname)
# When we see a declaration that we've been waiting for, we process and delete.
if groupname in pending_declarations and state != 'vars':
if pending_declarations[groupname]['state'] == 'children':
self._add_pending_children(groupname, pending_declarations)
elif pending_declarations[groupname]['state'] == 'vars':
del pending_declarations[groupname]
continue
elif line.startswith('[') and line.endswith(']'):
self._raise_error("Invalid section entry: '%s'. Please make sure that there are no spaces" % line + " " +
"in the section entry, and that there are no other invalid characters")
# It's not a section, so the current state tells us what kind of
# definition it must be. The individual parsers will raise an
# error if we feed them something they can't digest.
# [groupname] contains host definitions that must be added to
# the current group.
if state == 'hosts':
hosts, port, variables = self._parse_host_definition(line)
self._populate_host_vars(hosts, variables, groupname, port)
# [groupname:vars] contains variable definitions that must be
# applied to the current group.
elif state == 'vars':
(k, v) = self._parse_variable_definition(line)
self.inventory.set_variable(groupname, k, v)
# [groupname:children] contains subgroup names that must be
# added as children of the current group. The subgroup names
# must themselves be declared as groups, but as before, they
# may only be declared later.
elif state == 'children':
child = self._parse_group_name(line)
if child not in self.inventory.groups:
if child not in pending_declarations:
pending_declarations[child] = dict(line=self.lineno, state=state, name=child, parents=[groupname])
else:
pending_declarations[child]['parents'].append(groupname)
else:
self.inventory.add_child(groupname, child)
else:
# This can happen only if the state checker accepts a state that isn't handled above.
self._raise_error("Entered unhandled state: %s" % (state))
# Any entries in pending_declarations not removed by a group declaration above mean that there was an unresolved reference.
# We report only the first such error here.
for g in pending_declarations:
decl = pending_declarations[g]
if decl['state'] == 'vars':
raise AnsibleError("%s:%d: Section [%s:vars] not valid for undefined group: %s" % (path, decl['line'], decl['name'], decl['name']))
elif decl['state'] == 'children':
raise AnsibleError("%s:%d: Section [%s:children] includes undefined group: %s" % (path, decl['line'], decl['parents'].pop(), decl['name']))
def _add_pending_children(self, group, pending):
for parent in pending[group]['parents']:
self.inventory.add_child(parent, group)
if parent in pending and pending[parent]['state'] == 'children':
self._add_pending_children(parent, pending)
del pending[group]
def _parse_group_name(self, line):
'''
Takes a single line and tries to parse it as a group name. Returns the
group name if successful, or raises an error.
'''
m = self.patterns['groupname'].match(line)
if m:
return m.group(1)
self._raise_error("Expected group name, got: %s" % (line))
def _parse_variable_definition(self, line):
'''
Takes a string and tries to parse it as a variable definition. Returns
the key and value if successful, or raises an error.
'''
# TODO: We parse variable assignments as a key (anything to the left of
# an '='), an '=', and a value (anything left), and leave the value to
# _parse_value to sort out. We should be more systematic here about
# defining what is acceptable, how quotes work, and so on.
if '=' in line:
(k, v) = [e.strip() for e in line.split("=", 1)]
return (k, self._parse_value(v))
self._raise_error("Expected key=value, got: %s" % (line))
def _parse_host_definition(self, line):
'''
Takes a single line and tries to parse it as a host definition. Returns
a list of Hosts if successful, or raises an error.
'''
# A host definition comprises (1) a non-whitespace hostname or range,
# optionally followed by (2) a series of key="some value" assignments.
# We ignore any trailing whitespace and/or comments. For example, here
# are a series of host definitions in a group:
#
# [groupname]
# alpha
# beta:2345 user=admin # we'll tell shlex
# gamma sudo=True user=root # to ignore comments
try:
tokens = shlex_split(line, comments=True)
except ValueError as e:
self._raise_error("Error parsing host definition '%s': %s" % (line, e))
(hostnames, port) = self._expand_hostpattern(tokens[0])
# Try to process anything remaining as a series of key=value pairs.
variables = {}
for t in tokens[1:]:
if '=' not in t:
self._raise_error("Expected key=value host variable assignment, got: %s" % (t))
(k, v) = t.split('=', 1)
variables[k] = self._parse_value(v)
return hostnames, port, variables
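# For illustration, a line such as
#   beta:2345 user=admin # comment
# is expected to come back as (['beta'], 2345, {'user': 'admin'}), with
# _parse_value() deciding how each value string is typed.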
def _expand_hostpattern(self, hostpattern):
'''
do some extra checks over normal processing
'''
# specification?
hostnames, port = super(InventoryModule, self)._expand_hostpattern(hostpattern)
if hostpattern.strip().endswith(':') and port is None:
raise AnsibleParserError("Invalid host pattern '%s' supplied, ending in ':' is not allowed, this character is reserved to provide a port." %
hostpattern)
for pattern in hostnames:
# some YAML parsing prevention checks
if pattern.strip() == '---':
raise AnsibleParserError("Invalid host pattern '%s' supplied, '---' is normally a sign this is a YAML file." % hostpattern)
return (hostnames, port)
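# Both extra checks raise early with a targeted message, e.g.:
#   'web01:'  -> AnsibleParserError (a trailing ':' is reserved for a port)
#   '---'     -> AnsibleParserError (looks like the start of a YAML document)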
@staticmethod
def _parse_value(v):
'''
Attempt to transform the string value from an ini file into a basic python object
(int, dict, list, unicode string, etc).
'''
try:
v = ast.literal_eval(v)
# Using explicit exceptions.
# Likely a string that literal_eval does not like. We will then just set it.
except ValueError:
# For some reason this was thought to be malformed.
pass
except SyntaxError:
# Is this a hash with an equals at the end?
pass
return to_text(v, nonstring='passthru', errors='surrogate_or_strict')
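# Note that ast.literal_eval() can emit "SyntaxWarning: invalid decimal
# literal" for values such as 'gitlab-runner-01.internal.pcfe.net' even
# though the except clauses above already fall back to keeping the value
# as a plain string. A minimal sketch of one way to silence it (the fix
# actually merged for ansible/ansible#81457 via #81707 may differ):
#
#   import warnings
#   with warnings.catch_warnings():
#       warnings.simplefilter('ignore', SyntaxWarning)
#       v = ast.literal_eval(v)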
def _compile_patterns(self):
'''
Compiles the regular expressions required to parse the inventory and
stores them in self.patterns.
'''
# Section names are square-bracketed expressions at the beginning of a
# line, comprising (1) a group name optionally followed by (2) a tag
# that specifies the contents of the section. We ignore any trailing
# whitespace and/or comments. For example:
#
# [groupname]
# [somegroup:vars]
# [naughty:children] # only get coal in their stockings
self.patterns['section'] = re.compile(
to_text(r'''^\[
([^:\]\s]+) # group name (see groupname below)
(?::(\w+))? # optional : and tag name
\]
\s* # ignore trailing whitespace
(?:\#.*)? # and/or a comment till the
$ # end of the line
''', errors='surrogate_or_strict'), re.X
)
# FIXME: What are the real restrictions on group names, or rather, what
# should they be? At the moment, they must be non-empty sequences of non
# whitespace characters excluding ':' and ']', but we should define more
# precise rules in order to support better diagnostics.
self.patterns['groupname'] = re.compile(
to_text(r'''^
([^:\]\s]+)
\s* # ignore trailing whitespace
(?:\#.*)? # and/or a comment till the
$ # end of the line
''', errors='surrogate_or_strict'), re.X
)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,457 |
INI format inventory throws "invalid decimal literal"
|
### Summary
The following minimal inventory
```ini
gitlab-runner-01 ansible_host=gitlab-runner-01.internal.pcfe.net ansible_user=root
zimaboard-01 ansible_host=zimaboard-01.internal.pcfe.net ansible_user=ansible
```
Makes ansible complain
```text
<unknown>:1: SyntaxWarning: invalid decimal literal
<unknown>:1: SyntaxWarning: invalid decimal literal
```
But I fail to understand why.
As soon as I adjust the entries as follows (a non-numerical character right before the `.`), it stops complaining, but of course that does not help me, as the DNS names are then wrong.
```ini
gitlab-runner-01 ansible_host=gitlab-runner-01foo.internal.pcfe.net ansible_user=root
zimaboard-01 ansible_host=zimaboard-01bar.internal.pcfe.net ansible_user=ansible
```
### Issue Type
Bug Report
### Component Name
ini plugin
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.8]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/pcfe/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.11/site-packages/ansible
ansible collection location = /home/pcfe/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.11.4 (main, Jun 7 2023, 00:00:00) [GCC 13.1.1 20230511 (Red Hat 13.1.1-2)] (/usr/bin/python3)
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /etc/ansible/ansible.cfg
```
### OS / Environment
pcfe@t3600 ~ $ cat /etc/os-release
NAME="Fedora Linux"
VERSION="38 (KDE Plasma)"
ID=fedora
VERSION_ID=38
VERSION_CODENAME=""
PLATFORM_ID="platform:f38"
PRETTY_NAME="Fedora Linux 38 (KDE Plasma)"
ANSI_COLOR="0;38;2;60;110;180"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:38"
DEFAULT_HOSTNAME="fedora"
HOME_URL="https://fedoraproject.org/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/"
SUPPORT_URL="https://ask.fedoraproject.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=38
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=38
SUPPORT_END=2024-05-14
VARIANT="KDE Plasma"
VARIANT_ID=kde
pcfe@t3600 ~ $ rpm -qf $(which ansible)
ansible-core-2.14.8-1.fc38.noarch
pcfe@t3600 ~ $ rpm -V ansible-core-2.14.8-1.fc38.noarch
pcfe@t3600 ~ $
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```bash
pcfe@t3600 ~ $ pwd
/home/pcfe
pcfe@t3600 ~ $ ansible-config dump --only-changed -t all
CONFIG_FILE() = /etc/ansible/ansible.cfg
pcfe@t3600 ~ $ rpm -qf /etc/ansible/ansible.cfg
ansible-core-2.14.8-1.fc38.noarch
pcfe@t3600 ~ $ rpm -V ansible-core-2.14.8-1.fc38.noarch
pcfe@t3600 ~ $ cat ~/tmp/inventory.ini
gitlab-runner-01 ansible_host=gitlab-runner-01.internal.pcfe.net ansible_user=root
zimaboard-01 ansible_host=zimaboard-01.internal.pcfe.net ansible_user=ansible
pcfe@t3600 ~ $ ansible -i ~/tmp/inventory.ini -m ping all
<unknown>:1: SyntaxWarning: invalid decimal literal
<unknown>:1: SyntaxWarning: invalid decimal literal
gitlab-runner-01 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"ping": "pong"
}
zimaboard-01 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"ping": "pong"
}
pcfe@t3600 ~ $
```
### Expected Results
I would like to understand what makes Ansible complain about this minimal INI type inventory.
If there is nothing I did wrong in the inventory, then I would like Ansible to please not complain about invalid decimal literals.
Additionally, if there is something invalid in my INI-format inventory, it would be nice if Ansible were more specific towards the user (the affected line number, or a pointer to which part of the spec I am violating).
Not every user may be comfortable zeroing in on such inventory lines with, for example, `git bisect` (which is how I found the two offending lines in my full inventory).
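For reference, the warning is reproducible with the stdlib alone; the ini plugin passes every `key=value` value through `ast.literal_eval`, and CPython's tokenizer flags the digits-then-identifier sequence around the first `.` (a minimal sketch):

```python
import ast

try:
    # Prints '<unknown>:1: SyntaxWarning: invalid decimal literal' on
    # current CPython before literal_eval gives up and the plugin keeps
    # the value as a plain string.
    ast.literal_eval("gitlab-runner-01.internal.pcfe.net")
except (ValueError, SyntaxError):
    pass
```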
### Actual Results
```console
pcfe@t3600 ~ $ ls -l /home/pcfe/.ansible/plugins /home/pcfe/.ansible/collections
ls: cannot access '/home/pcfe/.ansible/plugins': No such file or directory
ls: cannot access '/home/pcfe/.ansible/collections': No such file or directory
pcfe@t3600 ~ $ cat ~/tmp/inventory.ini
gitlab-runner-01 ansible_host=gitlab-runner-01.internal.pcfe.net ansible_user=root
zimaboard-01 ansible_host=zimaboard-01.internal.pcfe.net ansible_user=ansible
pcfe@t3600 ~ $ ansible -i ~/tmp/inventory.ini -m ping all -vvvv
ansible [core 2.14.8]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/pcfe/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.11/site-packages/ansible
ansible collection location = /home/pcfe/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.11.4 (main, Jun 7 2023, 00:00:00) [GCC 13.1.1 20230511 (Red Hat 13.1.1-2)] (/usr/bin/python3)
jinja version = 3.0.3
libyaml = True
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /home/pcfe/tmp/inventory.ini as it did not pass its verify_file() method
script declined parsing /home/pcfe/tmp/inventory.ini as it did not pass its verify_file() method
auto declined parsing /home/pcfe/tmp/inventory.ini as it did not pass its verify_file() method
yaml declined parsing /home/pcfe/tmp/inventory.ini as it did not pass its verify_file() method
<unknown>:1: SyntaxWarning: invalid decimal literal
<unknown>:1: SyntaxWarning: invalid decimal literal
Parsed /home/pcfe/tmp/inventory.ini inventory source with ini plugin
[β¦]
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81457
|
https://github.com/ansible/ansible/pull/81707
|
2793dfa594765d402f61d80128e916e0300a38fc
|
a1a6550daf305ec9815a7b12db42c68b63426878
| 2023-08-07T15:22:39Z |
python
| 2023-09-18T14:50:50Z |
test/integration/targets/inventory_ini/inventory.ini
|
[local]
testhost ansible_connection=local ansible_become=no ansible_become_user=ansibletest1
[all:vars]
ansible_python_interpreter="{{ ansible_playbook_python }}"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,457 |
INI format inventory throws "invalid decimal literal"
|
### Summary
The following minimal inventory
```ini
gitlab-runner-01 ansible_host=gitlab-runner-01.internal.pcfe.net ansible_user=root
zimaboard-01 ansible_host=zimaboard-01.internal.pcfe.net ansible_user=ansible
```
Makes ansible complain
```text
<unknown>:1: SyntaxWarning: invalid decimal literal
<unknown>:1: SyntaxWarning: invalid decimal literal
```
But I fail to understand why.
As soon as I adjust the entries as follows (a non-numerical character right before the `.`), it stops complaining, but of course that does not help me, as the DNS names are then wrong.
```ini
gitlab-runner-01 ansible_host=gitlab-runner-01foo.internal.pcfe.net ansible_user=root
zimaboard-01 ansible_host=zimaboard-01bar.internal.pcfe.net ansible_user=ansible
```
### Issue Type
Bug Report
### Component Name
ini plugin
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.8]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/pcfe/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.11/site-packages/ansible
ansible collection location = /home/pcfe/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.11.4 (main, Jun 7 2023, 00:00:00) [GCC 13.1.1 20230511 (Red Hat 13.1.1-2)] (/usr/bin/python3)
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /etc/ansible/ansible.cfg
```
### OS / Environment
pcfe@t3600 ~ $ cat /etc/os-release
NAME="Fedora Linux"
VERSION="38 (KDE Plasma)"
ID=fedora
VERSION_ID=38
VERSION_CODENAME=""
PLATFORM_ID="platform:f38"
PRETTY_NAME="Fedora Linux 38 (KDE Plasma)"
ANSI_COLOR="0;38;2;60;110;180"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:38"
DEFAULT_HOSTNAME="fedora"
HOME_URL="https://fedoraproject.org/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/"
SUPPORT_URL="https://ask.fedoraproject.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=38
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=38
SUPPORT_END=2024-05-14
VARIANT="KDE Plasma"
VARIANT_ID=kde
pcfe@t3600 ~ $ rpm -qf $(which ansible)
ansible-core-2.14.8-1.fc38.noarch
pcfe@t3600 ~ $ rpm -V ansible-core-2.14.8-1.fc38.noarch
pcfe@t3600 ~ $
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```bash
pcfe@t3600 ~ $ pwd
/home/pcfe
pcfe@t3600 ~ $ ansible-config dump --only-changed -t all
CONFIG_FILE() = /etc/ansible/ansible.cfg
pcfe@t3600 ~ $ rpm -qf /etc/ansible/ansible.cfg
ansible-core-2.14.8-1.fc38.noarch
pcfe@t3600 ~ $ rpm -V ansible-core-2.14.8-1.fc38.noarch
pcfe@t3600 ~ $ cat ~/tmp/inventory.ini
gitlab-runner-01 ansible_host=gitlab-runner-01.internal.pcfe.net ansible_user=root
zimaboard-01 ansible_host=zimaboard-01.internal.pcfe.net ansible_user=ansible
pcfe@t3600 ~ $ ansible -i ~/tmp/inventory.ini -m ping all
<unknown>:1: SyntaxWarning: invalid decimal literal
<unknown>:1: SyntaxWarning: invalid decimal literal
gitlab-runner-01 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"ping": "pong"
}
zimaboard-01 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"ping": "pong"
}
pcfe@t3600 ~ $
```
### Expected Results
I would like to understand what makes Ansible complain about this minimal INI type inventory.
If there is nothing I did wrong in the inventory, then I would like Ansible to please not complain about invalid decimal literals.
Additionally, if there is something invalid in my INI-format inventory, it would be nice if Ansible were more specific towards the user (the affected line number, or a pointer to which part of the spec I am violating).
Not every user may be comfortable zeroing in on such inventory lines with, for example, `git bisect` (which is how I found the two offending lines in my full inventory).
### Actual Results
```console
pcfe@t3600 ~ $ ls -l /home/pcfe/.ansible/plugins /home/pcfe/.ansible/collections
ls: cannot access '/home/pcfe/.ansible/plugins': No such file or directory
ls: cannot access '/home/pcfe/.ansible/collections': No such file or directory
pcfe@t3600 ~ $ cat ~/tmp/inventory.ini
gitlab-runner-01 ansible_host=gitlab-runner-01.internal.pcfe.net ansible_user=root
zimaboard-01 ansible_host=zimaboard-01.internal.pcfe.net ansible_user=ansible
pcfe@t3600 ~ $ ansible -i ~/tmp/inventory.ini -m ping all -vvvv
ansible [core 2.14.8]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/pcfe/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.11/site-packages/ansible
ansible collection location = /home/pcfe/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.11.4 (main, Jun 7 2023, 00:00:00) [GCC 13.1.1 20230511 (Red Hat 13.1.1-2)] (/usr/bin/python3)
jinja version = 3.0.3
libyaml = True
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /home/pcfe/tmp/inventory.ini as it did not pass its verify_file() method
script declined parsing /home/pcfe/tmp/inventory.ini as it did not pass its verify_file() method
auto declined parsing /home/pcfe/tmp/inventory.ini as it did not pass its verify_file() method
yaml declined parsing /home/pcfe/tmp/inventory.ini as it did not pass its verify_file() method
<unknown>:1: SyntaxWarning: invalid decimal literal
<unknown>:1: SyntaxWarning: invalid decimal literal
Parsed /home/pcfe/tmp/inventory.ini inventory source with ini plugin
[β¦]
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81457
|
https://github.com/ansible/ansible/pull/81707
|
2793dfa594765d402f61d80128e916e0300a38fc
|
a1a6550daf305ec9815a7b12db42c68b63426878
| 2023-08-07T15:22:39Z |
python
| 2023-09-18T14:50:50Z |
test/integration/targets/inventory_ini/runme.sh
|
#!/usr/bin/env bash
set -eux
ansible-playbook -v -i inventory.ini test_ansible_become.yml
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,659 |
ANSIBLE_DEBUG causes add_host to fail
|
### Summary
Saw this happening with ansible 2.15.3
When using ANSIBLE_DEBUG=1 with an add_host task, the task fails with
ansible/utils/vars.py\", line 91, in combine_vars\n result = a | b\nTypeError: unsupported operand type(s) for |: 'VarsWithSources' and 'dict'
Same bug with ANSIBLE_DEBUG enabled: https://github.com/ansible/ansible/issues/79763
Where fix is needed: https://github.com/ansible/ansible/blob/3ec0850df9429f4b1abc78d9ba505df12d7dd1db/lib/ansible/utils/vars.py#L91
### Issue Type
Bug Report
### Component Name
add_host
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.3]
config file = None
configured module search path = ['/home/runner/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /home/runner/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.17 (main, Jun 13 2023, 16:05:09) [GCC 8.3.0] (/usr/local/bin/python3)
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ansible-config [core 2.15.3]
config file = None
configured module search path = ['/home/runner/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /home/runner/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible-config
python version = 3.9.17 (main, Jun 13 2023, 16:05:09) [GCC 8.3.0] (/usr/local/bin/python3)
jinja version = 3.0.3
libyaml = True
No config file found; using defaults
Loading collection ansible.builtin from
```
### OS / Environment
Debian GNU/Linux 10 (buster)
Linux 3.10.0-1127.el7.x86_64 x86_64
### Steps to Reproduce
```
- name: create inventory
hosts: localhost
gather_facts: no
tasks:
- add_host:
name: "{{ item }}"
groups: resource_types
with_items:
- node
- pod
- namespace
- ResourceQuota
```
### Expected Results
A successful playbook run.
### Actual Results
```console
ansible/utils/vars.py\", line 91, in combine_vars\n result = a | b\nTypeError: unsupported operand type(s) for |: 'VarsWithSources' and 'dict'
```
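The operand error is reproducible without Ansible: any `MutableMapping` that does not define `__or__` behaves the same way under the dict union operator (a minimal sketch; `Wrapper` here is only a stand-in for `VarsWithSources`):

```python
from collections.abc import MutableMapping

class Wrapper(MutableMapping):
    # Stand-in for VarsWithSources: wraps a dict without defining __or__.
    def __init__(self, data):
        self.data = data
    def __getitem__(self, key):
        return self.data[key]
    def __setitem__(self, key, value):
        self.data[key] = value
    def __delitem__(self, key):
        del self.data[key]
    def __iter__(self):
        return iter(self.data)
    def __len__(self):
        return len(self.data)

Wrapper({"a": 1}) | {"b": 2}  # TypeError: unsupported operand type(s) for |
```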
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81659
|
https://github.com/ansible/ansible/pull/81700
|
f7234968d241d7171aadb1e873a67510753f3163
|
0ea40e09d1b35bcb69ff4d9cecf3d0defa4b36e8
| 2023-09-07T15:36:13Z |
python
| 2023-09-19T15:03:58Z |
changelogs/fragments/81659_varswithsources.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,659 |
ANSIBLE_DEBUG causes add_host to fail
|
### Summary
Saw this happening with ansible 2.15.3
When using ANSIBLE_DEBUG=1 with an add_host task, the task fails with
ansible/utils/vars.py\", line 91, in combine_vars\n result = a | b\nTypeError: unsupported operand type(s) for |: 'VarsWithSources' and 'dict'
Same bug with ANSIBLE_DEBUG enabled: https://github.com/ansible/ansible/issues/79763
Where fix is needed: https://github.com/ansible/ansible/blob/3ec0850df9429f4b1abc78d9ba505df12d7dd1db/lib/ansible/utils/vars.py#L91
### Issue Type
Bug Report
### Component Name
add_host
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.3]
config file = None
configured module search path = ['/home/runner/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /home/runner/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.17 (main, Jun 13 2023, 16:05:09) [GCC 8.3.0] (/usr/local/bin/python3)
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ansible-config [core 2.15.3]
config file = None
configured module search path = ['/home/runner/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /home/runner/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible-config
python version = 3.9.17 (main, Jun 13 2023, 16:05:09) [GCC 8.3.0] (/usr/local/bin/python3)
jinja version = 3.0.3
libyaml = True
No config file found; using defaults
Loading collection ansible.builtin from
```
### OS / Environment
Debian GNU/Linux 10 (buster)
Linux 3.10.0-1127.el7.x86_64 x86_64
### Steps to Reproduce
```
- name: create inventory
hosts: localhost
gather_facts: no
tasks:
- add_host:
name: "{{ item }}"
groups: resource_types
with_items:
- node
- pod
- namespace
- ResourceQuota
```
### Expected Results
A successful playbook run.
### Actual Results
```console
ansible/utils/vars.py\", line 91, in combine_vars\n result = a | b\nTypeError: unsupported operand type(s) for |: 'VarsWithSources' and 'dict'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81659
|
https://github.com/ansible/ansible/pull/81700
|
f7234968d241d7171aadb1e873a67510753f3163
|
0ea40e09d1b35bcb69ff4d9cecf3d0defa4b36e8
| 2023-09-07T15:36:13Z |
python
| 2023-09-19T15:03:58Z |
lib/ansible/utils/vars.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import keyword
import random
import uuid
from collections.abc import MutableMapping, MutableSequence
from json import dumps
from ansible import constants as C
from ansible import context
from ansible.errors import AnsibleError, AnsibleOptionsError
from ansible.module_utils.six import string_types
from ansible.module_utils.common.text.converters import to_native, to_text
from ansible.parsing.splitter import parse_kv
ADDITIONAL_PY2_KEYWORDS = frozenset(("True", "False", "None"))
_MAXSIZE = 2 ** 32
cur_id = 0
node_mac = ("%012x" % uuid.getnode())[:12]
random_int = ("%08x" % random.randint(0, _MAXSIZE))[:8]
def get_unique_id():
global cur_id
cur_id += 1
return "-".join([
node_mac[0:8],
node_mac[8:12],
random_int[0:4],
random_int[4:8],
("%012x" % cur_id)[:12],
])
def _validate_mutable_mappings(a, b):
"""
Internal convenience function to ensure arguments are MutableMappings
This checks that all arguments are MutableMappings or raises an error
:raises AnsibleError: if one of the arguments is not a MutableMapping
"""
# If this becomes generally needed, change the signature to operate on
# a variable number of arguments instead.
if not (isinstance(a, MutableMapping) and isinstance(b, MutableMapping)):
myvars = []
for x in [a, b]:
try:
myvars.append(dumps(x))
except Exception:
myvars.append(to_native(x))
raise AnsibleError("failed to combine variables, expected dicts but got a '{0}' and a '{1}': \n{2}\n{3}".format(
a.__class__.__name__, b.__class__.__name__, myvars[0], myvars[1])
)
def combine_vars(a, b, merge=None):
"""
Return a copy of dictionaries of variables based on configured hash behavior
"""
if merge or merge is None and C.DEFAULT_HASH_BEHAVIOUR == "merge":
return merge_hash(a, b)
else:
# HASH_BEHAVIOUR == 'replace'
_validate_mutable_mappings(a, b)
result = a | b
return result
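# Note on the 'replace' branch above: `a | b` only works when the left
# operand implements __or__. Plain dicts do, but MutableMapping wrappers
# such as the debug-only VarsWithSources do not, which is the TypeError
# reported in ansible/ansible#81659 under ANSIBLE_DEBUG=1. A hedged sketch
# of one possible repair (the fix merged via #81700 may differ):
#
#   result = dict(a)
#   result.update(b)
#   return result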
def merge_hash(x, y, recursive=True, list_merge='replace'):
"""
Return a new dictionary result of the merges of y into x,
so that keys from y take precedence over keys from x.
(x and y aren't modified)
"""
if list_merge not in ('replace', 'keep', 'append', 'prepend', 'append_rp', 'prepend_rp'):
raise AnsibleError("merge_hash: 'list_merge' argument can only be equal to 'replace', 'keep', 'append', 'prepend', 'append_rp' or 'prepend_rp'")
# verify x & y are dicts
_validate_mutable_mappings(x, y)
# to speed things up: if x is empty or equal to y, return y
# (this `if` can be removed without impact on the function
# except performance)
if x == {} or x == y:
return y.copy()
# in the following we will copy elements from y to x, but
# we don't want to modify x, so we create a copy of it
x = x.copy()
# to speed things up: use dict.update if possible
# (this `if` can be removed without impact on the function
# except performance)
if not recursive and list_merge == 'replace':
x.update(y)
return x
# insert each element of y in x, overriding the one in x
# (as y has higher priority)
# we copy elements from y to x instead of x to y because
# there is a high probability that x will be the "default" dict the user
# wants to "patch" with y,
# and therefore x will have many more elements than y
for key, y_value in y.items():
# if `key` isn't in x
# update x and move on to the next element of y
if key not in x:
x[key] = y_value
continue
# from this point we know `key` is in x
x_value = x[key]
# if both x's element and y's element are dicts
# recursively "combine" them or override x's with y's element
# depending on the `recursive` argument
# and move on to the next element of y
if isinstance(x_value, MutableMapping) and isinstance(y_value, MutableMapping):
if recursive:
x[key] = merge_hash(x_value, y_value, recursive, list_merge)
else:
x[key] = y_value
continue
# if both x's element and y's element are lists
# "merge" them depending on the `list_merge` argument
# and move on to the next element of y
if isinstance(x_value, MutableSequence) and isinstance(y_value, MutableSequence):
if list_merge == 'replace':
# replace x value by y's one as it has higher priority
x[key] = y_value
elif list_merge == 'append':
x[key] = x_value + y_value
elif list_merge == 'prepend':
x[key] = y_value + x_value
elif list_merge == 'append_rp':
# append all elements from y_value (high prio) to x_value (low prio)
# and remove x_value elements that are also in y_value
# we don't remove elements from x_value or y_value that were already duplicated
# (we assume that there is a reason if such duplicates were there)
# _rp stands for "remove present"
x[key] = [z for z in x_value if z not in y_value] + y_value
elif list_merge == 'prepend_rp':
# same as 'append_rp' but y_value elements are prepended
x[key] = y_value + [z for z in x_value if z not in y_value]
# else 'keep'
# keep x's value even if y's is of higher priority
# it's done by not changing x[key]
continue
# else just override x's element with y's one
x[key] = y_value
return x
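# For illustration, the list_merge strategies behave as follows:
#   merge_hash({'a': [1, 2]}, {'a': [2, 3]}, list_merge='append')    -> {'a': [1, 2, 2, 3]}
#   merge_hash({'a': [1, 2]}, {'a': [2, 3]}, list_merge='append_rp') -> {'a': [1, 2, 3]}
#   merge_hash({'a': [1, 2]}, {'a': [2, 3]}, list_merge='keep')      -> {'a': [1, 2]}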
def load_extra_vars(loader):
if not getattr(load_extra_vars, 'extra_vars', None):
extra_vars = {}
for extra_vars_opt in context.CLIARGS.get('extra_vars', tuple()):
data = None
extra_vars_opt = to_text(extra_vars_opt, errors='surrogate_or_strict')
if extra_vars_opt is None or not extra_vars_opt:
continue
if extra_vars_opt.startswith(u"@"):
# Argument is a YAML file (JSON is a subset of YAML)
data = loader.load_from_file(extra_vars_opt[1:])
elif extra_vars_opt[0] in [u'/', u'.']:
raise AnsibleOptionsError("Please prepend extra_vars filename '%s' with '@'" % extra_vars_opt)
elif extra_vars_opt[0] in [u'[', u'{']:
# Arguments as YAML
data = loader.load(extra_vars_opt)
else:
# Arguments as Key-value
data = parse_kv(extra_vars_opt)
if isinstance(data, MutableMapping):
extra_vars = combine_vars(extra_vars, data)
else:
raise AnsibleOptionsError("Invalid extra vars data supplied. '%s' could not be made into a dictionary" % extra_vars_opt)
setattr(load_extra_vars, 'extra_vars', extra_vars)
return load_extra_vars.extra_vars
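# Each -e/--extra-vars option is accepted in three forms, all funnelled
# through the branches above, e.g.:
#   -e '@vars.yml'        -> loaded from a YAML/JSON file
#   -e '{"region": "eu"}' -> parsed as inline YAML/JSON
#   -e 'region=eu'        -> parsed as key=value pairs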
def load_options_vars(version):
if not getattr(load_options_vars, 'options_vars', None):
if version is None:
version = 'Unknown'
options_vars = {'ansible_version': version}
attrs = {'check': 'check_mode',
'diff': 'diff_mode',
'forks': 'forks',
'inventory': 'inventory_sources',
'skip_tags': 'skip_tags',
'subset': 'limit',
'tags': 'run_tags',
'verbosity': 'verbosity'}
for attr, alias in attrs.items():
opt = context.CLIARGS.get(attr)
if opt is not None:
options_vars['ansible_%s' % alias] = opt
setattr(load_options_vars, 'options_vars', options_vars)
return load_options_vars.options_vars
def _isidentifier_PY3(ident):
if not isinstance(ident, string_types):
return False
if not ident.isascii():
return False
if not ident.isidentifier():
return False
if keyword.iskeyword(ident):
return False
return True
isidentifier = _isidentifier_PY3
isidentifier.__doc__ = """Determine if string is valid identifier.
The purpose of this function is to be used to validate any variables created in
a play to be valid Python identifiers and to not conflict with Python keywords
to prevent unexpected behavior. Since Python 2 and Python 3 differ in what
a valid identifier is, this function unifies the validation so playbooks are
portable between the two. The following changes were made:
* disallow non-ascii characters (Python 3 allows for them as opposed to Python 2)
* True, False and None are reserved keywords (these are reserved keywords
on Python 3 as opposed to Python 2)
:arg ident: A text string of identifier to check. Note: It is callers
responsibility to convert ident to text if it is not already.
Originally posted at http://stackoverflow.com/a/29586366
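Examples (illustrative):
    isidentifier('my_var')  -> True
    isidentifier('class')   -> False  (Python keyword)
    isidentifier('True')    -> False  (reserved keyword on Python 3)
    isidentifier('vΓ¦r')     -> False  (non-ASCII is rejected)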
"""
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,659 |
ANSIBLE_DEBUG causes add_host to fail
|
### Summary
Saw this happening with ansible 2.15.3
When using ANSIBLE_DEBUG=1 with an add_host task, the task fails with
ansible/utils/vars.py\", line 91, in combine_vars\n result = a | b\nTypeError: unsupported operand type(s) for |: 'VarsWithSources' and 'dict'
Same bug with ANSIBLE_DEBUG enabled: https://github.com/ansible/ansible/issues/79763
Where fix is needed: https://github.com/ansible/ansible/blob/3ec0850df9429f4b1abc78d9ba505df12d7dd1db/lib/ansible/utils/vars.py#L91
### Issue Type
Bug Report
### Component Name
add_host
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.3]
config file = None
configured module search path = ['/home/runner/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /home/runner/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.17 (main, Jun 13 2023, 16:05:09) [GCC 8.3.0] (/usr/local/bin/python3)
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ansible-config [core 2.15.3]
config file = None
configured module search path = ['/home/runner/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /home/runner/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible-config
python version = 3.9.17 (main, Jun 13 2023, 16:05:09) [GCC 8.3.0] (/usr/local/bin/python3)
jinja version = 3.0.3
libyaml = True
No config file found; using defaults
Loading collection ansible.builtin from
```
### OS / Environment
Debian GNU/Linux 10 (buster)
Linux 3.10.0-1127.el7.x86_64 x86_64
### Steps to Reproduce
```
- name: create inventory
hosts: localhost
gather_facts: no
tasks:
- add_host:
name: "{{ item }}"
groups: resource_types
with_items:
- node
- pod
- namespace
- ResourceQuota
```
### Expected Results
A successful playbook run.
### Actual Results
```console
ansible/utils/vars.py\", line 91, in combine_vars\n result = a | b\nTypeError: unsupported operand type(s) for |: 'VarsWithSources' and 'dict'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81659
|
https://github.com/ansible/ansible/pull/81700
|
f7234968d241d7171aadb1e873a67510753f3163
|
0ea40e09d1b35bcb69ff4d9cecf3d0defa4b36e8
| 2023-09-07T15:36:13Z |
python
| 2023-09-19T15:03:58Z |
lib/ansible/vars/manager.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import sys
from collections import defaultdict
from collections.abc import Mapping, MutableMapping, Sequence
from hashlib import sha1
from jinja2.exceptions import UndefinedError
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleParserError, AnsibleUndefinedVariable, AnsibleFileNotFound, AnsibleAssertionError, AnsibleTemplateError
from ansible.inventory.host import Host
from ansible.inventory.helpers import sort_groups, get_group_vars
from ansible.module_utils.common.text.converters import to_text
from ansible.module_utils.six import text_type, string_types
from ansible.plugins.loader import lookup_loader
from ansible.vars.fact_cache import FactCache
from ansible.template import Templar
from ansible.utils.display import Display
from ansible.utils.listify import listify_lookup_plugin_terms
from ansible.utils.vars import combine_vars, load_extra_vars, load_options_vars
from ansible.utils.unsafe_proxy import wrap_var
from ansible.vars.clean import namespace_facts, clean_facts
from ansible.vars.plugins import get_vars_from_inventory_sources, get_vars_from_path
display = Display()
def preprocess_vars(a):
'''
Ensures that vars contained in the parameter passed in are
returned as a list of dictionaries, to ensure for instance
that vars loaded from a file conform to an expected state.
'''
if a is None:
return None
elif not isinstance(a, list):
data = [a]
else:
data = a
for item in data:
if not isinstance(item, MutableMapping):
raise AnsibleError("variable files must contain either a dictionary of variables, or a list of dictionaries. Got: %s (%s)" % (a, type(a)))
return data
class VariableManager:
_ALLOWED = frozenset(['plugins_by_group', 'groups_plugins_play', 'groups_plugins_inventory', 'groups_inventory',
'all_plugins_play', 'all_plugins_inventory', 'all_inventory'])
def __init__(self, loader=None, inventory=None, version_info=None):
self._nonpersistent_fact_cache = defaultdict(dict)
self._vars_cache = defaultdict(dict)
self._extra_vars = defaultdict(dict)
self._host_vars_files = defaultdict(dict)
self._group_vars_files = defaultdict(dict)
self._inventory = inventory
self._loader = loader
self._hostvars = None
self._omit_token = '__omit_place_holder__%s' % sha1(os.urandom(64)).hexdigest()
self._options_vars = load_options_vars(version_info)
# If the basedir is specified as the empty string then it results in cwd being used.
# This is not a safe location to load vars from.
basedir = self._options_vars.get('basedir', False)
self.safe_basedir = bool(basedir is False or basedir)
# load extra vars
self._extra_vars = load_extra_vars(loader=self._loader)
# load fact cache
try:
self._fact_cache = FactCache()
except AnsibleError as e:
# bad cache plugin is not fatal error
# fallback to a dict as in memory cache
display.warning(to_text(e))
self._fact_cache = {}
def __getstate__(self):
data = dict(
fact_cache=self._fact_cache,
np_fact_cache=self._nonpersistent_fact_cache,
vars_cache=self._vars_cache,
extra_vars=self._extra_vars,
host_vars_files=self._host_vars_files,
group_vars_files=self._group_vars_files,
omit_token=self._omit_token,
options_vars=self._options_vars,
inventory=self._inventory,
safe_basedir=self.safe_basedir,
)
return data
def __setstate__(self, data):
self._fact_cache = data.get('fact_cache', defaultdict(dict))
self._nonpersistent_fact_cache = data.get('np_fact_cache', defaultdict(dict))
self._vars_cache = data.get('vars_cache', defaultdict(dict))
self._extra_vars = data.get('extra_vars', dict())
self._host_vars_files = data.get('host_vars_files', defaultdict(dict))
self._group_vars_files = data.get('group_vars_files', defaultdict(dict))
self._omit_token = data.get('omit_token', '__omit_place_holder__%s' % sha1(os.urandom(64)).hexdigest())
self._inventory = data.get('inventory', None)
self._options_vars = data.get('options_vars', dict())
self.safe_basedir = data.get('safe_basedir', False)
self._loader = None
self._hostvars = None
@property
def extra_vars(self):
return self._extra_vars
def set_inventory(self, inventory):
self._inventory = inventory
def get_vars(self, play=None, host=None, task=None, include_hostvars=True, include_delegate_to=False, use_cache=True,
_hosts=None, _hosts_all=None, stage='task'):
'''
Returns the variables, with optional "context" given via the parameters
for the play, host, and task (which could possibly result in different
sets of variables being returned due to the additional context).
The order of precedence is:
- play->roles->get_default_vars (if there is a play context)
- group_vars_files[host] (if there is a host context)
- host_vars_files[host] (if there is a host context)
- host->get_vars (if there is a host context)
- fact_cache[host] (if there is a host context)
- play vars (if there is a play context)
- play vars_files (if there's no host context, ignore
file names that cannot be templated)
- task->get_vars (if there is a task context)
- vars_cache[host] (if there is a host context)
- extra vars
``_hosts`` and ``_hosts_all`` should be considered private args, with only internal trusted callers relying
on the functionality they provide. These arguments may be removed at a later date without a deprecation
period and without warning.
'''
display.debug("in VariableManager get_vars()")
all_vars = dict()
magic_variables = self._get_magic_variables(
play=play,
host=host,
task=task,
include_hostvars=include_hostvars,
_hosts=_hosts,
_hosts_all=_hosts_all,
)
_vars_sources = {}
def _combine_and_track(data, new_data, source):
'''
Wrapper function to update var sources dict and call combine_vars()
See notes in the VarsWithSources docstring for caveats and limitations of the source tracking
'''
if C.DEFAULT_DEBUG:
# Populate var sources dict
for key in new_data:
_vars_sources[key] = source
return combine_vars(data, new_data)
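# Every merge below is routed through this wrapper so that, when
# ANSIBLE_DEBUG is enabled, _vars_sources remembers which precedence
# layer last provided each key; get_vars() then returns the
# VarsWithSources wrapper instead of a plain dict (see the end of this
# method).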
# default for all cases
basedirs = []
if self.safe_basedir: # avoid adhoc/console loading cwd
basedirs = [self._loader.get_basedir()]
if play:
if not C.DEFAULT_PRIVATE_ROLE_VARS:
# first we compile any vars specified in defaults/main.yml
# for all roles within the specified play
for role in play.get_roles():
# role from roles or include_role+public or import_role and completed
if not role.from_include or role.public or (role.static and role._completed.get(to_text(host), False)):
all_vars = _combine_and_track(all_vars, role.get_default_vars(), "role '%s' defaults" % role.name)
if task:
# set basedirs
if C.PLAYBOOK_VARS_ROOT == 'all': # should be default
basedirs = task.get_search_path()
elif C.PLAYBOOK_VARS_ROOT in ('bottom', 'playbook_dir'): # only option in 2.4.0
basedirs = [task.get_search_path()[0]]
elif C.PLAYBOOK_VARS_ROOT != 'top':
# preserves default basedirs, only option pre 2.3
raise AnsibleError('Unknown playbook vars logic: %s' % C.PLAYBOOK_VARS_ROOT)
# if we have a task in this context, and that task has a role, make
# sure it sees its defaults above any other roles, as we previously
# (v1) made sure each task had a copy of its roles default vars
# TODO: investigate why we need play or include_role check?
if task._role is not None and (play or task.action in C._ACTION_INCLUDE_ROLE):
all_vars = _combine_and_track(all_vars, task._role.get_default_vars(dep_chain=task.get_dep_chain()),
"role '%s' defaults" % task._role.name)
if host:
# THE 'all' group and the rest of groups for a host, used below
all_group = self._inventory.groups.get('all')
host_groups = sort_groups([g for g in host.get_groups() if g.name not in ['all']])
def _get_plugin_vars(plugin, path, entities):
data = {}
try:
data = plugin.get_vars(self._loader, path, entities)
except AttributeError:
try:
for entity in entities:
if isinstance(entity, Host):
data |= plugin.get_host_vars(entity.name)
else:
data |= plugin.get_group_vars(entity.name)
except AttributeError:
if hasattr(plugin, 'run'):
raise AnsibleError("Cannot use v1 type vars plugin %s from %s" % (plugin._load_name, plugin._original_path))
else:
raise AnsibleError("Invalid vars plugin %s from %s" % (plugin._load_name, plugin._original_path))
return data
# internal functions that actually do the work
def _plugins_inventory(entities):
''' merges all entities by inventory source '''
return get_vars_from_inventory_sources(self._loader, self._inventory._sources, entities, stage)
def _plugins_play(entities):
''' merges all entities adjacent to play '''
data = {}
for path in basedirs:
data = _combine_and_track(data, get_vars_from_path(self._loader, path, entities, stage), "path '%s'" % path)
return data
# configurable functions that are sortable via config, remember to add to _ALLOWED if expanding this list
def all_inventory():
return all_group.get_vars()
def all_plugins_inventory():
return _plugins_inventory([all_group])
def all_plugins_play():
return _plugins_play([all_group])
def groups_inventory():
''' gets group vars from inventory '''
return get_group_vars(host_groups)
def groups_plugins_inventory():
''' gets plugin sources from inventory for groups '''
return _plugins_inventory(host_groups)
def groups_plugins_play():
''' gets plugin sources from play for groups '''
return _plugins_play(host_groups)
def plugins_by_groups():
'''
merges all plugin sources by group,
This should be used instead, NOT in combination with the other groups_plugins* functions
'''
data = {}
for group in host_groups:
data[group] = _combine_and_track(data.get(group, {}), _plugins_inventory(group), "inventory group_vars for '%s'" % group)
data[group] = _combine_and_track(data[group], _plugins_play(group), "playbook group_vars for '%s'" % group)
return data
# Merge groups as per precedence config
# only allow to call the functions we want exposed
for entry in C.VARIABLE_PRECEDENCE:
if entry in self._ALLOWED:
display.debug('Calling %s to load vars for %s' % (entry, host.name))
all_vars = _combine_and_track(all_vars, locals()[entry](), "group vars, precedence entry '%s'" % entry)
else:
display.warning('Ignoring unknown variable precedence entry: %s' % (entry))
# host vars, from inventory, inventory adjacent and play adjacent via plugins
all_vars = _combine_and_track(all_vars, host.get_vars(), "host vars for '%s'" % host)
all_vars = _combine_and_track(all_vars, _plugins_inventory([host]), "inventory host_vars for '%s'" % host)
all_vars = _combine_and_track(all_vars, _plugins_play([host]), "playbook host_vars for '%s'" % host)
# finally, the facts caches for this host, if it exists
# TODO: cleaning of facts should eventually become part of taskresults instead of vars
try:
facts = wrap_var(self._fact_cache.get(host.name, {}))
all_vars |= namespace_facts(facts)
# push facts to main namespace
if C.INJECT_FACTS_AS_VARS:
all_vars = _combine_and_track(all_vars, wrap_var(clean_facts(facts)), "facts")
else:
# always 'promote' ansible_local
all_vars = _combine_and_track(all_vars, wrap_var({'ansible_local': facts.get('ansible_local', {})}), "facts")
except KeyError:
pass
if play:
all_vars = _combine_and_track(all_vars, play.get_vars(), "play vars")
vars_files = play.get_vars_files()
try:
for vars_file_item in vars_files:
# create a set of temporary vars here, which incorporate the extra
# and magic vars so we can properly template the vars_files entries
# NOTE: this makes them depend on host vars/facts so things like
# ansible_facts['os_distribution'] can be used, ala include_vars.
# Consider DEPRECATING this in the future, since we have include_vars ...
temp_vars = combine_vars(all_vars, self._extra_vars)
temp_vars = combine_vars(temp_vars, magic_variables)
templar = Templar(loader=self._loader, variables=temp_vars)
# we assume each item in the list is itself a list, as we
# support "conditional includes" for vars_files, which mimics
# the with_first_found mechanism.
vars_file_list = vars_file_item
if not isinstance(vars_file_list, list):
vars_file_list = [vars_file_list]
# now we iterate through the (potential) files, and break out
# as soon as we read one from the list. If none are found, we
# raise an error, which is silently ignored at this point.
try:
for vars_file in vars_file_list:
vars_file = templar.template(vars_file)
if not (isinstance(vars_file, Sequence)):
raise AnsibleError(
"Invalid vars_files entry found: %r\n"
"vars_files entries should be either a string type or "
"a list of string types after template expansion" % vars_file
)
try:
play_search_stack = play.get_search_path()
found_file = real_file = self._loader.path_dwim_relative_stack(play_search_stack, 'vars', vars_file)
data = preprocess_vars(self._loader.load_from_file(found_file, unsafe=True, cache=False))
if data is not None:
for item in data:
all_vars = _combine_and_track(all_vars, item, "play vars_files from '%s'" % vars_file)
break
except AnsibleFileNotFound:
# we continue on loader failures
continue
except AnsibleParserError:
raise
else:
# if include_delegate_to is set to False or we don't have a host, we ignore the missing
# vars file here because we're working on a delegated host or require host vars, see NOTE above
if include_delegate_to and host:
raise AnsibleFileNotFound("vars file %s was not found" % vars_file_item)
except (UndefinedError, AnsibleUndefinedVariable):
if host is not None and self._fact_cache.get(host.name, dict()).get('module_setup') and task is not None:
raise AnsibleUndefinedVariable("an undefined variable was found when attempting to template the vars_files item '%s'"
% vars_file_item, obj=vars_file_item)
else:
# we do not have a full context here, and the missing variable could be because of that
# so just show a warning and continue
display.vvv("skipping vars_file '%s' due to an undefined variable" % vars_file_item)
continue
display.vvv("Read vars_file '%s'" % vars_file_item)
except TypeError:
raise AnsibleParserError("Error while reading vars files - please supply a list of file names. "
"Got '%s' of type %s" % (vars_files, type(vars_files)))
# By default, we now merge in all exported vars from all roles in the play,
# unless the user has disabled this via a config option
if not C.DEFAULT_PRIVATE_ROLE_VARS:
for role in play.get_roles():
if not role.from_include or role.public or (role.static and role._completed.get(to_text(host), False)):
all_vars = _combine_and_track(all_vars, role.get_vars(include_params=False, only_exports=True), "role '%s' exported vars" % role.name)
# next, we merge in the vars from the role, which will specifically
# follow the role dependency chain, and then we merge in the tasks
# vars (which will look at parent blocks/task includes)
if task:
if task._role:
all_vars = _combine_and_track(all_vars, task._role.get_vars(task.get_dep_chain(), include_params=True, only_exports=False),
"role '%s' all vars" % task._role.name)
all_vars = _combine_and_track(all_vars, task.get_vars(), "task vars")
# next, we merge in the vars cache (include vars) and nonpersistent
# facts cache (set_fact/register), in that order
if host:
# include_vars non-persistent cache
all_vars = _combine_and_track(all_vars, self._vars_cache.get(host.get_name(), dict()), "include_vars")
# fact non-persistent cache
all_vars = _combine_and_track(all_vars, self._nonpersistent_fact_cache.get(host.name, dict()), "set_fact")
# next, we merge in role params and task include params
if task:
# special case for include tasks, where the include params
# may be specified in the vars field for the task, which should
# have higher precedence than the vars/np facts above
all_vars = _combine_and_track(all_vars, task.get_include_params(), "include params")
# extra vars
all_vars = _combine_and_track(all_vars, self._extra_vars, "extra vars")
# magic variables
all_vars = _combine_and_track(all_vars, magic_variables, "magic vars")
# special case for the 'environment' magic variable, as someone
# may have set it as a variable and we don't want to stomp on it
if task:
all_vars['environment'] = task.environment
# 'vars' magic var
if task or play:
# has to be copy, otherwise recursive ref
all_vars['vars'] = all_vars.copy()
# if we have a host and task and we're delegating to another host,
# figure out the variables for that host now so we don't have to rely on host vars later
if task and host and task.delegate_to is not None and include_delegate_to:
all_vars['ansible_delegated_vars'], all_vars['_ansible_loop_cache'] = self._get_delegated_vars(play, task, all_vars)
display.debug("done with get_vars()")
if C.DEFAULT_DEBUG:
# Use VarsWithSources wrapper class to display var sources
return VarsWithSources.new_vars_with_sources(all_vars, _vars_sources)
else:
return all_vars
def _get_magic_variables(self, play, host, task, include_hostvars, _hosts=None, _hosts_all=None):
'''
Returns a dictionary of so-called "magic" variables in Ansible,
which are special variables we set internally for use.
'''
variables = {}
variables['playbook_dir'] = os.path.abspath(self._loader.get_basedir())
variables['ansible_playbook_python'] = sys.executable
variables['ansible_config_file'] = C.CONFIG_FILE
if play:
# This is a list of all role names of all dependencies for all roles for this play
dependency_role_names = list({d.get_name() for r in play.roles for d in r.get_all_dependencies()})
# This is a list of all role names of all roles for this play
play_role_names = [r.get_name() for r in play.roles]
# ansible_role_names includes all role names, dependent or directly referenced by the play
variables['ansible_role_names'] = list(set(dependency_role_names + play_role_names))
# ansible_play_role_names includes the names of all roles directly referenced by this play
# roles that are implicitly referenced via dependencies are not listed.
variables['ansible_play_role_names'] = play_role_names
# ansible_dependent_role_names includes the names of all roles that are referenced via dependencies
# dependencies that are also explicitly named as roles are included in this list
variables['ansible_dependent_role_names'] = dependency_role_names
# DEPRECATED: role_names should be deprecated in favor of ansible_role_names or ansible_play_role_names
variables['role_names'] = variables['ansible_play_role_names']
variables['ansible_play_name'] = play.get_name()
if task:
if task._role:
variables['role_name'] = task._role.get_name(include_role_fqcn=False)
variables['role_path'] = task._role._role_path
variables['role_uuid'] = text_type(task._role._uuid)
variables['ansible_collection_name'] = task._role._role_collection
variables['ansible_role_name'] = task._role.get_name()
if self._inventory is not None:
variables['groups'] = self._inventory.get_groups_dict()
if play:
templar = Templar(loader=self._loader)
if not play.finalized and templar.is_template(play.hosts):
pattern = 'all'
else:
pattern = play.hosts or 'all'
# add the list of hosts in the play, as adjusted for limit/filters
if not _hosts_all:
_hosts_all = [h.name for h in self._inventory.get_hosts(pattern=pattern, ignore_restrictions=True)]
if not _hosts:
_hosts = [h.name for h in self._inventory.get_hosts()]
variables['ansible_play_hosts_all'] = _hosts_all[:]
variables['ansible_play_hosts'] = [x for x in variables['ansible_play_hosts_all'] if x not in play._removed_hosts]
variables['ansible_play_batch'] = [x for x in _hosts if x not in play._removed_hosts]
# DEPRECATED: play_hosts should be deprecated in favor of ansible_play_batch,
# however this would take work in the templating engine, so for now we'll add both
variables['play_hosts'] = variables['ansible_play_batch']
# the 'omit' value allows params to be left out if the variable they are based on is undefined
variables['omit'] = self._omit_token
# Set options vars
for option, option_value in self._options_vars.items():
variables[option] = option_value
if self._hostvars is not None and include_hostvars:
variables['hostvars'] = self._hostvars
return variables
def get_delegated_vars_and_hostname(self, templar, task, variables):
"""Get the delegated_vars for an individual task invocation, which may be in the context
of an individual loop iteration.
Not used directly by VariableManager, but used primarily within TaskExecutor
"""
delegated_vars = {}
delegated_host_name = None
if task.delegate_to:
delegated_host_name = templar.template(task.delegate_to, fail_on_undefined=False)
delegated_host = self._inventory.get_host(delegated_host_name)
if delegated_host is None:
for h in self._inventory.get_hosts(ignore_limits=True, ignore_restrictions=True):
# check if the address matches, or if both the delegated_to host
# and the current host are in the list of localhost aliases
if h.address == delegated_host_name:
delegated_host = h
break
else:
delegated_host = Host(name=delegated_host_name)
delegated_vars['ansible_delegated_vars'] = {
delegated_host_name: self.get_vars(
play=task.get_play(),
host=delegated_host,
task=task,
include_delegate_to=False,
include_hostvars=True,
)
}
delegated_vars['ansible_delegated_vars'][delegated_host_name]['inventory_hostname'] = variables.get('inventory_hostname')
return delegated_vars, delegated_host_name
def _get_delegated_vars(self, play, task, existing_variables):
# This method has a lot of code copied from ``TaskExecutor._get_loop_items``
# if this is failing, and ``TaskExecutor._get_loop_items`` is not
# then more will have to be copied here.
# TODO: dedupe code here and with ``TaskExecutor._get_loop_items``
# this may be possible once we move pre-processing pre fork
if not hasattr(task, 'loop'):
# This "task" is not a Task, so we need to skip it
return {}, None
display.deprecated(
'Getting delegated variables via get_vars is no longer used, and is handled within the TaskExecutor.',
version='2.18',
)
# we unfortunately need to template the delegate_to field here,
# as we're fetching vars before post_validate has been called on
# the task that has been passed in
vars_copy = existing_variables.copy()
# get search path for this task to pass to lookup plugins
vars_copy['ansible_search_path'] = task.get_search_path()
# ensure basedir is always in (dwim already searches here but we need to display it)
if self._loader.get_basedir() not in vars_copy['ansible_search_path']:
vars_copy['ansible_search_path'].append(self._loader.get_basedir())
templar = Templar(loader=self._loader, variables=vars_copy)
items = []
has_loop = True
if task.loop_with is not None:
if task.loop_with in lookup_loader:
fail = True
if task.loop_with == 'first_found':
# first_found loops are special. If the item is undefined then we want to fall through to the next
fail = False
try:
loop_terms = listify_lookup_plugin_terms(terms=task.loop, templar=templar, fail_on_undefined=fail, convert_bare=False)
if not fail:
loop_terms = [t for t in loop_terms if not templar.is_template(t)]
mylookup = lookup_loader.get(task.loop_with, loader=self._loader, templar=templar)
# give lookup task 'context' for subdir (mostly needed for first_found)
for subdir in ['template', 'var', 'file']: # TODO: move this to constants?
if subdir in task.action:
break
setattr(mylookup, '_subdir', subdir + 's')
items = wrap_var(mylookup.run(terms=loop_terms, variables=vars_copy))
except AnsibleTemplateError:
# This task will be skipped later due to this, so we just set up
# a dummy array for the later code so it doesn't fail
items = [None]
else:
raise AnsibleError("Failed to find the lookup named '%s' in the available lookup plugins" % task.loop_with)
elif task.loop is not None:
try:
items = templar.template(task.loop)
except AnsibleTemplateError:
# This task will be skipped later due to this, so we just set up
# a dummy array for the later code so it doesn't fail
items = [None]
else:
has_loop = False
items = [None]
# since the host can change per loop item, we keep a dict per resolved host name
delegated_host_vars = dict()
item_var = getattr(task.loop_control, 'loop_var', 'item')
cache_items = False
for item in items:
# update the variables with the item value for templating, in case we need it
if item is not None:
vars_copy[item_var] = item
templar.available_variables = vars_copy
delegated_host_name = templar.template(task.delegate_to, fail_on_undefined=False)
if delegated_host_name != task.delegate_to:
cache_items = True
if delegated_host_name is None:
raise AnsibleError(message="Undefined delegate_to host for task:", obj=task._ds)
if not isinstance(delegated_host_name, string_types):
raise AnsibleError(message="the field 'delegate_to' has an invalid type (%s), and could not be"
" converted to a string type." % type(delegated_host_name), obj=task._ds)
if delegated_host_name in delegated_host_vars:
# no need to repeat ourselves, as the delegate_to value
# does not appear to be tied to the loop item variable
continue
# now try to find the delegated-to host in inventory, or failing that,
# create a new host on the fly so we can fetch variables for it
delegated_host = None
if self._inventory is not None:
delegated_host = self._inventory.get_host(delegated_host_name)
# try looking it up based on the address field, and finally
# fall back to creating a host on the fly to use for the var lookup
if delegated_host is None:
for h in self._inventory.get_hosts(ignore_limits=True, ignore_restrictions=True):
# check if the address matches, or if both the delegated_to host
# and the current host are in the list of localhost aliases
if h.address == delegated_host_name:
delegated_host = h
break
else:
delegated_host = Host(name=delegated_host_name)
else:
delegated_host = Host(name=delegated_host_name)
# now we go fetch the vars for the delegated-to host and save them in our
# master dictionary of variables to be used later in the TaskExecutor/PlayContext
delegated_host_vars[delegated_host_name] = self.get_vars(
play=play,
host=delegated_host,
task=task,
include_delegate_to=False,
include_hostvars=True,
)
delegated_host_vars[delegated_host_name]['inventory_hostname'] = vars_copy.get('inventory_hostname')
_ansible_loop_cache = None
if has_loop and cache_items:
# delegate_to templating produced a change, so we will cache the templated items
# in a special private hostvar
# this ensures that delegate_to+loop doesn't produce different results than TaskExecutor
# which may reprocess the loop
_ansible_loop_cache = items
return delegated_host_vars, _ansible_loop_cache
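# Illustrative note (names below are made up): with delegate_to: "{{ item }}"
# and loop: [hostA, hostB], each item templates to a different delegated host,
# which is why the loop items are cached above whenever templating changed
# the delegate_to value.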
def clear_facts(self, hostname):
'''
Clears the facts for a host
'''
self._fact_cache.pop(hostname, None)
def set_host_facts(self, host, facts):
'''
Sets or updates the given facts for a host in the fact cache.
'''
if not isinstance(facts, Mapping):
raise AnsibleAssertionError("the type of 'facts' to set for host_facts should be a Mapping but is a %s" % type(facts))
try:
host_cache = self._fact_cache[host]
except KeyError:
# We get to set this as new
host_cache = facts
else:
if not isinstance(host_cache, MutableMapping):
raise TypeError('The object retrieved for {0} must be a MutableMapping but was'
' a {1}'.format(host, type(host_cache)))
# Update the existing facts
host_cache |= facts
# Save the facts back to the backing store
self._fact_cache[host] = host_cache
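# Note on the ``|=`` update above: existing keys are kept unless the incoming
# facts redefine them, e.g. {'a': 1} | {'a': 2, 'b': 3} == {'a': 2, 'b': 3}.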
def set_nonpersistent_facts(self, host, facts):
'''
Sets or updates the given facts for a host in the fact cache.
'''
if not isinstance(facts, Mapping):
raise AnsibleAssertionError("the type of 'facts' to set for nonpersistent_facts should be a Mapping but is a %s" % type(facts))
try:
self._nonpersistent_fact_cache[host] |= facts
except KeyError:
self._nonpersistent_fact_cache[host] = facts
def set_host_variable(self, host, varname, value):
'''
Sets a value in the vars_cache for a host.
'''
if host not in self._vars_cache:
self._vars_cache[host] = dict()
if varname in self._vars_cache[host] and isinstance(self._vars_cache[host][varname], MutableMapping) and isinstance(value, MutableMapping):
self._vars_cache[host] = combine_vars(self._vars_cache[host], {varname: value})
else:
self._vars_cache[host][varname] = value
class VarsWithSources(MutableMapping):
'''
Dict-like class for vars that also provides source information for each var
This class can only store the source for top-level vars. It does no tracking
on its own; it just shows a debug message with the source information it was
given when a particular var is accessed.
'''
def __init__(self, *args, **kwargs):
''' Dict-compatible constructor '''
self.data = dict(*args, **kwargs)
self.sources = {}
@classmethod
def new_vars_with_sources(cls, data, sources):
''' Alternate constructor method to instantiate class with sources '''
v = cls(data)
v.sources = sources
return v
def get_source(self, key):
return self.sources.get(key, None)
def __getitem__(self, key):
val = self.data[key]
# See notes in the VarsWithSources docstring for caveats and limitations of the source tracking
display.debug("variable '%s' from source: %s" % (key, self.sources.get(key, "unknown")))
return val
def __setitem__(self, key, value):
self.data[key] = value
def __delitem__(self, key):
del self.data[key]
def __iter__(self):
return iter(self.data)
def __len__(self):
return len(self.data)
# Prevent duplicate debug messages by defining our own __contains__ pointing at the underlying dict
def __contains__(self, key):
return self.data.__contains__(key)
def copy(self):
return VarsWithSources.new_vars_with_sources(self.data.copy(), self.sources.copy())
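# --- A minimal usage sketch (hedged; the names and sources below are made up). ---
# VarsWithSources behaves like a plain dict while remembering, per top-level
# key, which file or scope supplied the value:
#
#   v = VarsWithSources.new_vars_with_sources(
#       {'http_port': 8080},
#       {'http_port': 'group_vars/webservers'},
#   )
#   v['http_port']               # -> 8080, and emits a debug message
#   v.get_source('http_port')    # -> 'group_vars/webservers'
#
# Pitfall (see issue 81659 below): plain dict union does not accept this
# wrapper, so {'a': 1} | v raises TypeError unless v is converted first,
# e.g. {'a': 1} | dict(v).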
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,659 |
ANSIBLE_DEBUG causes add_host to fail
|
### Summary
Seen with ansible-core 2.15.3.
When using ANSIBLE_DEBUG=1 with an add_host task, the task fails with:
ansible/utils/vars.py", line 91, in combine_vars
    result = a | b
TypeError: unsupported operand type(s) for |: 'VarsWithSources' and 'dict'
The same bug with ANSIBLE_DEBUG enabled was reported earlier: https://github.com/ansible/ansible/issues/79763
Where the fix is needed: https://github.com/ansible/ansible/blob/3ec0850df9429f4b1abc78d9ba505df12d7dd1db/lib/ansible/utils/vars.py#L91
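A minimal sketch of the failure, and one possible direction for a fix (illustrative only; the actual change landed in the linked PR):
```python
# Hedged reproduction sketch: the debug path wraps hostvars in VarsWithSources,
# which is a MutableMapping but not a dict, so PEP 584 dict union in
# combine_vars blows up.
from ansible.vars.manager import VarsWithSources

a = VarsWithSources.new_vars_with_sources({'x': 1}, {'x': 'inventory'})
b = {'y': 2}
a | b  # TypeError: unsupported operand type(s) for |: 'VarsWithSources' and 'dict'

# One possible direction is to teach the wrapper dict union itself, e.g.:
#     def __or__(self, other):
#         return self.data | dict(other)
```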
### Issue Type
Bug Report
### Component Name
add_host
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.3]
config file = None
configured module search path = ['/home/runner/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /home/runner/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.17 (main, Jun 13 2023, 16:05:09) [GCC 8.3.0] (/usr/local/bin/python3)
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ansible-config [core 2.15.3]
config file = None
configured module search path = ['/home/runner/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /home/runner/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible-config
python version = 3.9.17 (main, Jun 13 2023, 16:05:09) [GCC 8.3.0] (/usr/local/bin/python3)
jinja version = 3.0.3
libyaml = True
No config file found; using defaults
Loading collection ansible.builtin from
```
### OS / Environment
Debian GNU/Linux 10 (buster)
Linux 3.10.0-1127.el7.x86_64 x86_64
### Steps to Reproduce
```
- name: create inventory
hosts: localhost
gather_facts: no
tasks:
- add_host:
name: "{{ item }}"
groups: resource_types
with_items:
- node
- pod
- namespace
- ResourceQuota
```
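The failure only reproduces with debug mode on; assuming the playbook above is saved as repro.yml (the filename is illustrative):
```console
$ ANSIBLE_DEBUG=1 ansible-playbook repro.yml
```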
### Expected Results
Successful run of the playbook.
### Actual Results
```console
ansible/utils/vars.py", line 91, in combine_vars
    result = a | b
TypeError: unsupported operand type(s) for |: 'VarsWithSources' and 'dict'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81659
|
https://github.com/ansible/ansible/pull/81700
|
f7234968d241d7171aadb1e873a67510753f3163
|
0ea40e09d1b35bcb69ff4d9cecf3d0defa4b36e8
| 2023-09-07T15:36:13Z |
python
| 2023-09-19T15:03:58Z |
test/units/utils/test_vars.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
# (c) 2015, Toshio Kuraotmi <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from collections import defaultdict
from unittest import mock
from units.compat import unittest
from ansible.errors import AnsibleError
from ansible.utils.vars import combine_vars, merge_hash
class TestVariableUtils(unittest.TestCase):
def setUp(self):
pass
def tearDown(self):
pass
combine_vars_merge_data = (
dict(
a=dict(a=1),
b=dict(b=2),
result=dict(a=1, b=2),
),
dict(
a=dict(a=1, c=dict(foo='bar')),
b=dict(b=2, c=dict(baz='bam')),
result=dict(a=1, b=2, c=dict(foo='bar', baz='bam'))
),
dict(
a=defaultdict(a=1, c=defaultdict(foo='bar')),
b=dict(b=2, c=dict(baz='bam')),
result=defaultdict(a=1, b=2, c=defaultdict(foo='bar', baz='bam'))
),
)
combine_vars_replace_data = (
dict(
a=dict(a=1),
b=dict(b=2),
result=dict(a=1, b=2)
),
dict(
a=dict(a=1, c=dict(foo='bar')),
b=dict(b=2, c=dict(baz='bam')),
result=dict(a=1, b=2, c=dict(baz='bam'))
),
dict(
a=defaultdict(a=1, c=dict(foo='bar')),
b=dict(b=2, c=defaultdict(baz='bam')),
result=defaultdict(a=1, b=2, c=defaultdict(baz='bam'))
),
)
def test_combine_vars_improper_args(self):
with mock.patch('ansible.constants.DEFAULT_HASH_BEHAVIOUR', 'replace'):
with self.assertRaises(AnsibleError):
combine_vars([1, 2, 3], dict(a=1))
with self.assertRaises(AnsibleError):
combine_vars(dict(a=1), [1, 2, 3])
with mock.patch('ansible.constants.DEFAULT_HASH_BEHAVIOUR', 'merge'):
with self.assertRaises(AnsibleError):
combine_vars([1, 2, 3], dict(a=1))
with self.assertRaises(AnsibleError):
combine_vars(dict(a=1), [1, 2, 3])
def test_combine_vars_replace(self):
with mock.patch('ansible.constants.DEFAULT_HASH_BEHAVIOUR', 'replace'):
for test in self.combine_vars_replace_data:
self.assertEqual(combine_vars(test['a'], test['b']), test['result'])
def test_combine_vars_merge(self):
with mock.patch('ansible.constants.DEFAULT_HASH_BEHAVIOUR', 'merge'):
for test in self.combine_vars_merge_data:
self.assertEqual(combine_vars(test['a'], test['b']), test['result'])
merge_hash_data = {
"low_prio": {
"a": {
"a'": {
"x": "low_value",
"y": "low_value",
"list": ["low_value"]
}
},
"b": [1, 1, 2, 3]
},
"high_prio": {
"a": {
"a'": {
"y": "high_value",
"z": "high_value",
"list": ["high_value"]
}
},
"b": [3, 4, 4, {"5": "value"}]
}
}
def test_merge_hash_simple(self):
for test in self.combine_vars_merge_data:
self.assertEqual(merge_hash(test['a'], test['b']), test['result'])
low = self.merge_hash_data['low_prio']
high = self.merge_hash_data['high_prio']
expected = {
"a": {
"a'": {
"x": "low_value",
"y": "high_value",
"z": "high_value",
"list": ["high_value"]
}
},
"b": high['b']
}
self.assertEqual(merge_hash(low, high), expected)
def test_merge_hash_non_recursive_and_list_replace(self):
low = self.merge_hash_data['low_prio']
high = self.merge_hash_data['high_prio']
expected = high
self.assertEqual(merge_hash(low, high, False, 'replace'), expected)
def test_merge_hash_non_recursive_and_list_keep(self):
low = self.merge_hash_data['low_prio']
high = self.merge_hash_data['high_prio']
expected = {
"a": high['a'],
"b": low['b']
}
self.assertEqual(merge_hash(low, high, False, 'keep'), expected)
def test_merge_hash_non_recursive_and_list_append(self):
low = self.merge_hash_data['low_prio']
high = self.merge_hash_data['high_prio']
expected = {
"a": high['a'],
"b": low['b'] + high['b']
}
self.assertEqual(merge_hash(low, high, False, 'append'), expected)
def test_merge_hash_non_recursive_and_list_prepend(self):
low = self.merge_hash_data['low_prio']
high = self.merge_hash_data['high_prio']
expected = {
"a": high['a'],
"b": high['b'] + low['b']
}
self.assertEqual(merge_hash(low, high, False, 'prepend'), expected)
def test_merge_hash_non_recursive_and_list_append_rp(self):
low = self.merge_hash_data['low_prio']
high = self.merge_hash_data['high_prio']
expected = {
"a": high['a'],
"b": [1, 1, 2] + high['b']
}
self.assertEqual(merge_hash(low, high, False, 'append_rp'), expected)
def test_merge_hash_non_recursive_and_list_prepend_rp(self):
low = self.merge_hash_data['low_prio']
high = self.merge_hash_data['high_prio']
expected = {
"a": high['a'],
"b": high['b'] + [1, 1, 2]
}
self.assertEqual(merge_hash(low, high, False, 'prepend_rp'), expected)
def test_merge_hash_recursive_and_list_replace(self):
low = self.merge_hash_data['low_prio']
high = self.merge_hash_data['high_prio']
expected = {
"a": {
"a'": {
"x": "low_value",
"y": "high_value",
"z": "high_value",
"list": ["high_value"]
}
},
"b": high['b']
}
self.assertEqual(merge_hash(low, high, True, 'replace'), expected)
def test_merge_hash_recursive_and_list_keep(self):
low = self.merge_hash_data['low_prio']
high = self.merge_hash_data['high_prio']
expected = {
"a": {
"a'": {
"x": "low_value",
"y": "high_value",
"z": "high_value",
"list": ["low_value"]
}
},
"b": low['b']
}
self.assertEqual(merge_hash(low, high, True, 'keep'), expected)
def test_merge_hash_recursive_and_list_append(self):
low = self.merge_hash_data['low_prio']
high = self.merge_hash_data['high_prio']
expected = {
"a": {
"a'": {
"x": "low_value",
"y": "high_value",
"z": "high_value",
"list": ["low_value", "high_value"]
}
},
"b": low['b'] + high['b']
}
self.assertEqual(merge_hash(low, high, True, 'append'), expected)
def test_merge_hash_recursive_and_list_prepend(self):
low = self.merge_hash_data['low_prio']
high = self.merge_hash_data['high_prio']
expected = {
"a": {
"a'": {
"x": "low_value",
"y": "high_value",
"z": "high_value",
"list": ["high_value", "low_value"]
}
},
"b": high['b'] + low['b']
}
self.assertEqual(merge_hash(low, high, True, 'prepend'), expected)
def test_merge_hash_recursive_and_list_append_rp(self):
low = self.merge_hash_data['low_prio']
high = self.merge_hash_data['high_prio']
expected = {
"a": {
"a'": {
"x": "low_value",
"y": "high_value",
"z": "high_value",
"list": ["low_value", "high_value"]
}
},
"b": [1, 1, 2] + high['b']
}
self.assertEqual(merge_hash(low, high, True, 'append_rp'), expected)
def test_merge_hash_recursive_and_list_prepend_rp(self):
low = self.merge_hash_data['low_prio']
high = self.merge_hash_data['high_prio']
expected = {
"a": {
"a'": {
"x": "low_value",
"y": "high_value",
"z": "high_value",
"list": ["high_value", "low_value"]
}
},
"b": high['b'] + [1, 1, 2]
}
self.assertEqual(merge_hash(low, high, True, 'prepend_rp'), expected)
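# --- Quick reference (hedged sketch mirroring the fixtures above) ---
# With DEFAULT_HASH_BEHAVIOUR='replace', combine_vars swaps top-level keys wholesale:
#     combine_vars({'c': {'foo': 'bar'}}, {'c': {'baz': 'bam'}})
#     -> {'c': {'baz': 'bam'}}
# With DEFAULT_HASH_BEHAVIOUR='merge', nested dicts are merged recursively:
#     combine_vars({'c': {'foo': 'bar'}}, {'c': {'baz': 'bam'}})
#     -> {'c': {'foo': 'bar', 'baz': 'bam'}}
# For direct merge_hash use, 'append_rp' drops low-priority duplicates before
# appending the high-priority list:
#     merge_hash({'b': [1, 1, 2, 3]}, {'b': [3, 4]}, False, 'append_rp')
#     -> {'b': [1, 1, 2, 3, 4]}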
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,716 |
Remove deprecated functionality from ansible-doc for 2.17
|
### Summary
ansible-doc contains deprecated calls to be removed for 2.17
### Issue Type
Feature Idea
### Component Name
`lib/ansible/cli/doc.py`
|
https://github.com/ansible/ansible/issues/81716
|
https://github.com/ansible/ansible/pull/81729
|
3ec7a6e0db53b254fde26abc190fcb2f4af1ce88
|
4b7705b07a64408515d0e164b62d4a8f814918db
| 2023-09-18T21:01:45Z |
python
| 2023-09-19T23:48:33Z |
changelogs/fragments/81716-ansible-doc.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,716 |
Remove deprecated functionality from ansible-doc for 2.17
|
### Summary
ansible-doc contains deprecated calls to be removed for 2.17
### Issue Type
Feature Idea
### Component Name
`lib/ansible/cli/doc.py`
|
https://github.com/ansible/ansible/issues/81716
|
https://github.com/ansible/ansible/pull/81729
|
3ec7a6e0db53b254fde26abc190fcb2f4af1ce88
|
4b7705b07a64408515d0e164b62d4a8f814918db
| 2023-09-18T21:01:45Z |
python
| 2023-09-19T23:48:33Z |
lib/ansible/cli/doc.py
|
#!/usr/bin/env python
# Copyright: (c) 2014, James Tanner <[email protected]>
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# PYTHON_ARGCOMPLETE_OK
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
# ansible.cli needs to be imported first, to ensure the source bin/* scripts run that code first
from ansible.cli import CLI
import pkgutil
import os
import os.path
import re
import textwrap
import traceback
import ansible.plugins.loader as plugin_loader
from pathlib import Path
from ansible import constants as C
from ansible import context
from ansible.cli.arguments import option_helpers as opt_help
from ansible.collections.list import list_collection_dirs
from ansible.errors import AnsibleError, AnsibleOptionsError, AnsibleParserError, AnsiblePluginNotFound
from ansible.module_utils.common.text.converters import to_native, to_text
from ansible.module_utils.common.collections import is_sequence
from ansible.module_utils.common.json import json_dump
from ansible.module_utils.common.yaml import yaml_dump
from ansible.module_utils.compat import importlib
from ansible.module_utils.six import string_types
from ansible.parsing.plugin_docs import read_docstub
from ansible.parsing.utils.yaml import from_yaml
from ansible.parsing.yaml.dumper import AnsibleDumper
from ansible.plugins.list import list_plugins
from ansible.plugins.loader import action_loader, fragment_loader
from ansible.utils.collection_loader import AnsibleCollectionConfig, AnsibleCollectionRef
from ansible.utils.collection_loader._collection_finder import _get_collection_name_from_path
from ansible.utils.display import Display
from ansible.utils.plugin_docs import get_plugin_docs, get_docstring, get_versioned_doclink
display = Display()
TARGET_OPTIONS = C.DOCUMENTABLE_PLUGINS + ('role', 'keyword',)
PB_OBJECTS = ['Play', 'Role', 'Block', 'Task']
PB_LOADED = {}
SNIPPETS = ['inventory', 'lookup', 'module']
def add_collection_plugins(plugin_list, plugin_type, coll_filter=None):
display.deprecated("add_collection_plugins method, use ansible.plugins.list functions instead.", version='2.17')
plugin_list.update(list_plugins(plugin_type, coll_filter))
def jdump(text):
try:
display.display(json_dump(text))
except TypeError as e:
display.vvv(traceback.format_exc())
raise AnsibleError('We could not convert all the documentation into JSON as there was a conversion issue: %s' % to_native(e))
class RoleMixin(object):
"""A mixin containing all methods relevant to role argument specification functionality.
Note: The methods for actual display of role data are not present here.
"""
# Potential locations of the role arg spec file in the meta subdir, with main.yml
# having the lowest priority.
ROLE_ARGSPEC_FILES = ['argument_specs' + e for e in C.YAML_FILENAME_EXTENSIONS] + ["main" + e for e in C.YAML_FILENAME_EXTENSIONS]
def _load_argspec(self, role_name, collection_path=None, role_path=None):
"""Load the role argument spec data from the source file.
:param str role_name: The name of the role for which we want the argspec data.
:param str collection_path: Path to the collection containing the role. This
will be None for standard roles.
:param str role_path: Path to the standard role. This will be None for
collection roles.
We support two files containing the role arg spec data: either meta/main.yml
or meta/argument_specs.yml. The argument_specs.yml file will take precedence
over the meta/main.yml file, if it exists. Data is NOT combined between the
two files.
:returns: A dict of all data underneath the ``argument_specs`` top-level YAML
key in the argspec data file. Empty dict is returned if there is no data.
"""
if collection_path:
meta_path = os.path.join(collection_path, 'roles', role_name, 'meta')
elif role_path:
meta_path = os.path.join(role_path, 'meta')
else:
raise AnsibleError("A path is required to load argument specs for role '%s'" % role_name)
path = None
# Check all potential spec files
for specfile in self.ROLE_ARGSPEC_FILES:
full_path = os.path.join(meta_path, specfile)
if os.path.exists(full_path):
path = full_path
break
if path is None:
return {}
try:
with open(path, 'r') as f:
data = from_yaml(f.read(), file_name=path)
if data is None:
data = {}
return data.get('argument_specs', {})
except (IOError, OSError) as e:
raise AnsibleParserError("An error occurred while trying to read the file '%s': %s" % (path, to_native(e)), orig_exc=e)
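# For reference, a minimal meta/argument_specs.yml that _load_argspec above
# would parse (a hedged sketch; option names are illustrative only):
#
#   argument_specs:
#     main:
#       short_description: Main entry point for the role
#       options:
#         myapp_port:
#           type: int
#           default: 8080
#           description: Port the service listens on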
def _find_all_normal_roles(self, role_paths, name_filters=None):
"""Find all non-collection roles that have an argument spec file.
Note that argument specs do not actually need to exist within the spec file.
:param role_paths: A tuple of one or more role paths. When a role with the same name
is found in multiple paths, only the first-found role is returned.
:param name_filters: A tuple of one or more role names used to filter the results.
:returns: A set of tuples consisting of: role name, full role path
"""
found = set()
found_names = set()
for path in role_paths:
if not os.path.isdir(path):
continue
# Check each subdir for an argument spec file
for entry in os.listdir(path):
role_path = os.path.join(path, entry)
# Check all potential spec files
for specfile in self.ROLE_ARGSPEC_FILES:
full_path = os.path.join(role_path, 'meta', specfile)
if os.path.exists(full_path):
if name_filters is None or entry in name_filters:
if entry not in found_names:
found.add((entry, role_path))
found_names.add(entry)
# select first-found
break
return found
def _find_all_collection_roles(self, name_filters=None, collection_filter=None):
"""Find all collection roles with an argument spec file.
Note that argument specs do not actually need to exist within the spec file.
:param name_filters: A tuple of one or more role names used to filter the results. These
might be fully qualified with the collection name (e.g., community.general.roleA)
or not (e.g., roleA).
:param collection_filter: A list of strings containing the FQCN of a collection which will
be used to limit results. This filter will take precedence over the name_filters.
:returns: A set of tuples consisting of: role name, collection name, collection path
"""
found = set()
b_colldirs = list_collection_dirs(coll_filter=collection_filter)
for b_path in b_colldirs:
path = to_text(b_path, errors='surrogate_or_strict')
collname = _get_collection_name_from_path(b_path)
roles_dir = os.path.join(path, 'roles')
if os.path.exists(roles_dir):
for entry in os.listdir(roles_dir):
# Check all potential spec files
for specfile in self.ROLE_ARGSPEC_FILES:
full_path = os.path.join(roles_dir, entry, 'meta', specfile)
if os.path.exists(full_path):
if name_filters is None:
found.add((entry, collname, path))
else:
# Name filters might contain a collection FQCN or not.
for fqcn in name_filters:
if len(fqcn.split('.')) == 3:
(ns, col, role) = fqcn.split('.')
if '.'.join([ns, col]) == collname and entry == role:
found.add((entry, collname, path))
elif fqcn == entry:
found.add((entry, collname, path))
break
return found
def _build_summary(self, role, collection, argspec):
"""Build a summary dict for a role.
Returns a simplified role arg spec containing only the role entry points and their
short descriptions, and the role collection name (if applicable).
:param role: The simple role name.
:param collection: The collection containing the role (None or empty string if N/A).
:param argspec: The complete role argspec data dict.
:returns: A tuple with the FQCN role name and a summary dict.
"""
if collection:
fqcn = '.'.join([collection, role])
else:
fqcn = role
summary = {}
summary['collection'] = collection
summary['entry_points'] = {}
for ep in argspec.keys():
entry_spec = argspec[ep] or {}
summary['entry_points'][ep] = entry_spec.get('short_description', '')
return (fqcn, summary)
def _build_doc(self, role, path, collection, argspec, entry_point):
if collection:
fqcn = '.'.join([collection, role])
else:
fqcn = role
doc = {}
doc['path'] = path
doc['collection'] = collection
doc['entry_points'] = {}
for ep in argspec.keys():
if entry_point is None or ep == entry_point:
entry_spec = argspec[ep] or {}
doc['entry_points'][ep] = entry_spec
# If we didn't add any entry points (b/c of filtering), ignore this entry.
if len(doc['entry_points'].keys()) == 0:
doc = None
return (fqcn, doc)
def _create_role_list(self, fail_on_errors=True):
"""Return a dict describing the listing of all roles with arg specs.
:param role_paths: A tuple of one or more role paths.
:returns: A dict indexed by role name, with 'collection' and 'entry_points' keys per role.
Example return:
results = {
'roleA': {
'collection': '',
'entry_points': {
'main': 'Short description for main'
}
},
'a.b.c.roleB': {
'collection': 'a.b.c',
'entry_points': {
'main': 'Short description for main',
'alternate': 'Short description for alternate entry point'
}
},
'x.y.z.roleB': {
'collection': 'x.y.z',
'entry_points': {
'main': 'Short description for main',
}
},
}
"""
roles_path = self._get_roles_path()
collection_filter = self._get_collection_filter()
if not collection_filter:
roles = self._find_all_normal_roles(roles_path)
else:
roles = []
collroles = self._find_all_collection_roles(collection_filter=collection_filter)
result = {}
for role, role_path in roles:
try:
argspec = self._load_argspec(role, role_path=role_path)
fqcn, summary = self._build_summary(role, '', argspec)
result[fqcn] = summary
except Exception as e:
if fail_on_errors:
raise
result[role] = {
'error': 'Error while loading role argument spec: %s' % to_native(e),
}
for role, collection, collection_path in collroles:
try:
argspec = self._load_argspec(role, collection_path=collection_path)
fqcn, summary = self._build_summary(role, collection, argspec)
result[fqcn] = summary
except Exception as e:
if fail_on_errors:
raise
result['%s.%s' % (collection, role)] = {
'error': 'Error while loading role argument spec: %s' % to_native(e),
}
return result
def _create_role_doc(self, role_names, entry_point=None, fail_on_errors=True):
"""
:param role_names: A tuple of one or more role names.
:param role_paths: A tuple of one or more role paths.
:param entry_point: A role entry point name for filtering.
:param fail_on_errors: When set to False, include errors in the JSON output instead of raising errors
:returns: A dict indexed by role name, with 'collection', 'entry_points', and 'path' keys per role.
"""
roles_path = self._get_roles_path()
roles = self._find_all_normal_roles(roles_path, name_filters=role_names)
collroles = self._find_all_collection_roles(name_filters=role_names)
result = {}
for role, role_path in roles:
try:
argspec = self._load_argspec(role, role_path=role_path)
fqcn, doc = self._build_doc(role, role_path, '', argspec, entry_point)
if doc:
result[fqcn] = doc
except Exception as e: # pylint:disable=broad-except
result[role] = {
'error': 'Error while processing role: %s' % to_native(e),
}
for role, collection, collection_path in collroles:
try:
argspec = self._load_argspec(role, collection_path=collection_path)
fqcn, doc = self._build_doc(role, collection_path, collection, argspec, entry_point)
if doc:
result[fqcn] = doc
except Exception as e: # pylint:disable=broad-except
result['%s.%s' % (collection, role)] = {
'error': 'Error while processing role: %s' % to_native(e),
}
return result
class DocCLI(CLI, RoleMixin):
''' displays information on modules installed in Ansible libraries.
It displays a terse listing of plugins and their short descriptions,
provides a printout of their DOCUMENTATION strings,
and it can create a short "snippet" which can be pasted into a playbook. '''
name = 'ansible-doc'
# default ignore list for detailed views
IGNORE = ('module', 'docuri', 'version_added', 'version_added_collection', 'short_description', 'now_date', 'plainexamples', 'returndocs', 'collection')
# Warning: If you add more elements here, you also need to add it to the docsite build (in the
# ansible-community/antsibull repo)
_ITALIC = re.compile(r"\bI\(([^)]+)\)")
_BOLD = re.compile(r"\bB\(([^)]+)\)")
_MODULE = re.compile(r"\bM\(([^)]+)\)")
_PLUGIN = re.compile(r"\bP\(([^#)]+)#([a-z]+)\)")
_LINK = re.compile(r"\bL\(([^)]+), *([^)]+)\)")
_URL = re.compile(r"\bU\(([^)]+)\)")
_REF = re.compile(r"\bR\(([^)]+), *([^)]+)\)")
_CONST = re.compile(r"\bC\(([^)]+)\)")
_SEM_PARAMETER_STRING = r"\(((?:[^\\)]+|\\.)+)\)"
_SEM_OPTION_NAME = re.compile(r"\bO" + _SEM_PARAMETER_STRING)
_SEM_OPTION_VALUE = re.compile(r"\bV" + _SEM_PARAMETER_STRING)
_SEM_ENV_VARIABLE = re.compile(r"\bE" + _SEM_PARAMETER_STRING)
_SEM_RET_VALUE = re.compile(r"\bRV" + _SEM_PARAMETER_STRING)
_RULER = re.compile(r"\bHORIZONTALLINE\b")
# helper for unescaping
_UNESCAPE = re.compile(r"\\(.)")
_FQCN_TYPE_PREFIX_RE = re.compile(r'^([^.]+\.[^.]+\.[^#]+)#([a-z]+):(.*)$')
_IGNORE_MARKER = 'ignore:'
# rst specific
_RST_NOTE = re.compile(r".. note::")
_RST_SEEALSO = re.compile(r".. seealso::")
_RST_ROLES = re.compile(r":\w+?:`")
_RST_DIRECTIVES = re.compile(r".. \w+?::")
def __init__(self, args):
super(DocCLI, self).__init__(args)
self.plugin_list = set()
@staticmethod
def _tty_ify_sem_simle(matcher):
text = DocCLI._UNESCAPE.sub(r'\1', matcher.group(1))
return f"`{text}'"
@staticmethod
def _tty_ify_sem_complex(matcher):
text = DocCLI._UNESCAPE.sub(r'\1', matcher.group(1))
value = None
if '=' in text:
text, value = text.split('=', 1)
m = DocCLI._FQCN_TYPE_PREFIX_RE.match(text)
if m:
plugin_fqcn = m.group(1)
plugin_type = m.group(2)
text = m.group(3)
elif text.startswith(DocCLI._IGNORE_MARKER):
text = text[len(DocCLI._IGNORE_MARKER):]
plugin_fqcn = plugin_type = ''
else:
plugin_fqcn = plugin_type = ''
entrypoint = None
if ':' in text:
entrypoint, text = text.split(':', 1)
if value is not None:
text = f"{text}={value}"
if plugin_fqcn and plugin_type:
plugin_suffix = '' if plugin_type in ('role', 'module', 'playbook') else ' plugin'
plugin = f"{plugin_type}{plugin_suffix} {plugin_fqcn}"
if plugin_type == 'role' and entrypoint is not None:
plugin = f"{plugin}, {entrypoint} entrypoint"
return f"`{text}' (of {plugin})"
return f"`{text}'"
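# Illustrative example (made-up markup, not from any real plugin):
# O(ansible.builtin.file#module:path) is rendered by the helper above as
# "`path' (of module ansible.builtin.file)".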
@classmethod
def find_plugins(cls, path, internal, plugin_type, coll_filter=None):
display.deprecated("find_plugins method as it is incomplete/incorrect. use ansible.plugins.list functions instead.", version='2.17')
return list_plugins(plugin_type, coll_filter, [path]).keys()
@classmethod
def tty_ify(cls, text):
# general formatting
t = cls._ITALIC.sub(r"`\1'", text) # I(word) => `word'
t = cls._BOLD.sub(r"*\1*", t) # B(word) => *word*
t = cls._MODULE.sub("[" + r"\1" + "]", t) # M(word) => [word]
t = cls._URL.sub(r"\1", t) # U(word) => word
t = cls._LINK.sub(r"\1 <\2>", t) # L(word, url) => word <url>
t = cls._PLUGIN.sub("[" + r"\1" + "]", t) # P(word#type) => [word]
t = cls._REF.sub(r"\1", t) # R(word, sphinx-ref) => word
t = cls._CONST.sub(r"`\1'", t) # C(word) => `word'
t = cls._SEM_OPTION_NAME.sub(cls._tty_ify_sem_complex, t) # O(expr)
t = cls._SEM_OPTION_VALUE.sub(cls._tty_ify_sem_simle, t) # V(expr)
t = cls._SEM_ENV_VARIABLE.sub(cls._tty_ify_sem_simle, t) # E(expr)
t = cls._SEM_RET_VALUE.sub(cls._tty_ify_sem_complex, t) # RV(expr)
t = cls._RULER.sub("\n{0}\n".format("-" * 13), t) # HORIZONTALLINE => -------
# remove rst
t = cls._RST_SEEALSO.sub(r"See also:", t) # seealso to See also:
t = cls._RST_NOTE.sub(r"Note:", t) # .. note:: to note:
t = cls._RST_ROLES.sub(r"`", t) # remove :ref: and other tags, keep tilde to match ending one
t = cls._RST_DIRECTIVES.sub(r"", t) # remove .. stuff:: in general
return t
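# Illustrative before/after (the input string is made up):
#   tty_ify("See M(ansible.builtin.copy), use C(mode) as B(needed)")
#   -> "See [ansible.builtin.copy], use `mode' as *needed*"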
def init_parser(self):
coll_filter = 'A supplied argument will be used for filtering, can be a namespace or full collection name.'
super(DocCLI, self).init_parser(
desc="plugin documentation tool",
epilog="See man pages for Ansible CLI options or website for tutorials https://docs.ansible.com"
)
opt_help.add_module_options(self.parser)
opt_help.add_basedir_options(self.parser)
# targets
self.parser.add_argument('args', nargs='*', help='Plugin', metavar='plugin')
self.parser.add_argument("-t", "--type", action="store", default='module', dest='type',
help='Choose which plugin type (defaults to "module"). '
'Available plugin types are: {0}'.format(TARGET_OPTIONS),
choices=TARGET_OPTIONS)
# formatting
self.parser.add_argument("-j", "--json", action="store_true", default=False, dest='json_format',
help='Change output into json format.')
# TODO: warn if not used with -t roles
# role-specific options
self.parser.add_argument("-r", "--roles-path", dest='roles_path', default=C.DEFAULT_ROLES_PATH,
type=opt_help.unfrack_path(pathsep=True),
action=opt_help.PrependListAction,
help='The path to the directory containing your roles.')
# modifiers
exclusive = self.parser.add_mutually_exclusive_group()
# TODO: warn if not used with -t roles
exclusive.add_argument("-e", "--entry-point", dest="entry_point",
help="Select the entry point for role(s).")
# TODO: warn with --json as it is incompatible
exclusive.add_argument("-s", "--snippet", action="store_true", default=False, dest='show_snippet',
help='Show playbook snippet for these plugin types: %s' % ', '.join(SNIPPETS))
# TODO: warn when arg/plugin is passed
exclusive.add_argument("-F", "--list_files", action="store_true", default=False, dest="list_files",
help='Show plugin names and their source files without summaries (implies --list). %s' % coll_filter)
exclusive.add_argument("-l", "--list", action="store_true", default=False, dest='list_dir',
help='List available plugins. %s' % coll_filter)
exclusive.add_argument("--metadata-dump", action="store_true", default=False, dest='dump',
help='**For internal use only** Dump json metadata for all entries, ignores other options.')
self.parser.add_argument("--no-fail-on-errors", action="store_true", default=False, dest='no_fail_on_errors',
help='**For internal use only** Only used for --metadata-dump. '
'Do not fail on errors. Report the error message in the JSON instead.')
def post_process_args(self, options):
options = super(DocCLI, self).post_process_args(options)
display.verbosity = options.verbosity
return options
def display_plugin_list(self, results):
# format for user
displace = max(len(x) for x in results.keys())
linelimit = display.columns - displace - 5
text = []
deprecated = []
# format display per option
if context.CLIARGS['list_files']:
# list plugin file names
for plugin in sorted(results.keys()):
filename = to_native(results[plugin])
# handle deprecated for builtin/legacy
pbreak = plugin.split('.')
if pbreak[-1].startswith('_') and pbreak[0] == 'ansible' and pbreak[1] in ('builtin', 'legacy'):
pbreak[-1] = pbreak[-1][1:]
plugin = '.'.join(pbreak)
deprecated.append("%-*s %-*.*s" % (displace, plugin, linelimit, len(filename), filename))
else:
text.append("%-*s %-*.*s" % (displace, plugin, linelimit, len(filename), filename))
else:
# list plugin names and short desc
for plugin in sorted(results.keys()):
desc = DocCLI.tty_ify(results[plugin])
if len(desc) > linelimit:
desc = desc[:linelimit] + '...'
pbreak = plugin.split('.')
# TODO: add mark for deprecated collection plugins
if pbreak[-1].startswith('_') and plugin.startswith(('ansible.builtin.', 'ansible.legacy.')):
# Handle deprecated ansible.builtin plugins
pbreak[-1] = pbreak[-1][1:]
plugin = '.'.join(pbreak)
deprecated.append("%-*s %-*.*s" % (displace, plugin, linelimit, len(desc), desc))
else:
text.append("%-*s %-*.*s" % (displace, plugin, linelimit, len(desc), desc))
if len(deprecated) > 0:
text.append("\nDEPRECATED:")
text.extend(deprecated)
# display results
DocCLI.pager("\n".join(text))
def _display_available_roles(self, list_json):
"""Display all roles we can find with a valid argument specification.
Output is: fqcn role name, entry point, short description
"""
roles = list(list_json.keys())
entry_point_names = set()
for role in roles:
for entry_point in list_json[role]['entry_points'].keys():
entry_point_names.add(entry_point)
max_role_len = 0
max_ep_len = 0
if roles:
max_role_len = max(len(x) for x in roles)
if entry_point_names:
max_ep_len = max(len(x) for x in entry_point_names)
linelimit = display.columns - max_role_len - max_ep_len - 5
text = []
for role in sorted(roles):
for entry_point, desc in list_json[role]['entry_points'].items():
if len(desc) > linelimit:
desc = desc[:linelimit] + '...'
text.append("%-*s %-*s %s" % (max_role_len, role,
max_ep_len, entry_point,
desc))
# display results
DocCLI.pager("\n".join(text))
def _display_role_doc(self, role_json):
roles = list(role_json.keys())
text = []
for role in roles:
text += self.get_role_man_text(role, role_json[role])
# display results
DocCLI.pager("\n".join(text))
@staticmethod
def _list_keywords():
return from_yaml(pkgutil.get_data('ansible', 'keyword_desc.yml'))
@staticmethod
def _get_keywords_docs(keys):
data = {}
descs = DocCLI._list_keywords()
for key in keys:
if key.startswith('with_'):
# simplify loops, don't want to handle every with_<lookup> combo
keyword = 'loop'
elif key == 'async':
# because async became reserved in python we had to rename it internally
keyword = 'async_val'
else:
keyword = key
try:
# if there is no description, the KeyError raised here ends this block
kdata = {'description': descs[key]}
# get playbook objects for keyword and use first to get keyword attributes
kdata['applies_to'] = []
for pobj in PB_OBJECTS:
if pobj not in PB_LOADED:
obj_class = 'ansible.playbook.%s' % pobj.lower()
loaded_class = importlib.import_module(obj_class)
PB_LOADED[pobj] = getattr(loaded_class, pobj, None)
if keyword in PB_LOADED[pobj].fattributes:
kdata['applies_to'].append(pobj)
# we should only need these once
if 'type' not in kdata:
fa = PB_LOADED[pobj].fattributes.get(keyword)
if getattr(fa, 'private'):
kdata = {}
raise KeyError
kdata['type'] = getattr(fa, 'isa', 'string')
if keyword.endswith('when') or keyword in ('until',):
# TODO: make this a field attribute property,
# would also help with the warnings on {{}} stacking
kdata['template'] = 'implicit'
elif getattr(fa, 'static'):
kdata['template'] = 'static'
else:
kdata['template'] = 'explicit'
# those that require no processing
for visible in ('alias', 'priority'):
kdata[visible] = getattr(fa, visible)
# remove None keys
for k in list(kdata.keys()):
if kdata[k] is None:
del kdata[k]
data[key] = kdata
except (AttributeError, KeyError) as e:
display.warning("Skipping Invalid keyword '%s' specified: %s" % (key, to_text(e)))
if display.verbosity >= 3:
display.verbose(traceback.format_exc())
return data
def _get_collection_filter(self):
coll_filter = None
if len(context.CLIARGS['args']) >= 1:
coll_filter = context.CLIARGS['args']
for coll_name in coll_filter:
if not AnsibleCollectionRef.is_valid_collection_name(coll_name):
raise AnsibleError('Invalid collection name (must be of the form namespace.collection): {0}'.format(coll_name))
return coll_filter
def _list_plugins(self, plugin_type, content):
results = {}
self.plugins = {}
loader = DocCLI._prep_loader(plugin_type)
coll_filter = self._get_collection_filter()
self.plugins.update(list_plugins(plugin_type, coll_filter))
# get appropriate content depending on option
if content == 'dir':
results = self._get_plugin_list_descriptions(loader)
elif content == 'files':
results = {k: self.plugins[k][0] for k in self.plugins.keys()}
else:
results = {k: {} for k in self.plugins.keys()}
self.plugin_list = set() # reset for next iteration
return results
def _get_plugins_docs(self, plugin_type, names, fail_ok=False, fail_on_errors=True):
loader = DocCLI._prep_loader(plugin_type)
# get the docs for plugins in the command line list
plugin_docs = {}
for plugin in names:
doc = {}
try:
doc, plainexamples, returndocs, metadata = get_plugin_docs(plugin, plugin_type, loader, fragment_loader, (context.CLIARGS['verbosity'] > 0))
except AnsiblePluginNotFound as e:
display.warning(to_native(e))
continue
except Exception as e:
if not fail_on_errors:
plugin_docs[plugin] = {'error': 'Missing documentation or could not parse documentation: %s' % to_native(e)}
continue
display.vvv(traceback.format_exc())
msg = "%s %s missing documentation (or could not parse documentation): %s\n" % (plugin_type, plugin, to_native(e))
if fail_ok:
display.warning(msg)
else:
raise AnsibleError(msg)
if not doc:
# The doc section existed but was empty
if not fail_on_errors:
plugin_docs[plugin] = {'error': 'No valid documentation found'}
continue
docs = DocCLI._combine_plugin_doc(plugin, plugin_type, doc, plainexamples, returndocs, metadata)
if not fail_on_errors:
# Check whether JSON serialization would break
try:
json_dump(docs)
except Exception as e: # pylint:disable=broad-except
plugin_docs[plugin] = {'error': 'Cannot serialize documentation as JSON: %s' % to_native(e)}
continue
plugin_docs[plugin] = docs
return plugin_docs
def _get_roles_path(self):
'''
Add any 'roles' subdir in playbook dir to the roles search path.
And as a last resort, add the playbook dir itself. Order being:
- 'roles' subdir of playbook dir
- DEFAULT_ROLES_PATH (default in cliargs)
- playbook dir (basedir)
NOTE: This matches logic in RoleDefinition._load_role_path() method.
'''
roles_path = context.CLIARGS['roles_path']
if context.CLIARGS['basedir'] is not None:
subdir = os.path.join(context.CLIARGS['basedir'], "roles")
if os.path.isdir(subdir):
roles_path = (subdir,) + roles_path
roles_path = roles_path + (context.CLIARGS['basedir'],)
return roles_path
@staticmethod
def _prep_loader(plugin_type):
''' return a plugin-type-specific loader '''
loader = getattr(plugin_loader, '%s_loader' % plugin_type)
# add to plugin paths from command line
if context.CLIARGS['basedir'] is not None:
loader.add_directory(context.CLIARGS['basedir'], with_subdir=True)
if context.CLIARGS['module_path']:
for path in context.CLIARGS['module_path']:
if path:
loader.add_directory(path)
# save only top level paths for errors
loader._paths = None # reset so we can use subdirs later
return loader
def run(self):
super(DocCLI, self).run()
basedir = context.CLIARGS['basedir']
plugin_type = context.CLIARGS['type'].lower()
do_json = context.CLIARGS['json_format'] or context.CLIARGS['dump']
listing = context.CLIARGS['list_files'] or context.CLIARGS['list_dir']
if context.CLIARGS['list_files']:
content = 'files'
elif context.CLIARGS['list_dir']:
content = 'dir'
else:
content = None
docs = {}
if basedir:
AnsibleCollectionConfig.playbook_paths = basedir
if plugin_type not in TARGET_OPTIONS:
raise AnsibleOptionsError("Unknown or undocumentable plugin type: %s" % plugin_type)
if context.CLIARGS['dump']:
# we always dump all types, ignore restrictions
ptypes = TARGET_OPTIONS
docs['all'] = {}
for ptype in ptypes:
no_fail = bool(not context.CLIARGS['no_fail_on_errors'])
if ptype == 'role':
roles = self._create_role_list(fail_on_errors=no_fail)
docs['all'][ptype] = self._create_role_doc(roles.keys(), context.CLIARGS['entry_point'], fail_on_errors=no_fail)
elif ptype == 'keyword':
names = DocCLI._list_keywords()
docs['all'][ptype] = DocCLI._get_keywords_docs(names.keys())
else:
plugin_names = self._list_plugins(ptype, None)
docs['all'][ptype] = self._get_plugins_docs(ptype, plugin_names, fail_ok=(ptype in ('test', 'filter')), fail_on_errors=no_fail)
# reset list after each type to avoid pollution
elif listing:
if plugin_type == 'keyword':
docs = DocCLI._list_keywords()
elif plugin_type == 'role':
docs = self._create_role_list()
else:
docs = self._list_plugins(plugin_type, content)
else:
# here we require a name
if len(context.CLIARGS['args']) == 0:
raise AnsibleOptionsError("Missing name(s), incorrect options passed for detailed documentation.")
if plugin_type == 'keyword':
docs = DocCLI._get_keywords_docs(context.CLIARGS['args'])
elif plugin_type == 'role':
docs = self._create_role_doc(context.CLIARGS['args'], context.CLIARGS['entry_point'])
else:
# display specific plugin docs
docs = self._get_plugins_docs(plugin_type, context.CLIARGS['args'])
# Display the docs
if do_json:
jdump(docs)
else:
text = []
if plugin_type in C.DOCUMENTABLE_PLUGINS:
if listing and docs:
self.display_plugin_list(docs)
elif context.CLIARGS['show_snippet']:
if plugin_type not in SNIPPETS:
raise AnsibleError('Snippets are only available for the following plugin'
' types: %s' % ', '.join(SNIPPETS))
for plugin, doc_data in docs.items():
try:
textret = DocCLI.format_snippet(plugin, plugin_type, doc_data['doc'])
except ValueError as e:
display.warning("Unable to construct a snippet for"
" '{0}': {1}".format(plugin, to_text(e)))
else:
text.append(textret)
else:
# Some changes to how plain text docs are formatted
for plugin, doc_data in docs.items():
textret = DocCLI.format_plugin_doc(plugin, plugin_type,
doc_data['doc'], doc_data['examples'],
doc_data['return'], doc_data['metadata'])
if textret:
text.append(textret)
else:
display.warning("No valid documentation was retrieved from '%s'" % plugin)
elif plugin_type == 'role':
if context.CLIARGS['list_dir'] and docs:
self._display_available_roles(docs)
elif docs:
self._display_role_doc(docs)
elif docs:
text = DocCLI.tty_ify(DocCLI._dump_yaml(docs))
if text:
DocCLI.pager(''.join(text))
return 0
@staticmethod
def get_all_plugins_of_type(plugin_type):
loader = getattr(plugin_loader, '%s_loader' % plugin_type)
paths = loader._get_paths_with_context()
plugins = {}
for path_context in paths:
plugins.update(list_plugins(plugin_type))
return sorted(plugins.keys())
@staticmethod
def get_plugin_metadata(plugin_type, plugin_name):
# if the plugin lives in a non-python file (eg, win_X.ps1), require the corresponding python file for docs
loader = getattr(plugin_loader, '%s_loader' % plugin_type)
result = loader.find_plugin_with_context(plugin_name, mod_type='.py', ignore_deprecated=True, check_aliases=True)
if not result.resolved:
raise AnsibleError("unable to load {0} plugin named {1} ".format(plugin_type, plugin_name))
filename = result.plugin_resolved_path
collection_name = result.plugin_resolved_collection
try:
doc, __, __, __ = get_docstring(filename, fragment_loader, verbose=(context.CLIARGS['verbosity'] > 0),
collection_name=collection_name, plugin_type=plugin_type)
except Exception:
display.vvv(traceback.format_exc())
raise AnsibleError("%s %s at %s has a documentation formatting error or is missing documentation." % (plugin_type, plugin_name, filename))
if doc is None:
# Removed plugins don't have any documentation
return None
return dict(
name=plugin_name,
namespace=DocCLI.namespace_from_plugin_filepath(filename, plugin_name, loader.package_path),
description=doc.get('short_description', "UNKNOWN"),
version_added=doc.get('version_added', "UNKNOWN")
)
@staticmethod
def namespace_from_plugin_filepath(filepath, plugin_name, basedir):
if not basedir.endswith('/'):
basedir += '/'
rel_path = filepath.replace(basedir, '')
extension_free = os.path.splitext(rel_path)[0]
namespace_only = extension_free.rsplit(plugin_name, 1)[0].strip('/_')
clean_ns = namespace_only.replace('/', '.')
if clean_ns == '':
clean_ns = None
return clean_ns
@staticmethod
def _combine_plugin_doc(plugin, plugin_type, doc, plainexamples, returndocs, metadata):
# generate extra data
if plugin_type == 'module':
# is there corresponding action plugin?
if plugin in action_loader:
doc['has_action'] = True
else:
doc['has_action'] = False
# return everything as one dictionary
return {'doc': doc, 'examples': plainexamples, 'return': returndocs, 'metadata': metadata}
@staticmethod
def format_snippet(plugin, plugin_type, doc):
''' return a heavily commented plugin usage snippet to insert into a play '''
if plugin_type == 'inventory' and doc.get('options', {}).get('plugin'):
# these do not take a yaml config that we can write a snippet for
raise ValueError('The {0} inventory plugin does not take YAML type config source'
' that can be used with the "auto" plugin so a snippet cannot be'
' created.'.format(plugin))
text = []
if plugin_type == 'lookup':
text = _do_lookup_snippet(doc)
elif 'options' in doc:
text = _do_yaml_snippet(doc)
text.append('')
return "\n".join(text)
@staticmethod
def format_plugin_doc(plugin, plugin_type, doc, plainexamples, returndocs, metadata):
collection_name = doc['collection']
# TODO: do we really want this?
# add_collection_to_versions_and_dates(doc, '(unknown)', is_module=(plugin_type == 'module'))
# remove_current_collection_from_versions_and_dates(doc, collection_name, is_module=(plugin_type == 'module'))
# remove_current_collection_from_versions_and_dates(
# returndocs, collection_name, is_module=(plugin_type == 'module'), return_docs=True)
# assign from other sections
doc['plainexamples'] = plainexamples
doc['returndocs'] = returndocs
doc['metadata'] = metadata
try:
text = DocCLI.get_man_text(doc, collection_name, plugin_type)
except Exception as e:
display.vvv(traceback.format_exc())
raise AnsibleError("Unable to retrieve documentation from '%s' due to: %s" % (plugin, to_native(e)), orig_exc=e)
return text
def _get_plugin_list_descriptions(self, loader):
descs = {}
for plugin in self.plugins.keys():
# TODO: move to plugin itself i.e: plugin.get_desc()
doc = None
filename = Path(to_native(self.plugins[plugin][0]))
docerror = None
try:
doc = read_docstub(filename)
except Exception as e:
docerror = e
# plugin file was empty or had an error, let's try other options
if doc is None:
# handle test/filters that are in file with diff name
base = plugin.split('.')[-1]
basefile = filename.with_name(base + filename.suffix)
for extension in C.DOC_EXTENSIONS:
docfile = basefile.with_suffix(extension)
try:
if docfile.exists():
doc = read_docstub(docfile)
except Exception as e:
docerror = e
if docerror:
display.warning("%s has a documentation formatting error: %s" % (plugin, docerror))
continue
if not doc or not isinstance(doc, dict):
desc = 'UNDOCUMENTED'
else:
desc = doc.get('short_description', 'INVALID SHORT DESCRIPTION').strip()
descs[plugin] = desc
return descs
@staticmethod
def print_paths(finder):
''' Returns a string suitable for printing of the search path '''
# Uses a list to get the order right
ret = []
for i in finder._get_paths(subdirs=False):
i = to_text(i, errors='surrogate_or_strict')
if i not in ret:
ret.append(i)
return os.pathsep.join(ret)
@staticmethod
def _dump_yaml(struct, flow_style=False):
return yaml_dump(struct, default_flow_style=flow_style, default_style="''", Dumper=AnsibleDumper).rstrip('\n')
@staticmethod
def _indent_lines(text, indent):
return DocCLI.tty_ify('\n'.join([indent + line for line in text.split('\n')]))
@staticmethod
def _format_version_added(version_added, version_added_collection=None):
if version_added_collection == 'ansible.builtin':
version_added_collection = 'ansible-core'
# In ansible-core, version_added can be 'historical'
if version_added == 'historical':
return 'historical'
if version_added_collection:
version_added = '%s of %s' % (version_added, version_added_collection)
return 'version %s' % (version_added, )
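# Illustrative results of the helper above:
#   _format_version_added('2.10', 'ansible.builtin') -> 'version 2.10 of ansible-core'
#   _format_version_added('historical')              -> 'historical'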
@staticmethod
def add_fields(text, fields, limit, opt_indent, return_values=False, base_indent=''):
for o in sorted(fields):
# Create a copy so we don't modify the original (in case YAML anchors have been used)
opt = dict(fields[o])
# required is used as indicator and removed
required = opt.pop('required', False)
if not isinstance(required, bool):
raise AnsibleError("Incorrect value for 'Required', a boolean is needed.: %s" % required)
if required:
opt_leadin = "="
else:
opt_leadin = "-"
text.append("%s%s %s" % (base_indent, opt_leadin, o))
# description is specifically formatted and can either be a string or a list of strings
if 'description' not in opt:
raise AnsibleError("All (sub-)options and return values must have a 'description' field")
if is_sequence(opt['description']):
for entry_idx, entry in enumerate(opt['description'], 1):
if not isinstance(entry, string_types):
raise AnsibleError("Expected string in description of %s at index %s, got %s" % (o, entry_idx, type(entry)))
text.append(textwrap.fill(DocCLI.tty_ify(entry), limit, initial_indent=opt_indent, subsequent_indent=opt_indent))
else:
if not isinstance(opt['description'], string_types):
raise AnsibleError("Expected string in description of %s, got %s" % (o, type(opt['description'])))
text.append(textwrap.fill(DocCLI.tty_ify(opt['description']), limit, initial_indent=opt_indent, subsequent_indent=opt_indent))
del opt['description']
suboptions = []
for subkey in ('options', 'suboptions', 'contains', 'spec'):
if subkey in opt:
suboptions.append((subkey, opt.pop(subkey)))
if not required and not return_values and 'default' not in opt:
opt['default'] = None
# sanitize config items
conf = {}
for config in ('env', 'ini', 'yaml', 'vars', 'keyword'):
if config in opt and opt[config]:
# Create a copy so we don't modify the original (in case YAML anchors have been used)
conf[config] = [dict(item) for item in opt.pop(config)]
for ignore in DocCLI.IGNORE:
for item in conf[config]:
if ignore in item:
del item[ignore]
# reformat cli options
if 'cli' in opt and opt['cli']:
conf['cli'] = []
for cli in opt['cli']:
if 'option' not in cli:
conf['cli'].append({'name': cli['name'], 'option': '--%s' % cli['name'].replace('_', '-')})
else:
conf['cli'].append(cli)
del opt['cli']
# add custom header for conf
if conf:
text.append(DocCLI._indent_lines(DocCLI._dump_yaml({'set_via': conf}), opt_indent))
# these we handle at the end of generic option processing
version_added = opt.pop('version_added', None)
version_added_collection = opt.pop('version_added_collection', None)
# general processing for options
for k in sorted(opt):
if k.startswith('_'):
continue
if is_sequence(opt[k]):
text.append(DocCLI._indent_lines('%s: %s' % (k, DocCLI._dump_yaml(opt[k], flow_style=True)), opt_indent))
else:
text.append(DocCLI._indent_lines(DocCLI._dump_yaml({k: opt[k]}), opt_indent))
if version_added:
text.append("%sadded in: %s\n" % (opt_indent, DocCLI._format_version_added(version_added, version_added_collection)))
for subkey, subdata in suboptions:
text.append('')
text.append("%s%s:\n" % (opt_indent, subkey.upper()))
DocCLI.add_fields(text, subdata, limit, opt_indent + ' ', return_values, opt_indent)
if not suboptions:
text.append('')
def get_role_man_text(self, role, role_json):
'''Generate text for the supplied role suitable for display.
This is similar to get_man_text(), but roles are different enough that we have
a separate method for formatting their display.
:param role: The role name.
:param role_json: The JSON for the given role as returned from _create_role_doc().
:returns: An array of text suitable for displaying on screen.
'''
text = []
opt_indent = " "
pad = display.columns * 0.20
limit = max(display.columns - int(pad), 70)
text.append("> %s (%s)\n" % (role.upper(), role_json.get('path')))
for entry_point in role_json['entry_points']:
doc = role_json['entry_points'][entry_point]
if doc.get('short_description'):
text.append("ENTRY POINT: %s - %s\n" % (entry_point, doc.get('short_description')))
else:
text.append("ENTRY POINT: %s\n" % entry_point)
if doc.get('description'):
if isinstance(doc['description'], list):
desc = " ".join(doc['description'])
else:
desc = doc['description']
text.append("%s\n" % textwrap.fill(DocCLI.tty_ify(desc),
limit, initial_indent=opt_indent,
subsequent_indent=opt_indent))
if doc.get('options'):
text.append("OPTIONS (= is mandatory):\n")
DocCLI.add_fields(text, doc.pop('options'), limit, opt_indent)
text.append('')
if doc.get('attributes'):
text.append("ATTRIBUTES:\n")
text.append(DocCLI._indent_lines(DocCLI._dump_yaml(doc.pop('attributes')), opt_indent))
text.append('')
# generic elements we will handle identically
for k in ('author',):
if k not in doc:
continue
if isinstance(doc[k], string_types):
text.append('%s: %s' % (k.upper(), textwrap.fill(DocCLI.tty_ify(doc[k]),
limit - (len(k) + 2), subsequent_indent=opt_indent)))
elif isinstance(doc[k], (list, tuple)):
text.append('%s: %s' % (k.upper(), ', '.join(doc[k])))
else:
# use empty indent since this affects the start of the yaml doc, not its keys
text.append(DocCLI._indent_lines(DocCLI._dump_yaml({k.upper(): doc[k]}), ''))
text.append('')
return text
@staticmethod
def get_man_text(doc, collection_name='', plugin_type=''):
# Create a copy so we don't modify the original
doc = dict(doc)
DocCLI.IGNORE = DocCLI.IGNORE + (context.CLIARGS['type'],)
opt_indent = " "
text = []
pad = display.columns * 0.20
limit = max(display.columns - int(pad), 70)
plugin_name = doc.get(context.CLIARGS['type'], doc.get('name')) or doc.get('plugin_type') or plugin_type
if collection_name:
plugin_name = '%s.%s' % (collection_name, plugin_name)
text.append("> %s (%s)\n" % (plugin_name.upper(), doc.pop('filename')))
if isinstance(doc['description'], list):
desc = " ".join(doc.pop('description'))
else:
desc = doc.pop('description')
text.append("%s\n" % textwrap.fill(DocCLI.tty_ify(desc), limit, initial_indent=opt_indent,
subsequent_indent=opt_indent))
if 'version_added' in doc:
version_added = doc.pop('version_added')
version_added_collection = doc.pop('version_added_collection', None)
text.append("ADDED IN: %s\n" % DocCLI._format_version_added(version_added, version_added_collection))
if doc.get('deprecated', False):
text.append("DEPRECATED: \n")
if isinstance(doc['deprecated'], dict):
if 'removed_at_date' in doc['deprecated']:
text.append(
"\tReason: %(why)s\n\tWill be removed in a release after %(removed_at_date)s\n\tAlternatives: %(alternative)s" % doc.pop('deprecated')
)
else:
if 'version' in doc['deprecated'] and 'removed_in' not in doc['deprecated']:
doc['deprecated']['removed_in'] = doc['deprecated']['version']
text.append("\tReason: %(why)s\n\tWill be removed in: Ansible %(removed_in)s\n\tAlternatives: %(alternative)s" % doc.pop('deprecated'))
else:
text.append("%s" % doc.pop('deprecated'))
text.append("\n")
if doc.pop('has_action', False):
text.append(" * note: %s\n" % "This module has a corresponding action plugin.")
if doc.get('options', False):
text.append("OPTIONS (= is mandatory):\n")
DocCLI.add_fields(text, doc.pop('options'), limit, opt_indent)
text.append('')
if doc.get('attributes', False):
text.append("ATTRIBUTES:\n")
text.append(DocCLI._indent_lines(DocCLI._dump_yaml(doc.pop('attributes')), opt_indent))
text.append('')
if doc.get('notes', False):
text.append("NOTES:")
for note in doc['notes']:
text.append(textwrap.fill(DocCLI.tty_ify(note), limit - 6,
initial_indent=opt_indent[:-2] + "* ", subsequent_indent=opt_indent))
text.append('')
text.append('')
del doc['notes']
if doc.get('seealso', False):
text.append("SEE ALSO:")
for item in doc['seealso']:
if 'module' in item:
text.append(textwrap.fill(DocCLI.tty_ify('Module %s' % item['module']),
limit - 6, initial_indent=opt_indent[:-2] + "* ", subsequent_indent=opt_indent))
description = item.get('description')
if description is None and item['module'].startswith('ansible.builtin.'):
description = 'The official documentation on the %s module.' % item['module']
if description is not None:
text.append(textwrap.fill(DocCLI.tty_ify(description),
limit - 6, initial_indent=opt_indent + ' ', subsequent_indent=opt_indent + ' '))
if item['module'].startswith('ansible.builtin.'):
relative_url = 'collections/%s_module.html' % item['module'].replace('.', '/', 2)
text.append(textwrap.fill(DocCLI.tty_ify(get_versioned_doclink(relative_url)),
limit - 6, initial_indent=opt_indent + ' ', subsequent_indent=opt_indent))
elif 'plugin' in item and 'plugin_type' in item:
plugin_suffix = ' plugin' if item['plugin_type'] not in ('module', 'role') else ''
text.append(textwrap.fill(DocCLI.tty_ify('%s%s %s' % (item['plugin_type'].title(), plugin_suffix, item['plugin'])),
limit - 6, initial_indent=opt_indent[:-2] + "* ", subsequent_indent=opt_indent))
description = item.get('description')
if description is None and item['plugin'].startswith('ansible.builtin.'):
description = 'The official documentation on the %s %s%s.' % (item['plugin'], item['plugin_type'], plugin_suffix)
if description is not None:
text.append(textwrap.fill(DocCLI.tty_ify(description),
limit - 6, initial_indent=opt_indent + ' ', subsequent_indent=opt_indent + ' '))
if item['plugin'].startswith('ansible.builtin.'):
relative_url = 'collections/%s_%s.html' % (item['plugin'].replace('.', '/', 2), item['plugin_type'])
text.append(textwrap.fill(DocCLI.tty_ify(get_versioned_doclink(relative_url)),
limit - 6, initial_indent=opt_indent + ' ', subsequent_indent=opt_indent))
elif 'name' in item and 'link' in item and 'description' in item:
text.append(textwrap.fill(DocCLI.tty_ify(item['name']),
limit - 6, initial_indent=opt_indent[:-2] + "* ", subsequent_indent=opt_indent))
text.append(textwrap.fill(DocCLI.tty_ify(item['description']),
limit - 6, initial_indent=opt_indent + ' ', subsequent_indent=opt_indent + ' '))
text.append(textwrap.fill(DocCLI.tty_ify(item['link']),
limit - 6, initial_indent=opt_indent + ' ', subsequent_indent=opt_indent + ' '))
elif 'ref' in item and 'description' in item:
text.append(textwrap.fill(DocCLI.tty_ify('Ansible documentation [%s]' % item['ref']),
limit - 6, initial_indent=opt_indent[:-2] + "* ", subsequent_indent=opt_indent))
text.append(textwrap.fill(DocCLI.tty_ify(item['description']),
limit - 6, initial_indent=opt_indent + ' ', subsequent_indent=opt_indent + ' '))
text.append(textwrap.fill(DocCLI.tty_ify(get_versioned_doclink('/#stq=%s&stp=1' % item['ref'])),
limit - 6, initial_indent=opt_indent + ' ', subsequent_indent=opt_indent + ' '))
text.append('')
text.append('')
del doc['seealso']
if doc.get('requirements', False):
req = ", ".join(doc.pop('requirements'))
text.append("REQUIREMENTS:%s\n" % textwrap.fill(DocCLI.tty_ify(req), limit - 16, initial_indent=" ", subsequent_indent=opt_indent))
# Generic handler
for k in sorted(doc):
if k in DocCLI.IGNORE or not doc[k]:
continue
if isinstance(doc[k], string_types):
text.append('%s: %s' % (k.upper(), textwrap.fill(DocCLI.tty_ify(doc[k]), limit - (len(k) + 2), subsequent_indent=opt_indent)))
elif isinstance(doc[k], (list, tuple)):
text.append('%s: %s' % (k.upper(), ', '.join(doc[k])))
else:
# use empty indent since this affects the start of the yaml doc, not its keys
text.append(DocCLI._indent_lines(DocCLI._dump_yaml({k.upper(): doc[k]}), ''))
del doc[k]
text.append('')
if doc.get('plainexamples', False):
text.append("EXAMPLES:")
text.append('')
if isinstance(doc['plainexamples'], string_types):
text.append(doc.pop('plainexamples').strip())
else:
try:
text.append(yaml_dump(doc.pop('plainexamples'), indent=2, default_flow_style=False))
except Exception as e:
raise AnsibleParserError("Unable to parse examples section", orig_exc=e)
text.append('')
text.append('')
if doc.get('returndocs', False):
text.append("RETURN VALUES:")
DocCLI.add_fields(text, doc.pop('returndocs'), limit, opt_indent, return_values=True)
return "\n".join(text)
def _do_yaml_snippet(doc):
text = []
mdesc = DocCLI.tty_ify(doc['short_description'])
module = doc.get('module')
if module:
# this is actually a usable task!
text.append("- name: %s" % (mdesc))
text.append(" %s:" % (module))
else:
# just a comment, hopefully useful yaml file
text.append("# %s:" % doc.get('plugin', doc.get('name')))
pad = 29
subdent = '# '.rjust(pad + 2)
limit = display.columns - pad
for o in sorted(doc['options'].keys()):
opt = doc['options'][o]
if isinstance(opt['description'], string_types):
desc = DocCLI.tty_ify(opt['description'])
else:
desc = DocCLI.tty_ify(" ".join(opt['description']))
required = opt.get('required', False)
if not isinstance(required, bool):
raise ValueError("Incorrect value for 'Required', a boolean is needed: %s" % required)
o = '%s:' % o
if module:
if required:
desc = "(required) %s" % desc
text.append(" %-20s # %s" % (o, textwrap.fill(desc, limit, subsequent_indent=subdent)))
else:
if required:
default = '(required)'
else:
default = opt.get('default', 'None')
text.append("%s %-9s # %s" % (o, default, textwrap.fill(desc, limit, subsequent_indent=subdent, max_lines=3)))
return text
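# A minimal sketch (not part of the original module) of the column trick used
# in _do_yaml_snippet() above: '%-20s' pads the option key to a fixed width,
# while textwrap.fill() with a right-justified '# ' as subsequent_indent keeps
# wrapped description lines aligned under the comment column. The option name
# and widths below are illustrative.
def _demo_yaml_snippet_columns():
    import textwrap
    pad = 29
    subdent = '# '.rjust(pad + 2)   # 29 spaces followed by '# '
    desc = "a deliberately long description that wraps onto a second line"
    return "  %-20s # %s" % ('name:', textwrap.fill(desc, 40, subsequent_indent=subdent))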
def _do_lookup_snippet(doc):
text = []
snippet = "lookup('%s', " % doc.get('plugin', doc.get('name'))
comment = []
for o in sorted(doc['options'].keys()):
opt = doc['options'][o]
comment.append('# %s(%s): %s' % (o, opt.get('type', 'string'), opt.get('description', '')))
if o in ('_terms', '_raw', '_list'):
# these are 'list of arguments'
snippet += '< %s >' % (o)
continue
required = opt.get('required', False)
if not isinstance(required, bool):
raise ValueError("Incorrect value for 'Required', a boolean is needed: %s" % required)
if required:
default = '<REQUIRED>'
else:
default = opt.get('default', 'None')
if opt.get('type') in ('string', 'str'):
snippet += ", %s='%s'" % (o, default)
else:
snippet += ', %s=%s' % (o, default)
snippet += ")"
if comment:
text.extend(comment)
text.append('')
text.append(snippet)
return text
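# A sketch (assumed option names) of the snippet shape _do_lookup_snippet()
# emits: positional '_terms'/'_raw'/'_list' options become angle-bracket
# placeholders, while everything else is rendered as a keyword argument with
# its default value.
def _demo_lookup_snippet_shape():
    plugin, terms_opt, kw_opt, kw_default = 'ns.col.example', '_terms', 'wantlist', False
    snippet = "lookup('%s', " % plugin
    snippet += '< %s >' % terms_opt
    snippet += ', %s=%s' % (kw_opt, kw_default)
    return snippet + ')'  # "lookup('ns.col.example', < _terms >, wantlist=False)"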
def main(args=None):
DocCLI.cli_executor(args)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,716 |
Remove deprecated functionality from ansible-doc for 2.17
|
### Summary
ansible-doc contains deprecated calls that are scheduled for removal in 2.17.
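For context, the kind of call involved looks roughly like the sketch below. This is a generic illustration only (the exact call sites in `lib/ansible/cli/doc.py` are not reproduced here); it corresponds to the `pylint:ansible-deprecated-version # 2.17 deprecation` sanity-ignore entry for this file.
```python
from ansible.utils.display import Display

display = Display()

# Once the targeted release (2.17) is reached, the sanity rule flags calls
# like this one, signalling that the guarded fallback must be deleted rather
# than left warning indefinitely. The message text here is made up.
display.deprecated("this fallback behavior is deprecated", version='2.17')
```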
### Issue Type
Feature Idea
### Component Name
`lib/ansible/cli/doc.py`
|
https://github.com/ansible/ansible/issues/81716
|
https://github.com/ansible/ansible/pull/81729
|
3ec7a6e0db53b254fde26abc190fcb2f4af1ce88
|
4b7705b07a64408515d0e164b62d4a8f814918db
| 2023-09-18T21:01:45Z |
python
| 2023-09-19T23:48:33Z |
test/sanity/ignore.txt
|
lib/ansible/cli/scripts/ansible_connection_cli_stub.py shebang
lib/ansible/config/base.yml no-unwanted-files
lib/ansible/executor/powershell/async_watchdog.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/powershell/async_wrapper.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/powershell/exec_wrapper.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/galaxy/collection/__init__.py mypy-3.10:attr-defined # inline ignore has no effect
lib/ansible/galaxy/collection/__init__.py mypy-3.11:attr-defined # inline ignore has no effect
lib/ansible/galaxy/collection/__init__.py mypy-3.12:attr-defined # inline ignore has no effect
lib/ansible/galaxy/collection/gpg.py mypy-3.10:arg-type
lib/ansible/galaxy/collection/gpg.py mypy-3.11:arg-type
lib/ansible/galaxy/collection/gpg.py mypy-3.12:arg-type
lib/ansible/parsing/yaml/constructor.py mypy-3.10:type-var # too many occurrences to ignore inline
lib/ansible/parsing/yaml/constructor.py mypy-3.11:type-var # too many occurrences to ignore inline
lib/ansible/parsing/yaml/constructor.py mypy-3.12:type-var # too many occurrences to ignore inline
lib/ansible/keyword_desc.yml no-unwanted-files
lib/ansible/modules/apt.py validate-modules:parameter-invalid
lib/ansible/modules/apt_repository.py validate-modules:parameter-invalid
lib/ansible/modules/assemble.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/async_status.py validate-modules!skip
lib/ansible/modules/async_wrapper.py ansible-doc!skip # not an actual module
lib/ansible/modules/async_wrapper.py pylint:ansible-bad-function # ignore, required
lib/ansible/modules/async_wrapper.py use-argspec-type-path
lib/ansible/modules/blockinfile.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/blockinfile.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/command.py validate-modules:doc-default-does-not-match-spec # _uses_shell is undocumented
lib/ansible/modules/command.py validate-modules:doc-missing-type
lib/ansible/modules/command.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/command.py validate-modules:undocumented-parameter
lib/ansible/modules/copy.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/copy.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/copy.py validate-modules:undocumented-parameter
lib/ansible/modules/dnf.py validate-modules:parameter-invalid
lib/ansible/modules/dnf5.py validate-modules:parameter-invalid
lib/ansible/modules/file.py validate-modules:undocumented-parameter
lib/ansible/modules/find.py use-argspec-type-path # fix needed
lib/ansible/modules/git.py use-argspec-type-path
lib/ansible/modules/git.py validate-modules:doc-required-mismatch
lib/ansible/modules/lineinfile.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/lineinfile.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/lineinfile.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/package_facts.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/replace.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/service.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/service.py validate-modules:use-run-command-not-popen
lib/ansible/modules/stat.py validate-modules:parameter-invalid
lib/ansible/modules/systemd_service.py validate-modules:parameter-invalid
lib/ansible/modules/uri.py validate-modules:doc-required-mismatch
lib/ansible/modules/user.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/user.py validate-modules:use-run-command-not-popen
lib/ansible/modules/yum.py validate-modules:parameter-invalid
lib/ansible/module_utils/basic.py pylint:unused-import # deferring resolution to allow enabling the rule now
lib/ansible/module_utils/compat/_selectors2.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/compat/_selectors2.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/compat/selinux.py import-2.7!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-3.6!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-3.7!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-3.8!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-3.9!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-3.10!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-3.11!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-3.12!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/distro/_distro.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/distro/_distro.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/distro/_distro.py no-assert
lib/ansible/module_utils/distro/_distro.py pep8!skip # bundled code we don't want to modify
lib/ansible/module_utils/distro/_distro.py pylint:undefined-variable # ignore bundled
lib/ansible/module_utils/distro/_distro.py pylint:using-constant-test # bundled code we don't want to modify
lib/ansible/module_utils/distro/__init__.py empty-init # breaks namespacing, bundled, do not override
lib/ansible/module_utils/facts/__init__.py empty-init # breaks namespacing, deprecate and eventually remove
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.ArgvParser.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.CommandUtil.psm1 pslint:PSProvideCommentHelp # need to agree on best format for comment location
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.CommandUtil.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.FileUtil.psm1 pslint:PSCustomUseLiteralPath
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.FileUtil.psm1 pslint:PSProvideCommentHelp
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1 pslint:PSCustomUseLiteralPath
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.LinkUtil.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/pycompat24.py no-get-exception
lib/ansible/module_utils/six/__init__.py empty-init # breaks namespacing, bundled, do not override
lib/ansible/module_utils/six/__init__.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/six/__init__.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/six/__init__.py no-basestring
lib/ansible/module_utils/six/__init__.py no-dict-iteritems
lib/ansible/module_utils/six/__init__.py no-dict-iterkeys
lib/ansible/module_utils/six/__init__.py no-dict-itervalues
lib/ansible/module_utils/six/__init__.py pylint:self-assigning-variable
lib/ansible/module_utils/six/__init__.py pylint:trailing-comma-tuple
lib/ansible/module_utils/six/__init__.py replace-urlopen
lib/ansible/module_utils/urls.py replace-urlopen
lib/ansible/parsing/yaml/objects.py pylint:arguments-renamed
lib/ansible/playbook/collectionsearch.py required-and-default-attributes # https://github.com/ansible/ansible/issues/61460
lib/ansible/playbook/role/include.py pylint:arguments-renamed
lib/ansible/plugins/action/normal.py action-plugin-docs # default action plugin for modules without a dedicated action plugin
lib/ansible/plugins/cache/base.py ansible-doc!skip # not a plugin, but a stub for backwards compatibility
lib/ansible/plugins/callback/__init__.py pylint:arguments-renamed
lib/ansible/plugins/inventory/advanced_host_list.py pylint:arguments-renamed
lib/ansible/plugins/inventory/host_list.py pylint:arguments-renamed
lib/ansible/utils/collection_loader/_collection_finder.py pylint:deprecated-class
lib/ansible/utils/collection_loader/_collection_meta.py pylint:deprecated-class
test/integration/targets/ansible-test-sanity/ansible_collections/ns/col/tests/integration/targets/hello/files/bad.py pylint:ansible-bad-function # ignore, required for testing
test/integration/targets/ansible-test-sanity/ansible_collections/ns/col/tests/integration/targets/hello/files/bad.py pylint:ansible-bad-import-from # ignore, required for testing
test/integration/targets/ansible-test-sanity/ansible_collections/ns/col/tests/integration/targets/hello/files/bad.py pylint:ansible-bad-import # ignore, required for testing
test/integration/targets/ansible-test-sanity/ansible_collections/ns/col/plugins/plugin_utils/check_pylint.py pylint:disallowed-name # ignore, required for testing
test/integration/targets/ansible-test-integration/ansible_collections/ns/col/plugins/modules/hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test-units/ansible_collections/ns/col/plugins/modules/hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test-units/ansible_collections/ns/col/tests/unit/plugins/modules/test_hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test-units/ansible_collections/ns/col/tests/unit/plugins/module_utils/test_my_util.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test-docker/ansible_collections/ns/col/plugins/modules/hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test-docker/ansible_collections/ns/col/tests/unit/plugins/modules/test_hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test-docker/ansible_collections/ns/col/tests/unit/plugins/module_utils/test_my_util.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test-no-tty/ansible_collections/ns/col/vendored_pty.py pep8!skip # vendored code
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/modules/my_module.py pylint:relative-beyond-top-level
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/module_utils/my_util2.py pylint:relative-beyond-top-level
test/integration/targets/fork_safe_stdio/vendored_pty.py pep8!skip # vendored code
test/integration/targets/gathering_facts/library/bogus_facts shebang
test/integration/targets/gathering_facts/library/dummy1 shebang
test/integration/targets/gathering_facts/library/facts_one shebang
test/integration/targets/gathering_facts/library/facts_two shebang
test/integration/targets/gathering_facts/library/slow shebang
test/integration/targets/incidental_win_reboot/templates/post_reboot.ps1 pslint!skip
test/integration/targets/json_cleanup/library/bad_json shebang
test/integration/targets/lookup_csvfile/files/crlf.csv line-endings
test/integration/targets/lookup_ini/lookup-8859-15.ini no-smart-quotes
test/integration/targets/module_precedence/lib_with_extension/a.ini shebang
test/integration/targets/module_precedence/lib_with_extension/ping.ini shebang
test/integration/targets/module_precedence/roles_with_extension/foo/library/a.ini shebang
test/integration/targets/module_precedence/roles_with_extension/foo/library/ping.ini shebang
test/integration/targets/module_utils/library/test.py future-import-boilerplate # allow testing of Python 2.x implicit relative imports
test/integration/targets/old_style_modules_posix/library/helloworld.sh shebang
test/integration/targets/template/files/encoding_1252_utf-8.expected no-smart-quotes
test/integration/targets/template/files/encoding_1252_windows-1252.expected no-smart-quotes
test/integration/targets/template/files/foo.dos.txt line-endings
test/integration/targets/template/templates/encoding_1252.j2 no-smart-quotes
test/integration/targets/unicode/unicode.yml no-smart-quotes
test/integration/targets/windows-minimal/library/win_ping_syntax_error.ps1 pslint!skip
test/integration/targets/win_exec_wrapper/library/test_fail.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_exec_wrapper/tasks/main.yml no-smart-quotes # We are explicitly testing smart quote support for env vars
test/integration/targets/win_fetch/tasks/main.yml no-smart-quotes # We are explicitly testing smart quotes in the file name to fetch
test/integration/targets/win_module_utils/library/legacy_only_new_way_win_line_ending.ps1 line-endings # Explicitly tests that we still work with Windows line endings
test/integration/targets/win_module_utils/library/legacy_only_old_way_win_line_ending.ps1 line-endings # Explicitly tests that we still work with Windows line endings
test/integration/targets/win_script/files/test_script.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/win_script/files/test_script_removes_file.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_script/files/test_script_with_args.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/win_script/files/test_script_with_splatting.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/lib/ansible_test/_data/requirements/sanity.pslint.ps1 pslint:PSCustomUseLiteralPath # Uses wildcards on purpose
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/compat/ipaddress.py no-unicode-literals
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/cliconf/ios.py pylint:arguments-renamed
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/cliconf/vyos.py pylint:arguments-renamed
test/support/windows-integration/collections/ansible_collections/ansible/windows/plugins/module_utils/WebRequest.psm1 pslint!skip
test/support/windows-integration/collections/ansible_collections/ansible/windows/plugins/modules/win_uri.ps1 pslint!skip
test/support/windows-integration/plugins/modules/async_status.ps1 pslint!skip
test/support/windows-integration/plugins/modules/setup.ps1 pslint!skip
test/support/windows-integration/plugins/modules/slurp.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_acl.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_certificate_store.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_command.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_copy.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_file.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_get_url.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_lineinfile.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_regedit.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_shell.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_stat.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_tempfile.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_user_right.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_user.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_wait_for.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_whoami.ps1 pslint!skip
test/units/module_utils/basic/test_deprecate_warn.py pylint:ansible-deprecated-no-version
test/units/module_utils/basic/test_deprecate_warn.py pylint:ansible-deprecated-version
test/units/module_utils/common/warnings/test_deprecate.py pylint:ansible-deprecated-no-version # testing Display.deprecated call without a version or date
test/units/module_utils/common/warnings/test_deprecate.py pylint:ansible-deprecated-version # testing Deprecated version found in call to Display.deprecated or AnsibleModule.deprecate
test/units/module_utils/urls/fixtures/multipart.txt line-endings # Fixture for HTTP tests that use CRLF
test/units/utils/collection_loader/fixtures/collections/ansible_collections/testns/testcoll/plugins/action/my_action.py pylint:relative-beyond-top-level
test/units/utils/collection_loader/fixtures/collections/ansible_collections/testns/testcoll/plugins/modules/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/ansible/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/testns/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/testns/testcoll/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/test_collection_loader.py pylint:undefined-variable # magic runtime local var splatting
.github/CONTRIBUTING.md pymarkdown:line-length
hacking/backport/README.md pymarkdown:no-bare-urls
hacking/ticket_stubs/bug_internal_api.md pymarkdown:no-bare-urls
hacking/ticket_stubs/bug_wrong_repo.md pymarkdown:no-bare-urls
hacking/ticket_stubs/collections.md pymarkdown:line-length
hacking/ticket_stubs/collections.md pymarkdown:no-bare-urls
hacking/ticket_stubs/guide_newbie_about_gh_and_contributing_to_ansible.md pymarkdown:no-bare-urls
hacking/ticket_stubs/no_thanks.md pymarkdown:line-length
hacking/ticket_stubs/no_thanks.md pymarkdown:no-bare-urls
hacking/ticket_stubs/pr_duplicate.md pymarkdown:no-bare-urls
hacking/ticket_stubs/pr_merged.md pymarkdown:no-bare-urls
hacking/ticket_stubs/proposal.md pymarkdown:no-bare-urls
hacking/ticket_stubs/question_not_bug.md pymarkdown:no-bare-urls
hacking/ticket_stubs/resolved.md pymarkdown:no-bare-urls
hacking/ticket_stubs/wider_discussion.md pymarkdown:no-bare-urls
lib/ansible/galaxy/data/apb/README.md pymarkdown:line-length
lib/ansible/galaxy/data/container/README.md pymarkdown:line-length
lib/ansible/galaxy/data/default/role/README.md pymarkdown:line-length
lib/ansible/galaxy/data/network/README.md pymarkdown:line-length
README.md pymarkdown:line-length
test/integration/targets/ansible-vault/invalid_format/README.md pymarkdown:no-bare-urls
test/support/README.md pymarkdown:no-bare-urls
test/units/cli/test_data/role_skeleton/README.md pymarkdown:line-length
lib/ansible/cli/doc.py pylint:ansible-deprecated-version # 2.17 deprecation
lib/ansible/utils/encrypt.py pylint:ansible-deprecated-version # 2.17 deprecation
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,618 |
Ansible galaxy collection build build_collection fails creating the files to be added to the release tarball
|
### Summary
When using:
```python
from ansible.galaxy.collection import build_collection
```
And calling it like:
```python
input_path= "/home/ccamacho/dev/automationhub/ccamacho/automationhub/" # Where galaxy.yml is
output_path = "releases" # The output folder relative to the input_path like <input_path/output_path/>
build_collection(input_path, output_path, True)
```
It fails with:
```python-traceback
The full traceback is:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/ansible/executor/task_executor.py", line 165, in run
res = self._execute()
File "/usr/local/lib/python3.10/dist-packages/ansible/executor/task_executor.py", line 660, in _execute
result = self._handler.run(task_vars=vars_copy)
File "/home/ccamacho/.ansible/collections/ansible_collections/ccamacho/automationhub/plugins/action/collection_build.py", line 89, in run
galaxy.collection_build()
File "/home/ccamacho/.ansible/collections/ansible_collections/ccamacho/automationhub/plugins/plugin_utils/automationhub.py", line 64, in collection_build
build_collection(self.options.input_path, self.options.output_path, True)
File "/usr/local/lib/python3.10/dist-packages/ansible/galaxy/collection/__init__.py", line 514, in build_collection
collection_output = _build_collection_tar(b_collection_path, b_collection_output, collection_manifest, file_manifest)
File "/usr/local/lib/python3.10/dist-packages/ansible/galaxy/collection/__init__.py", line 1395, in _build_collection_tar
tar_file.add(
File "/usr/lib/python3.10/tarfile.py", line 2157, in add
tarinfo = self.gettarinfo(name, arcname)
File "/usr/lib/python3.10/tarfile.py", line 2030, in gettarinfo
statres = os.lstat(name)
FileNotFoundError: [Errno 2] No such file or directory: '/home/ccamacho/dev/automationhub/ccamacho/automationhub/laybooks/collection_publish.yml'
fatal: [localhost]: FAILED! => {
"msg": "Unexpected failure during module execution: [Errno 2] No such file or directory: '/home/ccamacho/dev/automationhub/ccamacho/automationhub/laybooks/collection_publish.yml'",
"stdout": ""
}
```
### Issue Type
Bug Report
### Component Name
[`lib/ansible/galaxy/collection/__init__.py`](https://github.com/ansible/ansible/blob/devel/lib/ansible/galaxy/collection/__init__.py)
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.3]
config file = None
configured module search path = ['/home/ccamacho/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.10/dist-packages/ansible
ansible collection location = /home/ccamacho/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
Nothing custom or special
### OS / Environment
CentOS Stream9
### Steps to Reproduce
Using python directly:
```python
input_path= "/home/ccamacho/dev/automationhub/ccamacho/automationhub/" # Where galaxy.yml is
output_path = "releases" # The output folder relative to the input_path like <input_path/output_path/>
build_collection(input_path, output_path, True)
```
Testing the collection:
```shell
git clone https://github.com/ccamacho/automationhub
cd automationhub/ccamacho/automationhub/
ansible-galaxy collection build -v --force --output-path releases/
ansible-galaxy collection install releases/ccamacho-automationhub-1.0.0.tar.gz --force
ansible-playbook ./playbooks/collection_build.yml -vvvvv
```
### Expected Results
The collection is built correctly
### Actual Results
```python-traceback
FileNotFoundError: [Errno 2] No such file or directory: '/home/ccamacho/dev/automationhub/ccamacho/automationhub/laybooks/collection_publish.yml'
```
As you can see it should be "playbooks" instead of "laybooks"
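The dropped leading character is consistent with an off-by-one while stripping the collection root prefix from absolute paths during the manifest walk. A minimal sketch of that failure mode (illustrative only, not the actual code):
```python
import os

collection_path = "/home/user/col"  # hypothetical collection root
full_path = os.path.join(collection_path, "playbooks/collection_publish.yml")

# Slicing off len(root) + 1 characters is only safe when the root has no
# trailing separator; applied to a root that already ends in '/', it also
# eats the first character of the relative path, turning 'playbooks' into
# 'laybooks'.
root_with_slash = collection_path + "/"
broken = full_path[len(root_with_slash) + 1:]           # 'laybooks/collection_publish.yml'
correct = os.path.relpath(full_path, collection_path)   # 'playbooks/collection_publish.yml'
print(broken, correct)
```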
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81618
|
https://github.com/ansible/ansible/pull/81619
|
e4b9f9c6ae77388ff0d0a51d4943939636a03161
|
9244b2bff86961c896c2f2325b7c7f30b461819c
| 2023-09-02T15:50:54Z |
python
| 2023-09-20T18:18:37Z |
changelogs/fragments/fix-build-files-manifest-walk.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,618 |
Ansible galaxy collection build build_collection fails creating the files to be added to the release tarball
|
### Summary
When using:
```python
from ansible.galaxy.collection import build_collection
```
And calling it like:
```python
input_path= "/home/ccamacho/dev/automationhub/ccamacho/automationhub/" # Where galaxy.yml is
output_path = "releases" # The output folder relative to the input_path like <input_path/output_path/>
build_collection(input_path, output_path, True)
```
It fails with:
```python-traceback
The full traceback is:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/ansible/executor/task_executor.py", line 165, in run
res = self._execute()
File "/usr/local/lib/python3.10/dist-packages/ansible/executor/task_executor.py", line 660, in _execute
result = self._handler.run(task_vars=vars_copy)
File "/home/ccamacho/.ansible/collections/ansible_collections/ccamacho/automationhub/plugins/action/collection_build.py", line 89, in run
galaxy.collection_build()
File "/home/ccamacho/.ansible/collections/ansible_collections/ccamacho/automationhub/plugins/plugin_utils/automationhub.py", line 64, in collection_build
build_collection(self.options.input_path, self.options.output_path, True)
File "/usr/local/lib/python3.10/dist-packages/ansible/galaxy/collection/__init__.py", line 514, in build_collection
collection_output = _build_collection_tar(b_collection_path, b_collection_output, collection_manifest, file_manifest)
File "/usr/local/lib/python3.10/dist-packages/ansible/galaxy/collection/__init__.py", line 1395, in _build_collection_tar
tar_file.add(
File "/usr/lib/python3.10/tarfile.py", line 2157, in add
tarinfo = self.gettarinfo(name, arcname)
File "/usr/lib/python3.10/tarfile.py", line 2030, in gettarinfo
statres = os.lstat(name)
FileNotFoundError: [Errno 2] No such file or directory: '/home/ccamacho/dev/automationhub/ccamacho/automationhub/laybooks/collection_publish.yml'
fatal: [localhost]: FAILED! => {
"msg": "Unexpected failure during module execution: [Errno 2] No such file or directory: '/home/ccamacho/dev/automationhub/ccamacho/automationhub/laybooks/collection_publish.yml'",
"stdout": ""
}
```
### Issue Type
Bug Report
### Component Name
[`lib/ansible/galaxy/collection/__init__.py`](https://github.com/ansible/ansible/blob/devel/lib/ansible/galaxy/collection/__init__.py)
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.3]
config file = None
configured module search path = ['/home/ccamacho/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.10/dist-packages/ansible
ansible collection location = /home/ccamacho/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
Nothing custom or special
### OS / Environment
CentOS Stream9
### Steps to Reproduce
Using python directly:
```python
input_path= "/home/ccamacho/dev/automationhub/ccamacho/automationhub/" # Where galaxy.yml is
output_path = "releases" # The output folder relative to the input_path like <input_path/output_path/>
build_collection(input_path, output_path, True)
```
Testing the collection:
```shell
git clone https://github.com/ccamacho/automationhub
cd automationhub/ccamacho/automationhub/
ansible-galaxy collection build -v --force --output-path releases/
ansible-galaxy collection install releases/ccamacho-automationhub-1.0.0.tar.gz --force
ansible-playbook ./playbooks/collection_build.yml -vvvvv
```
### Expected Results
The collection is built correctly
### Actual Results
```python-traceback
FileNotFoundError: [Errno 2] No such file or directory: '/home/ccamacho/dev/automationhub/ccamacho/automationhub/laybooks/collection_publish.yml'
```
As you can see it should be "playbooks" instead of "laybooks"
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81618
|
https://github.com/ansible/ansible/pull/81619
|
e4b9f9c6ae77388ff0d0a51d4943939636a03161
|
9244b2bff86961c896c2f2325b7c7f30b461819c
| 2023-09-02T15:50:54Z |
python
| 2023-09-20T18:18:37Z |
lib/ansible/galaxy/collection/__init__.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2019-2021, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""Installed collections management package."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import errno
import fnmatch
import functools
import json
import os
import pathlib
import queue
import re
import shutil
import stat
import sys
import tarfile
import tempfile
import textwrap
import threading
import time
import typing as t
from collections import namedtuple
from contextlib import contextmanager
from dataclasses import dataclass, fields as dc_fields
from hashlib import sha256
from io import BytesIO
from importlib.metadata import distribution
from itertools import chain
try:
from packaging.requirements import Requirement as PkgReq
except ImportError:
class PkgReq: # type: ignore[no-redef]
pass
HAS_PACKAGING = False
else:
HAS_PACKAGING = True
try:
from distlib.manifest import Manifest # type: ignore[import]
from distlib import DistlibException # type: ignore[import]
except ImportError:
HAS_DISTLIB = False
else:
HAS_DISTLIB = True
if t.TYPE_CHECKING:
from ansible.galaxy.collection.concrete_artifact_manager import (
ConcreteArtifactsManager,
)
ManifestKeysType = t.Literal[
'collection_info', 'file_manifest_file', 'format',
]
FileMetaKeysType = t.Literal[
'name',
'ftype',
'chksum_type',
'chksum_sha256',
'format',
]
CollectionInfoKeysType = t.Literal[
# collection meta:
'namespace', 'name', 'version',
'authors', 'readme',
'tags', 'description',
'license', 'license_file',
'dependencies',
'repository', 'documentation',
'homepage', 'issues',
# files meta:
FileMetaKeysType,
]
ManifestValueType = t.Dict[CollectionInfoKeysType, t.Union[int, str, t.List[str], t.Dict[str, str], None]]
CollectionManifestType = t.Dict[ManifestKeysType, ManifestValueType]
FileManifestEntryType = t.Dict[FileMetaKeysType, t.Union[str, int, None]]
FilesManifestType = t.Dict[t.Literal['files', 'format'], t.Union[t.List[FileManifestEntryType], int]]
import ansible.constants as C
from ansible.compat.importlib_resources import files
from ansible.errors import AnsibleError
from ansible.galaxy.api import GalaxyAPI
from ansible.galaxy.collection.concrete_artifact_manager import (
_consume_file,
_download_file,
_get_json_from_installed_dir,
_get_meta_from_src_dir,
_tarfile_extract,
)
from ansible.galaxy.collection.galaxy_api_proxy import MultiGalaxyAPIProxy
from ansible.galaxy.collection.gpg import (
run_gpg_verify,
parse_gpg_errors,
get_signature_from_source,
GPG_ERROR_MAP,
)
try:
from ansible.galaxy.dependency_resolution import (
build_collection_dependency_resolver,
)
from ansible.galaxy.dependency_resolution.errors import (
CollectionDependencyResolutionImpossible,
CollectionDependencyInconsistentCandidate,
)
from ansible.galaxy.dependency_resolution.providers import (
RESOLVELIB_VERSION,
RESOLVELIB_LOWERBOUND,
RESOLVELIB_UPPERBOUND,
)
except ImportError:
HAS_RESOLVELIB = False
else:
HAS_RESOLVELIB = True
from ansible.galaxy.dependency_resolution.dataclasses import (
Candidate, Requirement, _is_installed_collection_dir,
)
from ansible.galaxy.dependency_resolution.versioning import meets_requirements
from ansible.plugins.loader import get_all_plugin_loaders
from ansible.module_utils.common.text.converters import to_bytes, to_native, to_text
from ansible.module_utils.common.collections import is_sequence
from ansible.module_utils.common.yaml import yaml_dump
from ansible.utils.collection_loader import AnsibleCollectionRef
from ansible.utils.display import Display
from ansible.utils.hashing import secure_hash, secure_hash_s
from ansible.utils.sentinel import Sentinel
display = Display()
MANIFEST_FORMAT = 1
MANIFEST_FILENAME = 'MANIFEST.json'
ModifiedContent = namedtuple('ModifiedContent', ['filename', 'expected', 'installed'])
SIGNATURE_COUNT_RE = r"^(?P<strict>\+)?(?:(?P<count>\d+)|(?P<all>all))$"
@dataclass
class ManifestControl:
directives: list[str] = None
omit_default_directives: bool = False
def __post_init__(self):
# Allow a dict representing this dataclass to be splatted directly.
# Requires attrs to have a default value, so anything with a default
# of None is swapped for a freshly constructed (and therefore unshared, potentially mutable) default
for field in dc_fields(self):
if getattr(self, field.name) is None:
super().__setattr__(field.name, field.type())
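# A small sketch (not part of the module) of why __post_init__ above calls
# field.type(): it lets a plain dict be splatted into the dataclass while
# still giving each None field a fresh, unshared mutable default.
def _demo_manifest_control_defaults():
    a = ManifestControl(**{'directives': None, 'omit_default_directives': False})
    b = ManifestControl()
    # Each instance gets its own empty list, so mutating one cannot leak
    # into the other the way a shared mutable default would.
    assert a.directives == [] and a.directives is not b.directives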
class CollectionSignatureError(Exception):
def __init__(self, reasons=None, stdout=None, rc=None, ignore=False):
self.reasons = reasons
self.stdout = stdout
self.rc = rc
self.ignore = ignore
self._reason_wrapper = None
def _report_unexpected(self, collection_name):
return (
f"Unexpected error for '{collection_name}': "
f"GnuPG signature verification failed with the return code {self.rc} and output {self.stdout}"
)
def _report_expected(self, collection_name):
header = f"Signature verification failed for '{collection_name}' (return code {self.rc}):"
return header + self._format_reasons()
def _format_reasons(self):
if self._reason_wrapper is None:
self._reason_wrapper = textwrap.TextWrapper(
initial_indent=" * ", # 6 chars
subsequent_indent=" ", # 6 chars
)
wrapped_reasons = [
'\n'.join(self._reason_wrapper.wrap(reason))
for reason in self.reasons
]
return '\n' + '\n'.join(wrapped_reasons)
def report(self, collection_name):
if self.reasons:
return self._report_expected(collection_name)
return self._report_unexpected(collection_name)
# FUTURE: expose actual verify result details for a collection on this object, maybe reimplement as dataclass on py3.8+
class CollectionVerifyResult:
def __init__(self, collection_name): # type: (str) -> None
self.collection_name = collection_name # type: str
self.success = True # type: bool
def verify_local_collection(local_collection, remote_collection, artifacts_manager):
# type: (Candidate, t.Optional[Candidate], ConcreteArtifactsManager) -> CollectionVerifyResult
"""Verify integrity of the locally installed collection.
:param local_collection: Collection being checked.
:param remote_collection: Upstream collection (optional, if None, only verify local artifact)
:param artifacts_manager: Artifacts manager.
:return: a collection verify result object.
"""
result = CollectionVerifyResult(local_collection.fqcn)
b_collection_path = to_bytes(local_collection.src, errors='surrogate_or_strict')
display.display("Verifying '{coll!s}'.".format(coll=local_collection))
display.display(
u"Installed collection found at '{path!s}'".
format(path=to_text(local_collection.src)),
)
modified_content = [] # type: list[ModifiedContent]
verify_local_only = remote_collection is None
# partial away the local FS detail so we can just ask generically during validation
get_json_from_validation_source = functools.partial(_get_json_from_installed_dir, b_collection_path)
get_hash_from_validation_source = functools.partial(_get_file_hash, b_collection_path)
if not verify_local_only:
# Compare installed version versus requirement version
if local_collection.ver != remote_collection.ver:
err = (
"{local_fqcn!s} has the version '{local_ver!s}' but "
"is being compared to '{remote_ver!s}'".format(
local_fqcn=local_collection.fqcn,
local_ver=local_collection.ver,
remote_ver=remote_collection.ver,
)
)
display.display(err)
result.success = False
return result
manifest_file = os.path.join(to_text(b_collection_path, errors='surrogate_or_strict'), MANIFEST_FILENAME)
signatures = list(local_collection.signatures)
if verify_local_only and local_collection.source_info is not None:
signatures = [info["signature"] for info in local_collection.source_info["signatures"]] + signatures
elif not verify_local_only and remote_collection.signatures:
signatures = list(remote_collection.signatures) + signatures
keyring_configured = artifacts_manager.keyring is not None
if not keyring_configured and signatures:
display.warning(
"The GnuPG keyring used for collection signature "
"verification was not configured but signatures were "
"provided by the Galaxy server. "
"Configure a keyring for ansible-galaxy to verify "
"the origin of the collection. "
"Skipping signature verification."
)
elif keyring_configured:
if not verify_file_signatures(
local_collection.fqcn,
manifest_file,
signatures,
artifacts_manager.keyring,
artifacts_manager.required_successful_signature_count,
artifacts_manager.ignore_signature_errors,
):
result.success = False
return result
display.vvvv(f"GnuPG signature verification succeeded, verifying contents of {local_collection}")
if verify_local_only:
# since we're not downloading this, just seed it with the value from disk
manifest_hash = get_hash_from_validation_source(MANIFEST_FILENAME)
elif keyring_configured and remote_collection.signatures:
manifest_hash = get_hash_from_validation_source(MANIFEST_FILENAME)
else:
# fetch remote
# NOTE: AnsibleError is raised on URLError
b_temp_tar_path = artifacts_manager.get_artifact_path_from_unknown(remote_collection)
display.vvv(
u"Remote collection cached as '{path!s}'".format(path=to_text(b_temp_tar_path))
)
# partial away the tarball details so we can just ask generically during validation
get_json_from_validation_source = functools.partial(_get_json_from_tar_file, b_temp_tar_path)
get_hash_from_validation_source = functools.partial(_get_tar_file_hash, b_temp_tar_path)
# Verify the downloaded manifest hash matches the installed copy before verifying the file manifest
manifest_hash = get_hash_from_validation_source(MANIFEST_FILENAME)
_verify_file_hash(b_collection_path, MANIFEST_FILENAME, manifest_hash, modified_content)
display.display('MANIFEST.json hash: {manifest_hash}'.format(manifest_hash=manifest_hash))
manifest = get_json_from_validation_source(MANIFEST_FILENAME)
# Use the manifest to verify the file manifest checksum
file_manifest_data = manifest['file_manifest_file']
file_manifest_filename = file_manifest_data['name']
expected_hash = file_manifest_data['chksum_%s' % file_manifest_data['chksum_type']]
# Verify the file manifest before using it to verify individual files
_verify_file_hash(b_collection_path, file_manifest_filename, expected_hash, modified_content)
file_manifest = get_json_from_validation_source(file_manifest_filename)
collection_dirs = set()
collection_files = {
os.path.join(b_collection_path, b'MANIFEST.json'),
os.path.join(b_collection_path, b'FILES.json'),
}
# Use the file manifest to verify individual file checksums
for manifest_data in file_manifest['files']:
name = manifest_data['name']
if manifest_data['ftype'] == 'file':
collection_files.add(
os.path.join(b_collection_path, to_bytes(name, errors='surrogate_or_strict'))
)
expected_hash = manifest_data['chksum_%s' % manifest_data['chksum_type']]
_verify_file_hash(b_collection_path, name, expected_hash, modified_content)
if manifest_data['ftype'] == 'dir':
collection_dirs.add(
os.path.join(b_collection_path, to_bytes(name, errors='surrogate_or_strict'))
)
# Find any paths not in the FILES.json
for root, dirs, files in os.walk(b_collection_path):
for name in files:
full_path = os.path.join(root, name)
path = to_text(full_path[len(b_collection_path) + 1::], errors='surrogate_or_strict')
if full_path not in collection_files:
modified_content.append(
ModifiedContent(filename=path, expected='the file does not exist', installed='the file exists')
)
for name in dirs:
full_path = os.path.join(root, name)
path = to_text(full_path[len(b_collection_path) + 1::], errors='surrogate_or_strict')
if full_path not in collection_dirs:
modified_content.append(
ModifiedContent(filename=path, expected='the directory does not exist', installed='the directory exists')
)
if modified_content:
result.success = False
display.display(
'Collection {fqcn!s} contains modified content '
'in the following files:'.
format(fqcn=to_text(local_collection.fqcn)),
)
for content_change in modified_content:
display.display(' %s' % content_change.filename)
display.v(" Expected: %s\n Found: %s" % (content_change.expected, content_change.installed))
else:
what = "are internally consistent with its manifest" if verify_local_only else "match the remote collection"
display.display(
"Successfully verified that checksums for '{coll!s}' {what!s}.".
format(coll=local_collection, what=what),
)
return result
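# An illustrative condensation (not module code) of the trust chain walked by
# verify_local_collection() above; 'get_json', 'get_hash' and 'check_file' are
# hypothetical callables standing in for the functools.partial() helpers and
# _verify_file_hash(). MANIFEST_FILENAME is the module constant defined above.
def _demo_verification_chain(get_json, get_hash, check_file):
    # 1. MANIFEST.json is the anchor: its hash (and any GnuPG signatures)
    #    are verified first.
    check_file(MANIFEST_FILENAME, get_hash(MANIFEST_FILENAME))
    # 2. MANIFEST.json names FILES.json and carries its checksum.
    fm = get_json(MANIFEST_FILENAME)['file_manifest_file']
    check_file(fm['name'], fm['chksum_%s' % fm['chksum_type']])
    # 3. FILES.json then lists a checksum for every individual file.
    for entry in get_json(fm['name'])['files']:
        if entry['ftype'] == 'file':
            check_file(entry['name'], entry['chksum_%s' % entry['chksum_type']])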
def verify_file_signatures(fqcn, manifest_file, detached_signatures, keyring, required_successful_count, ignore_signature_errors):
# type: (str, str, list[str], str, str, list[str]) -> bool
successful = 0
error_messages = []
signature_count_requirements = re.match(SIGNATURE_COUNT_RE, required_successful_count).groupdict()
strict = signature_count_requirements['strict'] or False
require_all = signature_count_requirements['all']
require_count = signature_count_requirements['count']
if require_count is not None:
require_count = int(require_count)
for signature in detached_signatures:
signature = to_text(signature, errors='surrogate_or_strict')
try:
verify_file_signature(manifest_file, signature, keyring, ignore_signature_errors)
except CollectionSignatureError as error:
if error.ignore:
# Do not include ignored errors in either the failed or successful count
continue
error_messages.append(error.report(fqcn))
else:
successful += 1
if require_all:
continue
if successful == require_count:
break
if strict and not successful:
verified = False
display.display(f"Signature verification failed for '{fqcn}': no successful signatures")
elif require_all:
verified = not error_messages
if not verified:
display.display(f"Signature verification failed for '{fqcn}': some signatures failed")
else:
verified = not detached_signatures or require_count == successful
if not verified:
display.display(f"Signature verification failed for '{fqcn}': fewer successful signatures than required")
if not verified:
for msg in error_messages:
display.vvvv(msg)
return verified
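# How SIGNATURE_COUNT_RE drives the policy above (a sketch with assumed
# inputs): a leading '+' makes the requirement strict (zero successful
# signatures always fails), 'all' requires every signature to pass, and a
# bare integer is a success quota.
def _demo_signature_count_parsing():
    assert re.match(SIGNATURE_COUNT_RE, '1').groupdict() == {'strict': None, 'count': '1', 'all': None}
    assert re.match(SIGNATURE_COUNT_RE, '+all').groupdict() == {'strict': '+', 'count': None, 'all': 'all'}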
def verify_file_signature(manifest_file, detached_signature, keyring, ignore_signature_errors):
# type: (str, str, str, list[str]) -> None
"""Run the gpg command and parse any errors. Raises CollectionSignatureError on failure."""
gpg_result, gpg_verification_rc = run_gpg_verify(manifest_file, detached_signature, keyring, display)
if gpg_result:
errors = parse_gpg_errors(gpg_result)
try:
error = next(errors)
except StopIteration:
pass
else:
reasons = []
ignored_reasons = 0
for error in chain([error], errors):
# Get error status (dict key) from the class (dict value)
status_code = list(GPG_ERROR_MAP.keys())[list(GPG_ERROR_MAP.values()).index(error.__class__)]
if status_code in ignore_signature_errors:
ignored_reasons += 1
reasons.append(error.get_gpg_error_description())
ignore = len(reasons) == ignored_reasons
raise CollectionSignatureError(reasons=set(reasons), stdout=gpg_result, rc=gpg_verification_rc, ignore=ignore)
if gpg_verification_rc:
raise CollectionSignatureError(stdout=gpg_result, rc=gpg_verification_rc)
# No errors and rc is 0, verify was successful
return None
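# The status-code lookup in verify_file_signature() inverts GPG_ERROR_MAP
# (status string -> error class) by position; a generic sketch of that
# reverse lookup using a toy mapping (the real map lives in
# ansible.galaxy.collection.gpg, and these entries are hypothetical stand-ins):
def _demo_reverse_map_lookup():
    toy_map = {'BADSIG': ValueError, 'EXPSIG': KeyError}
    error_cls = KeyError
    return list(toy_map.keys())[list(toy_map.values()).index(error_cls)]  # 'EXPSIG'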
def build_collection(u_collection_path, u_output_path, force):
# type: (str, str, bool) -> str
"""Creates the Ansible collection artifact in a .tar.gz file.
:param u_collection_path: The path to the collection to build. This should be the directory that contains the
galaxy.yml file.
:param u_output_path: The path to create the collection build artifact. This should be a directory.
:param force: Whether to overwrite an existing collection build artifact or fail.
:return: The path to the collection build artifact.
"""
b_collection_path = to_bytes(u_collection_path, errors='surrogate_or_strict')
try:
collection_meta = _get_meta_from_src_dir(b_collection_path)
except LookupError as lookup_err:
raise AnsibleError(to_native(lookup_err)) from lookup_err
collection_manifest = _build_manifest(**collection_meta)
file_manifest = _build_files_manifest(
b_collection_path,
collection_meta['namespace'], # type: ignore[arg-type]
collection_meta['name'], # type: ignore[arg-type]
collection_meta['build_ignore'], # type: ignore[arg-type]
collection_meta['manifest'], # type: ignore[arg-type]
collection_meta['license_file'], # type: ignore[arg-type]
)
artifact_tarball_file_name = '{ns!s}-{name!s}-{ver!s}.tar.gz'.format(
name=collection_meta['name'],
ns=collection_meta['namespace'],
ver=collection_meta['version'],
)
b_collection_output = os.path.join(
to_bytes(u_output_path),
to_bytes(artifact_tarball_file_name, errors='surrogate_or_strict'),
)
if os.path.exists(b_collection_output):
if os.path.isdir(b_collection_output):
raise AnsibleError("The output collection artifact '%s' already exists, "
"but is a directory - aborting" % to_native(b_collection_output))
elif not force:
raise AnsibleError("The file '%s' already exists. You can use --force to re-create "
"the collection artifact." % to_native(b_collection_output))
collection_output = _build_collection_tar(b_collection_path, b_collection_output, collection_manifest, file_manifest)
return collection_output
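# Minimal usage sketch (assumed paths, not from the module): build_collection()
# expects the directory containing galaxy.yml plus a separate output directory.
# Note the output path is used as given (a relative path resolves against the
# current working directory), not relative to the collection path.
def _demo_build_collection():
    artifact = build_collection(
        '/home/user/ansible_collections/ns/col',           # hypothetical source tree
        '/home/user/ansible_collections/ns/col/releases',  # hypothetical output dir
        force=True,
    )
    return artifact  # e.g. '<output dir>/ns-col-1.0.0.tar.gz'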
def download_collections(
collections, # type: t.Iterable[Requirement]
output_path, # type: str
apis, # type: t.Iterable[GalaxyAPI]
no_deps, # type: bool
allow_pre_release, # type: bool
artifacts_manager, # type: ConcreteArtifactsManager
): # type: (...) -> None
"""Download Ansible collections as their tarball from a Galaxy server to the path specified and creates a requirements
file of the downloaded requirements to be used for an install.
:param collections: The collections to download, should be a list of tuples with (name, requirement, Galaxy Server).
:param output_path: The path to download the collections to.
:param apis: A list of GalaxyAPIs to query when searching for a collection.
:param no_deps: Ignore any collection dependencies and only download the base requirements.
:param allow_pre_release: Do not ignore pre-release versions when selecting the latest.
"""
with _display_progress("Process download dependency map"):
dep_map = _resolve_depenency_map(
set(collections),
galaxy_apis=apis,
preferred_candidates=None,
concrete_artifacts_manager=artifacts_manager,
no_deps=no_deps,
allow_pre_release=allow_pre_release,
upgrade=False,
# Avoid overhead getting signatures since they are not currently applicable to downloaded collections
include_signatures=False,
offline=False,
)
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
requirements = []
with _display_progress(
"Starting collection download process to '{path!s}'".
format(path=output_path),
):
for fqcn, concrete_coll_pin in dep_map.copy().items(): # FIXME: move into the provider
if concrete_coll_pin.is_virtual:
display.display(
'Virtual collection {coll!s} is not downloadable'.
format(coll=to_text(concrete_coll_pin)),
)
continue
display.display(
u"Downloading collection '{coll!s}' to '{path!s}'".
format(coll=to_text(concrete_coll_pin), path=to_text(b_output_path)),
)
b_src_path = artifacts_manager.get_artifact_path_from_unknown(concrete_coll_pin)
b_dest_path = os.path.join(
b_output_path,
os.path.basename(b_src_path),
)
if concrete_coll_pin.is_dir:
b_dest_path = to_bytes(
build_collection(
to_text(b_src_path, errors='surrogate_or_strict'),
to_text(output_path, errors='surrogate_or_strict'),
force=True,
),
errors='surrogate_or_strict',
)
else:
shutil.copy(to_native(b_src_path), to_native(b_dest_path))
display.display(
"Collection '{coll!s}' was downloaded successfully".
format(coll=concrete_coll_pin),
)
requirements.append({
# FIXME: Consider using a more specific upgraded format
# FIXME: having FQCN in the name field, with src field
# FIXME: pointing to the file path, and explicitly set
# FIXME: type. If version and name are set, it'd
# FIXME: perform validation against the actual metadata
# FIXME: in the artifact src points at.
'name': to_native(os.path.basename(b_dest_path)),
'version': concrete_coll_pin.ver,
})
requirements_path = os.path.join(output_path, 'requirements.yml')
b_requirements_path = to_bytes(
requirements_path, errors='surrogate_or_strict',
)
display.display(
u'Writing requirements.yml file of downloaded collections '
"to '{path!s}'".format(path=to_text(requirements_path)),
)
yaml_bytes = to_bytes(
yaml_dump({'collections': requirements}),
errors='surrogate_or_strict',
)
with open(b_requirements_path, mode='wb') as req_fd:
req_fd.write(yaml_bytes)
def publish_collection(collection_path, api, wait, timeout):
"""Publish an Ansible collection tarball into an Ansible Galaxy server.
:param collection_path: The path to the collection tarball to publish.
:param api: A GalaxyAPI to publish the collection to.
:param wait: Whether to wait until the import process is complete.
:param timeout: The time in seconds to wait for the import process to finish, 0 is indefinite.
"""
import_uri = api.publish_collection(collection_path)
if wait:
# Galaxy returns a url fragment which differs between v2 and v3. The second to last entry is
# always the task_id, though.
# v2: {"task": "https://galaxy-dev.ansible.com/api/v2/collection-imports/35573/"}
# v3: {"task": "/api/automation-hub/v3/imports/collections/838d1308-a8f4-402c-95cb-7823f3806cd8/"}
task_id = None
for path_segment in reversed(import_uri.split('/')):
if path_segment:
task_id = path_segment
break
if not task_id:
raise AnsibleError("Publishing the collection did not return valid task info. Cannot wait for task status. Returned task info: '%s'" % import_uri)
with _display_progress(
"Collection has been published to the Galaxy server "
"{api.name!s} {api.api_server!s}".format(api=api),
):
api.wait_import_task(task_id, timeout)
display.display("Collection has been successfully published and imported to the Galaxy server %s %s"
% (api.name, api.api_server))
else:
display.display("Collection has been pushed to the Galaxy server %s %s, not waiting until import has "
"completed due to --no-wait being set. Import task results can be found at %s"
% (api.name, api.api_server, import_uri))
def install_collections(
collections, # type: t.Iterable[Requirement]
output_path, # type: str
apis, # type: t.Iterable[GalaxyAPI]
ignore_errors, # type: bool
no_deps, # type: bool
force, # type: bool
force_deps, # type: bool
upgrade, # type: bool
allow_pre_release, # type: bool
artifacts_manager, # type: ConcreteArtifactsManager
disable_gpg_verify, # type: bool
offline, # type: bool
read_requirement_paths, # type: set[str]
): # type: (...) -> None
"""Install Ansible collections to the path specified.
:param collections: The collections to install.
:param output_path: The path to install the collections to.
:param apis: A list of GalaxyAPIs to query when searching for a collection.
:param ignore_errors: Whether to ignore any errors when installing the collection.
:param no_deps: Ignore any collection dependencies and only install the base requirements.
:param force: Re-install a collection if it has already been installed.
:param force_deps: Re-install a collection as well as its dependencies if they have already been installed.
"""
existing_collections = {
Requirement(coll.fqcn, coll.ver, coll.src, coll.type, None)
for path in {output_path} | read_requirement_paths
for coll in find_existing_collections(path, artifacts_manager)
}
unsatisfied_requirements = set(
chain.from_iterable(
(
Requirement.from_dir_path(to_bytes(sub_coll), artifacts_manager)
for sub_coll in (
artifacts_manager.
get_direct_collection_dependencies(install_req).
keys()
)
)
if install_req.is_subdirs else (install_req, )
for install_req in collections
),
)
requested_requirements_names = {req.fqcn for req in unsatisfied_requirements}
# NOTE: Don't attempt to reevaluate already installed deps
# NOTE: unless `--force` or `--force-with-deps` is passed
unsatisfied_requirements -= set() if force or force_deps else {
req
for req in unsatisfied_requirements
for exs in existing_collections
if req.fqcn == exs.fqcn and meets_requirements(exs.ver, req.ver)
}
if not unsatisfied_requirements and not upgrade:
display.display(
'Nothing to do. All requested collections are already '
'installed. If you want to reinstall them, '
'consider using `--force`.'
)
return
# FIXME: This probably needs to be improved to
# FIXME: properly match differing src/type.
existing_non_requested_collections = {
coll for coll in existing_collections
if coll.fqcn not in requested_requirements_names
}
preferred_requirements = (
[] if force_deps
else existing_non_requested_collections if force
else existing_collections
)
preferred_collections = {
# NOTE: No need to include signatures if the collection is already installed
Candidate(coll.fqcn, coll.ver, coll.src, coll.type, None)
for coll in preferred_requirements
}
with _display_progress("Process install dependency map"):
dependency_map = _resolve_depenency_map(
collections,
galaxy_apis=apis,
preferred_candidates=preferred_collections,
concrete_artifacts_manager=artifacts_manager,
no_deps=no_deps,
allow_pre_release=allow_pre_release,
upgrade=upgrade,
include_signatures=not disable_gpg_verify,
offline=offline,
)
keyring_exists = artifacts_manager.keyring is not None
with _display_progress("Starting collection install process"):
for fqcn, concrete_coll_pin in dependency_map.items():
if concrete_coll_pin.is_virtual:
display.vvvv(
"'{coll!s}' is virtual, skipping.".
format(coll=to_text(concrete_coll_pin)),
)
continue
if concrete_coll_pin in preferred_collections:
display.display(
"'{coll!s}' is already installed, skipping.".
format(coll=to_text(concrete_coll_pin)),
)
continue
if not disable_gpg_verify and concrete_coll_pin.signatures and not keyring_exists:
# Duplicate warning msgs are not displayed
display.warning(
"The GnuPG keyring used for collection signature "
"verification was not configured but signatures were "
"provided by the Galaxy server to verify authenticity. "
"Configure a keyring for ansible-galaxy to use "
"or disable signature verification. "
"Skipping signature verification."
)
if concrete_coll_pin.type == 'galaxy':
concrete_coll_pin = concrete_coll_pin.with_signatures_repopulated()
try:
install(concrete_coll_pin, output_path, artifacts_manager)
except AnsibleError as err:
if ignore_errors:
display.warning(
'Failed to install collection {coll!s} but skipping '
'due to --ignore-errors being set. Error: {error!s}'.
format(
coll=to_text(concrete_coll_pin),
error=to_text(err),
)
)
else:
raise
# NOTE: imported in ansible.cli.galaxy
def validate_collection_name(name): # type: (str) -> str
"""Validates the collection name as an input from the user or a requirements file fit the requirements.
:param name: The input name with optional range specifier split by ':'.
:return: The input value, required for argparse validation.
"""
collection, dummy, dummy = name.partition(':')
if AnsibleCollectionRef.is_valid_collection_name(collection):
return name
raise AnsibleError("Invalid collection name '%s', "
"name must be in the format <namespace>.<collection>. \n"
"Please make sure namespace and collection name contains "
"characters from [a-zA-Z0-9_] only." % name)
# NOTE: imported in ansible.cli.galaxy
def validate_collection_path(collection_path): # type: (str) -> str
"""Ensure a given path ends with 'ansible_collections'
:param collection_path: The path that should end in 'ansible_collections'
:return: collection_path ending in 'ansible_collections' if it does not already.
"""
if os.path.split(collection_path)[1] != 'ansible_collections':
return os.path.join(collection_path, 'ansible_collections')
return collection_path
def verify_collections(
collections, # type: t.Iterable[Requirement]
search_paths, # type: t.Iterable[str]
apis, # type: t.Iterable[GalaxyAPI]
ignore_errors, # type: bool
local_verify_only, # type: bool
artifacts_manager, # type: ConcreteArtifactsManager
): # type: (...) -> list[CollectionVerifyResult]
r"""Verify the integrity of locally installed collections.
:param collections: The collections to check.
:param search_paths: Locations for the local collection lookup.
:param apis: A list of GalaxyAPIs to query when searching for a collection.
:param ignore_errors: Whether to ignore any errors when verifying the collection.
:param local_verify_only: When True, skip downloads and only verify local manifests.
:param artifacts_manager: Artifacts manager.
:return: list of CollectionVerifyResult objects describing the results of each collection verification
"""
results = [] # type: list[CollectionVerifyResult]
api_proxy = MultiGalaxyAPIProxy(apis, artifacts_manager)
with _display_progress():
for collection in collections:
try:
if collection.is_concrete_artifact:
raise AnsibleError(
message="'{coll_type!s}' type is not supported. "
'The format namespace.name is expected.'.
format(coll_type=collection.type)
)
# NOTE: Verify local collection exists before
# NOTE: downloading its source artifact from
# NOTE: a galaxy server.
default_err = 'Collection %s is not installed in any of the collection paths.' % collection.fqcn
for search_path in search_paths:
b_search_path = to_bytes(
os.path.join(
search_path,
collection.namespace, collection.name,
),
errors='surrogate_or_strict',
)
if not os.path.isdir(b_search_path):
continue
if not _is_installed_collection_dir(b_search_path):
default_err = (
"Collection %s does not have a MANIFEST.json. "
"A MANIFEST.json is expected if the collection has been built "
"and installed via ansible-galaxy" % collection.fqcn
)
continue
local_collection = Candidate.from_dir_path(
b_search_path, artifacts_manager,
)
supplemental_signatures = [
get_signature_from_source(source, display)
for source in collection.signature_sources or []
]
local_collection = Candidate(
local_collection.fqcn,
local_collection.ver,
local_collection.src,
local_collection.type,
signatures=frozenset(supplemental_signatures),
)
break
else:
raise AnsibleError(message=default_err)
if local_verify_only:
remote_collection = None
else:
signatures = api_proxy.get_signatures(local_collection)
signatures.extend([
get_signature_from_source(source, display)
for source in collection.signature_sources or []
])
remote_collection = Candidate(
collection.fqcn,
collection.ver if collection.ver != '*'
else local_collection.ver,
None, 'galaxy',
frozenset(signatures),
)
# Download the collection from a galaxy server for comparison
try:
# NOTE: If there are no signatures, trigger the lookup. If found,
# NOTE: it'll cache download URL and token in artifact manager.
# NOTE: If there are no Galaxy server signatures, only user-provided signature URLs,
# NOTE: those alone validate the MANIFEST.json and the remote collection is not downloaded.
# NOTE: The remote MANIFEST.json is only used in verification if there are no signatures.
if artifacts_manager.keyring is None or not signatures:
api_proxy.get_collection_version_metadata(
remote_collection,
)
except AnsibleError as e: # FIXME: does this actually emit any errors?
# FIXME: extract the actual message and adjust this:
expected_error_msg = (
'Failed to find collection {coll.fqcn!s}:{coll.ver!s}'.
format(coll=collection)
)
if e.message == expected_error_msg:
raise AnsibleError(
'Failed to find remote collection '
"'{coll!s}' on any of the galaxy servers".
format(coll=collection)
)
raise
result = verify_local_collection(local_collection, remote_collection, artifacts_manager)
results.append(result)
except AnsibleError as err:
if ignore_errors:
display.warning(
"Failed to verify collection '{coll!s}' but skipping "
'due to --ignore-errors being set. '
'Error: {err!s}'.
format(coll=collection, err=to_text(err)),
)
else:
raise
return results
@contextmanager
def _tempdir():
b_temp_path = tempfile.mkdtemp(dir=to_bytes(C.DEFAULT_LOCAL_TMP, errors='surrogate_or_strict'))
try:
yield b_temp_path
finally:
shutil.rmtree(b_temp_path)
@contextmanager
def _display_progress(msg=None):
config_display = C.GALAXY_DISPLAY_PROGRESS
display_wheel = sys.stdout.isatty() if config_display is None else config_display
global display
if msg is not None:
display.display(msg)
if not display_wheel:
yield
return
def progress(display_queue, actual_display):
actual_display.debug("Starting display_progress display thread")
t = threading.current_thread()
while True:
for c in "|/-\\":
actual_display.display(c + "\b", newline=False)
time.sleep(0.1)
# Display a message from the main thread
while True:
try:
method, args, kwargs = display_queue.get(block=False, timeout=0.1)
except queue.Empty:
break
else:
func = getattr(actual_display, method)
func(*args, **kwargs)
if getattr(t, "finish", False):
actual_display.debug("Received end signal for display_progress display thread")
return
class DisplayThread(object):
def __init__(self, display_queue):
self.display_queue = display_queue
def __getattr__(self, attr):
def call_display(*args, **kwargs):
self.display_queue.put((attr, args, kwargs))
return call_display
# Temporarily override the global display with our own, which adds the calls to a queue for the display thread to process.
old_display = display
try:
display_queue = queue.Queue()
display = DisplayThread(display_queue)
t = threading.Thread(target=progress, args=(display_queue, old_display))
t.daemon = True
t.start()
try:
yield
finally:
t.finish = True
t.join()
except Exception:
# The exception is re-raised so we can be sure the thread is finished and is no longer using the display
raise
finally:
display = old_display
def _verify_file_hash(b_path, filename, expected_hash, error_queue):
b_file_path = to_bytes(os.path.join(to_text(b_path), filename), errors='surrogate_or_strict')
if not os.path.isfile(b_file_path):
actual_hash = None
else:
with open(b_file_path, mode='rb') as file_object:
actual_hash = _consume_file(file_object)
if expected_hash != actual_hash:
error_queue.append(ModifiedContent(filename=filename, expected=expected_hash, installed=actual_hash))
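# Sketch of the intended calling pattern (an assumption based on the signature,
# not original code): callers pass a plain list as error_queue, invoke
# _verify_file_hash(b_path, name, expected, errors) once per manifest entry,
# and a non-empty errors list afterwards marks the collection content as modified.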
def _make_manifest():
return {
'files': [
{
'name': '.',
'ftype': 'dir',
'chksum_type': None,
'chksum_sha256': None,
'format': MANIFEST_FORMAT,
},
],
'format': MANIFEST_FORMAT,
}
def _make_entry(name, ftype, chksum_type='sha256', chksum=None):
return {
'name': name,
'ftype': ftype,
'chksum_type': chksum_type if chksum else None,
f'chksum_{chksum_type}': chksum,
'format': MANIFEST_FORMAT
}
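# Illustrative output of _make_entry (the file name and checksum are made up):
#   _make_entry('plugins/modules/demo.py', 'file', chksum='ab12...')
#   -> {'name': 'plugins/modules/demo.py', 'ftype': 'file',
#       'chksum_type': 'sha256', 'chksum_sha256': 'ab12...',
#       'format': MANIFEST_FORMAT}
# With chksum omitted, chksum_type is nulled as well, matching the directory
# entry emitted by _make_manifest above.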
def _build_files_manifest(b_collection_path, namespace, name, ignore_patterns,
manifest_control, license_file):
# type: (bytes, str, str, list[str], dict[str, t.Any], t.Optional[str]) -> FilesManifestType
if ignore_patterns and manifest_control is not Sentinel:
raise AnsibleError('"build_ignore" and "manifest" are mutually exclusive')
if manifest_control is not Sentinel:
return _build_files_manifest_distlib(
b_collection_path,
namespace,
name,
manifest_control,
license_file,
)
return _build_files_manifest_walk(b_collection_path, namespace, name, ignore_patterns)
def _build_files_manifest_distlib(b_collection_path, namespace, name, manifest_control,
license_file):
# type: (bytes, str, str, dict[str, t.Any], t.Optional[str]) -> FilesManifestType
if not HAS_DISTLIB:
raise AnsibleError('Use of "manifest" requires the python "distlib" library')
if manifest_control is None:
manifest_control = {}
try:
control = ManifestControl(**manifest_control)
except TypeError as ex:
raise AnsibleError(f'Invalid "manifest" provided: {ex}')
if not is_sequence(control.directives):
raise AnsibleError(f'"manifest.directives" must be a list, got: {control.directives.__class__.__name__}')
if not isinstance(control.omit_default_directives, bool):
raise AnsibleError(
'"manifest.omit_default_directives" is expected to be a boolean, got: '
f'{control.omit_default_directives.__class__.__name__}'
)
if control.omit_default_directives and not control.directives:
raise AnsibleError(
'"manifest.omit_default_directives" was set to True, but no directives were defined '
'in "manifest.directives". This would produce an empty collection artifact.'
)
directives = []
if control.omit_default_directives:
directives.extend(control.directives)
else:
directives.extend([
'include meta/*.yml',
'include *.txt *.md *.rst *.license COPYING LICENSE',
'recursive-include .reuse **',
'recursive-include LICENSES **',
'recursive-include tests **',
'recursive-include docs **.rst **.yml **.yaml **.json **.j2 **.txt **.license',
'recursive-include roles **.yml **.yaml **.json **.j2 **.license',
'recursive-include playbooks **.yml **.yaml **.json **.license',
'recursive-include changelogs **.yml **.yaml **.license',
'recursive-include plugins */**.py */**.license',
])
if license_file:
directives.append(f'include {license_file}')
plugins = set(l.package.split('.')[-1] for d, l in get_all_plugin_loaders())
for plugin in sorted(plugins):
if plugin in ('modules', 'module_utils'):
continue
elif plugin in C.DOCUMENTABLE_PLUGINS:
directives.append(
f'recursive-include plugins/{plugin} **.yml **.yaml'
)
directives.extend([
'recursive-include plugins/modules **.ps1 **.yml **.yaml **.license',
'recursive-include plugins/module_utils **.ps1 **.psm1 **.cs **.license',
])
directives.extend(control.directives)
directives.extend([
f'exclude galaxy.yml galaxy.yaml MANIFEST.json FILES.json {namespace}-{name}-*.tar.gz',
'recursive-exclude tests/output **',
'global-exclude /.* /__pycache__ *.pyc *.pyo *.bak *~ *.swp',
])
display.vvv('Manifest Directives:')
display.vvv(textwrap.indent('\n'.join(directives), ' '))
u_collection_path = to_text(b_collection_path, errors='surrogate_or_strict')
m = Manifest(u_collection_path)
for directive in directives:
try:
m.process_directive(directive)
except DistlibException as e:
raise AnsibleError(f'Invalid manifest directive: {e}')
except Exception as e:
raise AnsibleError(f'Unknown error processing manifest directive: {e}')
manifest = _make_manifest()
for abs_path in m.sorted(wantdirs=True):
rel_path = os.path.relpath(abs_path, u_collection_path)
if os.path.isdir(abs_path):
manifest_entry = _make_entry(rel_path, 'dir')
else:
manifest_entry = _make_entry(
rel_path,
'file',
chksum_type='sha256',
chksum=secure_hash(abs_path, hash_func=sha256)
)
manifest['files'].append(manifest_entry)
return manifest
def _build_files_manifest_walk(b_collection_path, namespace, name, ignore_patterns):
# type: (bytes, str, str, list[str]) -> FilesManifestType
# We always ignore .pyc and .retry files as well as some well known version control directories. The ignore
# patterns can be extended by the build_ignore key in galaxy.yml
b_ignore_patterns = [
b'MANIFEST.json',
b'FILES.json',
b'galaxy.yml',
b'galaxy.yaml',
b'.git',
b'*.pyc',
b'*.retry',
b'tests/output', # Ignore ansible-test result output directory.
to_bytes('{0}-{1}-*.tar.gz'.format(namespace, name)), # Ignores previously built artifacts in the root dir.
]
b_ignore_patterns += [to_bytes(p) for p in ignore_patterns]
b_ignore_dirs = frozenset([b'CVS', b'.bzr', b'.hg', b'.git', b'.svn', b'__pycache__', b'.tox'])
manifest = _make_manifest()
def _walk(b_path, b_top_level_dir):
for b_item in os.listdir(b_path):
b_abs_path = os.path.join(b_path, b_item)
b_rel_base_dir = b'' if b_path == b_top_level_dir else b_path[len(b_top_level_dir) + 1:]
b_rel_path = os.path.join(b_rel_base_dir, b_item)
rel_path = to_text(b_rel_path, errors='surrogate_or_strict')
if os.path.isdir(b_abs_path):
if any(b_item == b_path for b_path in b_ignore_dirs) or \
any(fnmatch.fnmatch(b_rel_path, b_pattern) for b_pattern in b_ignore_patterns):
display.vvv("Skipping '%s' for collection build" % to_text(b_abs_path))
continue
if os.path.islink(b_abs_path):
b_link_target = os.path.realpath(b_abs_path)
if not _is_child_path(b_link_target, b_top_level_dir):
display.warning("Skipping '%s' as it is a symbolic link to a directory outside the collection"
% to_text(b_abs_path))
continue
manifest['files'].append(_make_entry(rel_path, 'dir'))
if not os.path.islink(b_abs_path):
_walk(b_abs_path, b_top_level_dir)
else:
if any(fnmatch.fnmatch(b_rel_path, b_pattern) for b_pattern in b_ignore_patterns):
display.vvv("Skipping '%s' for collection build" % to_text(b_abs_path))
continue
# Handling of file symlinks occurs in _build_collection_tar; the manifest entry for a symlink is the same as for
# a normal file.
manifest['files'].append(
_make_entry(
rel_path,
'file',
chksum_type='sha256',
chksum=secure_hash(b_abs_path, hash_func=sha256)
)
)
_walk(b_collection_path, b_collection_path)
return manifest
# FIXME: accept a dict produced from `galaxy.yml` instead of separate args
def _build_manifest(namespace, name, version, authors, readme, tags, description, license_file,
dependencies, repository, documentation, homepage, issues, **kwargs):
manifest = {
'collection_info': {
'namespace': namespace,
'name': name,
'version': version,
'authors': authors,
'readme': readme,
'tags': tags,
'description': description,
'license': kwargs['license'],
'license_file': license_file or None, # Handle galaxy.yml having an empty string (None)
'dependencies': dependencies,
'repository': repository,
'documentation': documentation,
'homepage': homepage,
'issues': issues,
},
'file_manifest_file': {
'name': 'FILES.json',
'ftype': 'file',
'chksum_type': 'sha256',
'chksum_sha256': None, # Filled out in _build_collection_tar
'format': MANIFEST_FORMAT
},
'format': MANIFEST_FORMAT,
}
return manifest
def _build_collection_tar(
b_collection_path, # type: bytes
b_tar_path, # type: bytes
collection_manifest, # type: CollectionManifestType
file_manifest, # type: FilesManifestType
): # type: (...) -> str
"""Build a tar.gz collection artifact from the manifest data."""
files_manifest_json = to_bytes(json.dumps(file_manifest, indent=True), errors='surrogate_or_strict')
collection_manifest['file_manifest_file']['chksum_sha256'] = secure_hash_s(files_manifest_json, hash_func=sha256)
collection_manifest_json = to_bytes(json.dumps(collection_manifest, indent=True), errors='surrogate_or_strict')
with _tempdir() as b_temp_path:
b_tar_filepath = os.path.join(b_temp_path, os.path.basename(b_tar_path))
with tarfile.open(b_tar_filepath, mode='w:gz') as tar_file:
# Add the MANIFEST.json and FILES.json file to the archive
for name, b in [(MANIFEST_FILENAME, collection_manifest_json), ('FILES.json', files_manifest_json)]:
b_io = BytesIO(b)
tar_info = tarfile.TarInfo(name)
tar_info.size = len(b)
tar_info.mtime = int(time.time())
tar_info.mode = 0o0644
tar_file.addfile(tarinfo=tar_info, fileobj=b_io)
for file_info in file_manifest['files']: # type: ignore[union-attr]
if file_info['name'] == '.':
continue
# arcname expects a native string, cannot be bytes
filename = to_native(file_info['name'], errors='surrogate_or_strict')
b_src_path = os.path.join(b_collection_path, to_bytes(filename, errors='surrogate_or_strict'))
def reset_stat(tarinfo):
if tarinfo.type != tarfile.SYMTYPE:
existing_is_exec = tarinfo.mode & stat.S_IXUSR
tarinfo.mode = 0o0755 if existing_is_exec or tarinfo.isdir() else 0o0644
tarinfo.uid = tarinfo.gid = 0
tarinfo.uname = tarinfo.gname = ''
return tarinfo
if os.path.islink(b_src_path):
b_link_target = os.path.realpath(b_src_path)
if not os.path.exists(b_link_target):
raise AnsibleError(f"Failed to find the target path '{to_native(b_link_target)}' for the symlink '{to_native(b_src_path)}'.")
if _is_child_path(b_link_target, b_collection_path):
b_rel_path = os.path.relpath(b_link_target, start=os.path.dirname(b_src_path))
tar_info = tarfile.TarInfo(filename)
tar_info.type = tarfile.SYMTYPE
tar_info.linkname = to_native(b_rel_path, errors='surrogate_or_strict')
tar_info = reset_stat(tar_info)
tar_file.addfile(tarinfo=tar_info)
continue
# Dealing with a normal file, just add it by name.
tar_file.add(
to_native(os.path.realpath(b_src_path)),
arcname=filename,
recursive=False,
filter=reset_stat,
)
shutil.copy(to_native(b_tar_filepath), to_native(b_tar_path))
collection_name = "%s.%s" % (collection_manifest['collection_info']['namespace'],
collection_manifest['collection_info']['name'])
tar_path = to_text(b_tar_path)
display.display(u'Created collection for %s at %s' % (collection_name, tar_path))
return tar_path
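# Hedged note on reset_stat above: uid/gid are zeroed, user/group names
# blanked, and modes collapsed to 0o755 (executables and dirs) or 0o644
# (everything else), so member metadata is normalized across build machines;
# member timestamps are not reset, so they can still vary between builds.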
def _build_collection_dir(b_collection_path, b_collection_output, collection_manifest, file_manifest):
"""Build a collection directory from the manifest data.
This should follow the same pattern as _build_collection_tar.
"""
os.makedirs(b_collection_output, mode=0o0755)
files_manifest_json = to_bytes(json.dumps(file_manifest, indent=True), errors='surrogate_or_strict')
collection_manifest['file_manifest_file']['chksum_sha256'] = secure_hash_s(files_manifest_json, hash_func=sha256)
collection_manifest_json = to_bytes(json.dumps(collection_manifest, indent=True), errors='surrogate_or_strict')
# Write contents to the files
for name, b in [(MANIFEST_FILENAME, collection_manifest_json), ('FILES.json', files_manifest_json)]:
b_path = os.path.join(b_collection_output, to_bytes(name, errors='surrogate_or_strict'))
with open(b_path, 'wb') as file_obj, BytesIO(b) as b_io:
shutil.copyfileobj(b_io, file_obj)
os.chmod(b_path, 0o0644)
base_directories = []
for file_info in sorted(file_manifest['files'], key=lambda x: x['name']):
if file_info['name'] == '.':
continue
src_file = os.path.join(b_collection_path, to_bytes(file_info['name'], errors='surrogate_or_strict'))
dest_file = os.path.join(b_collection_output, to_bytes(file_info['name'], errors='surrogate_or_strict'))
existing_is_exec = os.stat(src_file, follow_symlinks=False).st_mode & stat.S_IXUSR
mode = 0o0755 if existing_is_exec else 0o0644
# ensure symlinks to dirs are not translated to empty dirs
if os.path.isdir(src_file) and not os.path.islink(src_file):
mode = 0o0755
base_directories.append(src_file)
os.mkdir(dest_file, mode)
else:
# do not follow symlinks to ensure the original link is used
shutil.copyfile(src_file, dest_file, follow_symlinks=False)
# avoid setting specific permissions on symlinks, since chmod does not
# support skipping symlink resolution here and will throw an exception if the
# symlink target does not exist
if not os.path.islink(dest_file):
os.chmod(dest_file, mode)
collection_output = to_text(b_collection_output)
return collection_output
def _normalize_collection_path(path):
str_path = path.as_posix() if isinstance(path, pathlib.Path) else path
return pathlib.Path(
# This is annoying, but GalaxyCLI._resolve_path did it
os.path.expandvars(str_path)
).expanduser().absolute()
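# Illustrative behaviour of the normalizer (paths and env values are examples):
# with HOME=/home/dev, _normalize_collection_path('~/colls/ns/coll') and
# _normalize_collection_path(pathlib.Path('~/colls/ns/coll')) both yield the
# absolute Path('/home/dev/colls/ns/coll'), so the relative_to() filter below
# compares like with like.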
def find_existing_collections(path_filter, artifacts_manager, namespace_filter=None, collection_filter=None, dedupe=True):
"""Locate all collections under a given path.
:param path: Collection dirs layout search path.
:param artifacts_manager: Artifacts manager.
"""
if files is None:
raise AnsibleError('importlib_resources is not installed and is required')
if path_filter and not is_sequence(path_filter):
path_filter = [path_filter]
if namespace_filter and not is_sequence(namespace_filter):
namespace_filter = [namespace_filter]
if collection_filter and not is_sequence(collection_filter):
collection_filter = [collection_filter]
paths = set()
for path in files('ansible_collections').glob('*/*/'):
path = _normalize_collection_path(path)
if not path.is_dir():
continue
if path_filter:
for pf in path_filter:
try:
path.relative_to(_normalize_collection_path(pf))
except ValueError:
continue
break
else:
continue
paths.add(path)
seen = set()
for path in paths:
namespace = path.parent.name
name = path.name
if namespace_filter and namespace not in namespace_filter:
continue
if collection_filter and name not in collection_filter:
continue
if dedupe:
try:
collection_path = files(f'ansible_collections.{namespace}.{name}')
except ImportError:
continue
if collection_path in seen:
continue
seen.add(collection_path)
else:
collection_path = path
b_collection_path = to_bytes(collection_path.as_posix())
try:
req = Candidate.from_dir_path_as_unknown(b_collection_path, artifacts_manager)
except ValueError as val_err:
display.warning(f'{val_err}')
continue
display.vvv(
u"Found installed collection {coll!s} at '{path!s}'".
format(coll=to_text(req), path=to_text(req.src))
)
yield req
def install(collection, path, artifacts_manager): # FIXME: mv to dataclasses?
# type: (Candidate, str, ConcreteArtifactsManager) -> None
"""Install a collection under a given path.
:param collection: Collection to be installed.
:param path: Collection dirs layout path.
:param artifacts_manager: Artifacts manager.
"""
b_artifact_path = artifacts_manager.get_artifact_path_from_unknown(collection)
collection_path = os.path.join(path, collection.namespace, collection.name)
b_collection_path = to_bytes(collection_path, errors='surrogate_or_strict')
display.display(
u"Installing '{coll!s}' to '{path!s}'".
format(coll=to_text(collection), path=collection_path),
)
if os.path.exists(b_collection_path):
shutil.rmtree(b_collection_path)
if collection.is_dir:
install_src(collection, b_artifact_path, b_collection_path, artifacts_manager)
else:
install_artifact(
b_artifact_path,
b_collection_path,
artifacts_manager._b_working_directory,
collection.signatures,
artifacts_manager.keyring,
artifacts_manager.required_successful_signature_count,
artifacts_manager.ignore_signature_errors,
)
if (collection.is_online_index_pointer and isinstance(collection.src, GalaxyAPI)):
write_source_metadata(
collection,
b_collection_path,
artifacts_manager
)
display.display(
'{coll!s} was installed successfully'.
format(coll=to_text(collection)),
)
def write_source_metadata(collection, b_collection_path, artifacts_manager):
# type: (Candidate, bytes, ConcreteArtifactsManager) -> None
source_data = artifacts_manager.get_galaxy_artifact_source_info(collection)
b_yaml_source_data = to_bytes(yaml_dump(source_data), errors='surrogate_or_strict')
b_info_dest = collection.construct_galaxy_info_path(b_collection_path)
b_info_dir = os.path.split(b_info_dest)[0]
if os.path.exists(b_info_dir):
shutil.rmtree(b_info_dir)
try:
os.mkdir(b_info_dir, mode=0o0755)
with open(b_info_dest, mode='w+b') as fd:
fd.write(b_yaml_source_data)
os.chmod(b_info_dest, 0o0644)
except Exception:
# Ensure we don't leave the dir behind in case of a failure.
if os.path.isdir(b_info_dir):
shutil.rmtree(b_info_dir)
raise
def verify_artifact_manifest(manifest_file, signatures, keyring, required_signature_count, ignore_signature_errors):
# type: (str, list[str], str, str, list[str]) -> None
failed_verify = False
coll_path_parts = to_text(manifest_file, errors='surrogate_or_strict').split(os.path.sep)
collection_name = '%s.%s' % (coll_path_parts[-3], coll_path_parts[-2]) # get 'ns' and 'coll' from /path/to/ns/coll/MANIFEST.json
if not verify_file_signatures(collection_name, manifest_file, signatures, keyring, required_signature_count, ignore_signature_errors):
raise AnsibleError(f"Not installing {collection_name} because GnuPG signature verification failed.")
display.vvvv(f"GnuPG signature verification succeeded for {collection_name}")
def install_artifact(b_coll_targz_path, b_collection_path, b_temp_path, signatures, keyring, required_signature_count, ignore_signature_errors):
"""Install a collection from tarball under a given path.
:param b_coll_targz_path: Collection tarball to be installed.
:param b_collection_path: Collection dirs layout path.
:param b_temp_path: Temporary dir path.
:param signatures: frozenset of signatures to verify the MANIFEST.json
:param keyring: The keyring used during GPG verification
:param required_signature_count: The number of signatures that must successfully verify the collection
:param ignore_signature_errors: GPG errors to ignore during signature verification
"""
try:
with tarfile.open(b_coll_targz_path, mode='r') as collection_tar:
# Remove this once py3.11 is our controller minimum
# Workaround for https://bugs.python.org/issue47231
# See _extract_tar_dir
collection_tar._ansible_normalized_cache = {
m.name.removesuffix(os.path.sep): m for m in collection_tar.getmembers()
} # deprecated: description='TarFile member index' core_version='2.18' python_version='3.11'
# Verify the signature on the MANIFEST.json before extracting anything else
_extract_tar_file(collection_tar, MANIFEST_FILENAME, b_collection_path, b_temp_path)
if keyring is not None:
manifest_file = os.path.join(to_text(b_collection_path, errors='surrogate_or_strict'), MANIFEST_FILENAME)
verify_artifact_manifest(manifest_file, signatures, keyring, required_signature_count, ignore_signature_errors)
files_member_obj = collection_tar.getmember('FILES.json')
with _tarfile_extract(collection_tar, files_member_obj) as (dummy, files_obj):
files = json.loads(to_text(files_obj.read(), errors='surrogate_or_strict'))
_extract_tar_file(collection_tar, 'FILES.json', b_collection_path, b_temp_path)
for file_info in files['files']:
file_name = file_info['name']
if file_name == '.':
continue
if file_info['ftype'] == 'file':
_extract_tar_file(collection_tar, file_name, b_collection_path, b_temp_path,
expected_hash=file_info['chksum_sha256'])
else:
_extract_tar_dir(collection_tar, file_name, b_collection_path)
except Exception:
# Ensure we don't leave the dir behind in case of a failure.
shutil.rmtree(b_collection_path)
b_namespace_path = os.path.dirname(b_collection_path)
if not os.listdir(b_namespace_path):
os.rmdir(b_namespace_path)
raise
def install_src(collection, b_collection_path, b_collection_output_path, artifacts_manager):
r"""Install the collection from source control into given dir.
Generates the Ansible collection artifact data from a galaxy.yml and
installs the artifact to a directory.
This should follow the same pattern as build_collection, but instead
of creating an artifact, install it.
:param collection: Collection to be installed.
:param b_collection_path: Collection dirs layout path.
:param b_collection_output_path: The installation directory for the \
collection artifact.
:param artifacts_manager: Artifacts manager.
:raises AnsibleError: If no collection metadata found.
"""
collection_meta = artifacts_manager.get_direct_collection_meta(collection)
if 'build_ignore' not in collection_meta: # installed collection, not src
# FIXME: optimize this? use a different process? copy instead of build?
collection_meta['build_ignore'] = []
collection_meta['manifest'] = Sentinel
collection_manifest = _build_manifest(**collection_meta)
file_manifest = _build_files_manifest(
b_collection_path,
collection_meta['namespace'], collection_meta['name'],
collection_meta['build_ignore'],
collection_meta['manifest'],
collection_meta['license_file'],
)
collection_output_path = _build_collection_dir(
b_collection_path, b_collection_output_path,
collection_manifest, file_manifest,
)
display.display(
'Created collection for {coll!s} at {path!s}'.
format(coll=collection, path=collection_output_path)
)
def _extract_tar_dir(tar, dirname, b_dest):
""" Extracts a directory from a collection tar. """
dirname = to_native(dirname, errors='surrogate_or_strict').removesuffix(os.path.sep)
try:
tar_member = tar._ansible_normalized_cache[dirname]
except KeyError:
raise AnsibleError("Unable to extract '%s' from collection" % dirname)
b_dir_path = os.path.join(b_dest, to_bytes(dirname, errors='surrogate_or_strict'))
b_parent_path = os.path.dirname(b_dir_path)
try:
os.makedirs(b_parent_path, mode=0o0755)
except OSError as e:
if e.errno != errno.EEXIST:
raise
if tar_member.type == tarfile.SYMTYPE:
b_link_path = to_bytes(tar_member.linkname, errors='surrogate_or_strict')
if not _is_child_path(b_link_path, b_dest, link_name=b_dir_path):
raise AnsibleError("Cannot extract symlink '%s' in collection: path points to location outside of "
"collection '%s'" % (to_native(dirname), b_link_path))
os.symlink(b_link_path, b_dir_path)
else:
if not os.path.isdir(b_dir_path):
os.mkdir(b_dir_path, 0o0755)
def _extract_tar_file(tar, filename, b_dest, b_temp_path, expected_hash=None):
""" Extracts a file from a collection tar. """
with _get_tar_file_member(tar, filename) as (tar_member, tar_obj):
if tar_member.type == tarfile.SYMTYPE:
actual_hash = _consume_file(tar_obj)
else:
with tempfile.NamedTemporaryFile(dir=b_temp_path, delete=False) as tmpfile_obj:
actual_hash = _consume_file(tar_obj, tmpfile_obj)
if expected_hash and actual_hash != expected_hash:
raise AnsibleError("Checksum mismatch for '%s' inside collection at '%s'"
% (to_native(filename, errors='surrogate_or_strict'), to_native(tar.name)))
b_dest_filepath = os.path.abspath(os.path.join(b_dest, to_bytes(filename, errors='surrogate_or_strict')))
b_parent_dir = os.path.dirname(b_dest_filepath)
if not _is_child_path(b_parent_dir, b_dest):
raise AnsibleError("Cannot extract tar entry '%s' as it will be placed outside the collection directory"
% to_native(filename, errors='surrogate_or_strict'))
if not os.path.exists(b_parent_dir):
# Seems like Galaxy does not validate if all file entries have a corresponding dir ftype entry. This check
# makes sure we create the parent directory even if it wasn't set in the metadata.
os.makedirs(b_parent_dir, mode=0o0755)
if tar_member.type == tarfile.SYMTYPE:
b_link_path = to_bytes(tar_member.linkname, errors='surrogate_or_strict')
if not _is_child_path(b_link_path, b_dest, link_name=b_dest_filepath):
raise AnsibleError("Cannot extract symlink '%s' in collection: path points to location outside of "
"collection '%s'" % (to_native(filename), b_link_path))
os.symlink(b_link_path, b_dest_filepath)
else:
shutil.move(to_bytes(tmpfile_obj.name, errors='surrogate_or_strict'), b_dest_filepath)
# Default to rw-r--r-- and only add execute if the tar file has execute.
tar_member = tar.getmember(to_native(filename, errors='surrogate_or_strict'))
new_mode = 0o644
if stat.S_IMODE(tar_member.mode) & stat.S_IXUSR:
new_mode |= 0o0111
os.chmod(b_dest_filepath, new_mode)
def _get_tar_file_member(tar, filename):
n_filename = to_native(filename, errors='surrogate_or_strict')
try:
member = tar.getmember(n_filename)
except KeyError:
raise AnsibleError("Collection tar at '%s' does not contain the expected file '%s'." % (
to_native(tar.name),
n_filename))
return _tarfile_extract(tar, member)
def _get_json_from_tar_file(b_path, filename):
file_contents = ''
with tarfile.open(b_path, mode='r') as collection_tar:
with _get_tar_file_member(collection_tar, filename) as (dummy, tar_obj):
bufsize = 65536
data = tar_obj.read(bufsize)
while data:
file_contents += to_text(data)
data = tar_obj.read(bufsize)
return json.loads(file_contents)
def _get_tar_file_hash(b_path, filename):
with tarfile.open(b_path, mode='r') as collection_tar:
with _get_tar_file_member(collection_tar, filename) as (dummy, tar_obj):
return _consume_file(tar_obj)
def _get_file_hash(b_path, filename): # type: (bytes, str) -> str
filepath = os.path.join(b_path, to_bytes(filename, errors='surrogate_or_strict'))
with open(filepath, 'rb') as fp:
return _consume_file(fp)
def _is_child_path(path, parent_path, link_name=None):
""" Checks that path is a path within the parent_path specified. """
b_path = to_bytes(path, errors='surrogate_or_strict')
if link_name and not os.path.isabs(b_path):
# If link_name is specified, path is the link target and we need to resolve it to an absolute path relative to the link's directory.
b_link_dir = os.path.dirname(to_bytes(link_name, errors='surrogate_or_strict'))
b_path = os.path.abspath(os.path.join(b_link_dir, b_path))
b_parent_path = to_bytes(parent_path, errors='surrogate_or_strict')
return b_path == b_parent_path or b_path.startswith(b_parent_path + to_bytes(os.path.sep))
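# Hedged examples for the containment check (paths are illustrative):
#   _is_child_path(b'/c/ns/coll/roles', b'/c/ns/coll')  -> True
#   _is_child_path(b'/c/ns/coll-other', b'/c/ns/coll')  -> False (a path
#     separator is required after the parent prefix)
#   _is_child_path(b'../x', b'/c/ns/coll', link_name=b'/c/ns/coll/lnk')
#     first resolves the relative target against the link's directory.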
def _resolve_depenency_map(
requested_requirements, # type: t.Iterable[Requirement]
galaxy_apis, # type: t.Iterable[GalaxyAPI]
concrete_artifacts_manager, # type: ConcreteArtifactsManager
preferred_candidates, # type: t.Iterable[Candidate] | None
no_deps, # type: bool
allow_pre_release, # type: bool
upgrade, # type: bool
include_signatures, # type: bool
offline, # type: bool
): # type: (...) -> dict[str, Candidate]
"""Return the resolved dependency map."""
if not HAS_RESOLVELIB:
raise AnsibleError("Failed to import resolvelib, check that a supported version is installed")
if not HAS_PACKAGING:
raise AnsibleError("Failed to import packaging, check that a supported version is installed")
req = None
try:
dist = distribution('ansible-core')
except Exception:
pass
else:
req = next((rr for r in (dist.requires or []) if (rr := PkgReq(r)).name == 'resolvelib'), None)
finally:
if req is None:
# TODO: replace the hardcoded versions with a warning if the dist info is missing
# display.warning("Unable to find 'ansible-core' distribution requirements to verify the resolvelib version is supported.")
if not RESOLVELIB_LOWERBOUND <= RESOLVELIB_VERSION < RESOLVELIB_UPPERBOUND:
raise AnsibleError(
f"ansible-galaxy requires resolvelib<{RESOLVELIB_UPPERBOUND.vstring},>={RESOLVELIB_LOWERBOUND.vstring}"
)
elif not req.specifier.contains(RESOLVELIB_VERSION.vstring):
raise AnsibleError(f"ansible-galaxy requires {req.name}{req.specifier}")
pre_release_hint = '' if allow_pre_release else (
'Hint: Pre-releases hosted on Galaxy or Automation Hub are not '
'installed by default unless a specific version is requested. '
'To enable pre-releases globally, use --pre.'
)
collection_dep_resolver = build_collection_dependency_resolver(
galaxy_apis=galaxy_apis,
concrete_artifacts_manager=concrete_artifacts_manager,
preferred_candidates=preferred_candidates,
with_deps=not no_deps,
with_pre_releases=allow_pre_release,
upgrade=upgrade,
include_signatures=include_signatures,
offline=offline,
)
try:
return collection_dep_resolver.resolve(
requested_requirements,
max_rounds=2000000, # NOTE: same constant pip uses
).mapping
except CollectionDependencyResolutionImpossible as dep_exc:
conflict_causes = (
'* {req.fqcn!s}:{req.ver!s} ({dep_origin!s})'.format(
req=req_inf.requirement,
dep_origin='direct request'
if req_inf.parent is None
else 'dependency of {parent!s}'.
format(parent=req_inf.parent),
)
for req_inf in dep_exc.causes
)
error_msg_lines = list(chain(
(
'Failed to resolve the requested '
'dependencies map. Could not satisfy the following '
'requirements:',
),
conflict_causes,
))
error_msg_lines.append(pre_release_hint)
raise AnsibleError('\n'.join(error_msg_lines)) from dep_exc
except CollectionDependencyInconsistentCandidate as dep_exc:
parents = [
"%s.%s:%s" % (p.namespace, p.name, p.ver)
for p in dep_exc.criterion.iter_parent()
if p is not None
]
error_msg_lines = [
(
'Failed to resolve the requested dependencies map. '
'Got the candidate {req.fqcn!s}:{req.ver!s} ({dep_origin!s}) '
'which didn\'t satisfy all of the following requirements:'.
format(
req=dep_exc.candidate,
dep_origin='direct request'
if not parents else 'dependency of {parent!s}'.
format(parent=', '.join(parents))
)
)
]
for req in dep_exc.criterion.iter_requirement():
error_msg_lines.append(
'* {req.fqcn!s}:{req.ver!s}'.format(req=req)
)
error_msg_lines.append(pre_release_hint)
raise AnsibleError('\n'.join(error_msg_lines)) from dep_exc
except ValueError as exc:
raise AnsibleError(to_native(exc)) from exc
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,618 |
Ansible Galaxy collection build (build_collection) fails creating the files to be added to the release tarball
|
### Summary
When using:
```python
from ansible.galaxy.collection import build_collection
```
And calling it like:
```python
input_path = "/home/ccamacho/dev/automationhub/ccamacho/automationhub/"  # Where galaxy.yml is
output_path = "releases"  # The output folder relative to the input_path, like <input_path/output_path/>
build_collection(input_path, output_path, True)
```
It fails with:
```python-traceback
The full traceback is:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/ansible/executor/task_executor.py", line 165, in run
res = self._execute()
File "/usr/local/lib/python3.10/dist-packages/ansible/executor/task_executor.py", line 660, in _execute
result = self._handler.run(task_vars=vars_copy)
File "/home/ccamacho/.ansible/collections/ansible_collections/ccamacho/automationhub/plugins/action/collection_build.py", line 89, in run
galaxy.collection_build()
File "/home/ccamacho/.ansible/collections/ansible_collections/ccamacho/automationhub/plugins/plugin_utils/automationhub.py", line 64, in collection_build
build_collection(self.options.input_path, self.options.output_path, True)
File "/usr/local/lib/python3.10/dist-packages/ansible/galaxy/collection/__init__.py", line 514, in build_collection
collection_output = _build_collection_tar(b_collection_path, b_collection_output, collection_manifest, file_manifest)
File "/usr/local/lib/python3.10/dist-packages/ansible/galaxy/collection/__init__.py", line 1395, in _build_collection_tar
tar_file.add(
File "/usr/lib/python3.10/tarfile.py", line 2157, in add
tarinfo = self.gettarinfo(name, arcname)
File "/usr/lib/python3.10/tarfile.py", line 2030, in gettarinfo
statres = os.lstat(name)
FileNotFoundError: [Errno 2] No such file or directory: '/home/ccamacho/dev/automationhub/ccamacho/automationhub/laybooks/collection_publish.yml'
fatal: [localhost]: FAILED! => {
"msg": "Unexpected failure during module execution: [Errno 2] No such file or directory: '/home/ccamacho/dev/automationhub/ccamacho/automationhub/laybooks/collection_publish.yml'",
"stdout": ""
}
```
### Issue Type
Bug Report
### Component Name
[`lib/ansible/galaxy/collection/__init__.py`](https://github.com/ansible/ansible/blob/devel/lib/ansible/galaxy/collection/__init__.py)
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.3]
config file = None
configured module search path = ['/home/ccamacho/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.10/dist-packages/ansible
ansible collection location = /home/ccamacho/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
Nothing custom or special
### OS / Environment
CentOS Stream9
### Steps to Reproduce
Using python directly:
```python
input_path = "/home/ccamacho/dev/automationhub/ccamacho/automationhub/"  # Where galaxy.yml is
output_path = "releases"  # The output folder relative to the input_path, like <input_path/output_path/>
build_collection(input_path, output_path, True)
```
Testing the collection:
```shell
git clone https://github.com/ccamacho/automationhub
cd automationhub/ccamacho/automationhub/
ansible-galaxy collection build -v --force --output-path releases/
ansible-galaxy collection install releases/ccamacho-automationhub-1.0.0.tar.gz --force
ansible-playbook ./playbooks/collection_build.yml -vvvvv
```
### Expected Results
The collection is built correctly
### Actual Results
```python-traceback
FileNotFoundError: [Errno 2] No such file or directory: '/home/ccamacho/dev/automationhub/ccamacho/automationhub/laybooks/collection_publish.yml'
```
As you can see, it should be "playbooks" instead of "laybooks".
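A plausible minimal reproduction of the mangled name, assuming the culprit is the relative-path slicing in `_build_files_manifest_walk` (`b_rel_base_dir = b'' if b_path == b_top_level_dir else b_path[len(b_top_level_dir) + 1:]`) being fed a collection path that ends with a trailing separator:
```python
top = b"/home/ccamacho/dev/automationhub/ccamacho/automationhub/"  # trailing slash
child = b"/home/ccamacho/dev/automationhub/ccamacho/automationhub/playbooks"
# len(top) already counts the trailing '/', so the extra +1 eats the 'p':
print(child[len(top) + 1:])  # b'laybooks'
```
With no trailing slash on the input path, the `+ 1` correctly skips only the separator.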
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81618
|
https://github.com/ansible/ansible/pull/81619
|
e4b9f9c6ae77388ff0d0a51d4943939636a03161
|
9244b2bff86961c896c2f2325b7c7f30b461819c
| 2023-09-02T15:50:54Z |
python
| 2023-09-20T18:18:37Z |
test/units/galaxy/test_collection.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import json
import os
import pytest
import re
import tarfile
import tempfile
import uuid
from hashlib import sha256
from io import BytesIO
from unittest.mock import MagicMock, mock_open, patch
import ansible.constants as C
from ansible import context
from ansible.cli import galaxy
from ansible.cli.galaxy import GalaxyCLI
from ansible.errors import AnsibleError
from ansible.galaxy import api, collection, token
from ansible.module_utils.common.text.converters import to_bytes, to_native, to_text
from ansible.module_utils.six.moves import builtins
from ansible.utils import context_objects as co
from ansible.utils.display import Display
from ansible.utils.hashing import secure_hash_s
from ansible.utils.sentinel import Sentinel
@pytest.fixture(autouse='function')
def reset_cli_args():
co.GlobalCLIArgs._Singleton__instance = None
yield
co.GlobalCLIArgs._Singleton__instance = None
@pytest.fixture()
def collection_input(tmp_path_factory):
''' Creates a collection skeleton directory for build tests '''
test_dir = to_text(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
namespace = 'ansible_namespace'
collection = 'collection'
skeleton = os.path.join(os.path.dirname(os.path.split(__file__)[0]), 'cli', 'test_data', 'collection_skeleton')
galaxy_args = ['ansible-galaxy', 'collection', 'init', '%s.%s' % (namespace, collection),
'-c', '--init-path', test_dir, '--collection-skeleton', skeleton]
GalaxyCLI(args=galaxy_args).run()
collection_dir = os.path.join(test_dir, namespace, collection)
output_dir = to_text(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Output'))
return collection_dir, output_dir
@pytest.fixture()
def collection_artifact(monkeypatch, tmp_path_factory):
''' Creates a temp collection artifact and mocked open_url instance for publishing tests '''
mock_open = MagicMock()
monkeypatch.setattr(collection.concrete_artifact_manager, 'open_url', mock_open)
mock_uuid = MagicMock()
mock_uuid.return_value.hex = 'uuid'
monkeypatch.setattr(uuid, 'uuid4', mock_uuid)
tmp_path = tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections')
input_file = to_text(tmp_path / 'collection.tar.gz')
with tarfile.open(input_file, 'w:gz') as tfile:
b_io = BytesIO(b"\x00\x01\x02\x03")
tar_info = tarfile.TarInfo('test')
tar_info.size = 4
tar_info.mode = 0o0644
tfile.addfile(tarinfo=tar_info, fileobj=b_io)
return input_file, mock_open
@pytest.fixture()
def galaxy_yml_dir(request, tmp_path_factory):
b_test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections'))
b_galaxy_yml = os.path.join(b_test_dir, b'galaxy.yml')
with open(b_galaxy_yml, 'wb') as galaxy_obj:
galaxy_obj.write(to_bytes(request.param))
yield b_test_dir
@pytest.fixture()
def tmp_tarfile(tmp_path_factory, manifest_info):
''' Creates a temporary tar file for _extract_tar_file tests '''
filename = u'ÅÑŚÌβŁÈ'
temp_dir = to_bytes(tmp_path_factory.mktemp('test-%s Collections' % to_native(filename)))
tar_file = os.path.join(temp_dir, to_bytes('%s.tar.gz' % filename))
data = os.urandom(8)
with tarfile.open(tar_file, 'w:gz') as tfile:
b_io = BytesIO(data)
tar_info = tarfile.TarInfo(filename)
tar_info.size = len(data)
tar_info.mode = 0o0644
tfile.addfile(tarinfo=tar_info, fileobj=b_io)
b_data = to_bytes(json.dumps(manifest_info, indent=True), errors='surrogate_or_strict')
b_io = BytesIO(b_data)
tar_info = tarfile.TarInfo('MANIFEST.json')
tar_info.size = len(b_data)
tar_info.mode = 0o0644
tfile.addfile(tarinfo=tar_info, fileobj=b_io)
sha256_hash = sha256()
sha256_hash.update(data)
with tarfile.open(tar_file, 'r') as tfile:
yield temp_dir, tfile, filename, sha256_hash.hexdigest()
@pytest.fixture()
def galaxy_server():
context.CLIARGS._store = {'ignore_certs': False}
galaxy_api = api.GalaxyAPI(None, 'test_server', 'https://galaxy.ansible.com',
token=token.GalaxyToken(token='key'))
return galaxy_api
@pytest.fixture()
def manifest_template():
def get_manifest_info(namespace='ansible_namespace', name='collection', version='0.1.0'):
return {
"collection_info": {
"namespace": namespace,
"name": name,
"version": version,
"authors": [
"shertel"
],
"readme": "README.md",
"tags": [
"test",
"collection"
],
"description": "Test",
"license": [
"MIT"
],
"license_file": None,
"dependencies": {},
"repository": "https://github.com/{0}/{1}".format(namespace, name),
"documentation": None,
"homepage": None,
"issues": None
},
"file_manifest_file": {
"name": "FILES.json",
"ftype": "file",
"chksum_type": "sha256",
"chksum_sha256": "files_manifest_checksum",
"format": 1
},
"format": 1
}
return get_manifest_info
@pytest.fixture()
def manifest_info(manifest_template):
return manifest_template()
@pytest.fixture()
def manifest(manifest_info):
b_data = to_bytes(json.dumps(manifest_info))
with patch.object(builtins, 'open', mock_open(read_data=b_data)) as m:
with open('MANIFEST.json', mode='rb') as fake_file:
yield fake_file, sha256(b_data).hexdigest()
@pytest.mark.parametrize(
'required_signature_count,valid',
[
("1", True),
("+1", True),
("all", True),
("+all", True),
("-1", False),
("invalid", False),
("1.5", False),
("+", False),
]
)
def test_cli_options(required_signature_count, valid, monkeypatch):
cli_args = [
'ansible-galaxy',
'collection',
'install',
'namespace.collection:1.0.0',
'--keyring',
'~/.ansible/pubring.kbx',
'--required-valid-signature-count',
required_signature_count
]
galaxy_cli = GalaxyCLI(args=cli_args)
mock_execute_install = MagicMock()
monkeypatch.setattr(galaxy_cli, '_execute_install_collection', mock_execute_install)
if valid:
galaxy_cli.run()
else:
with pytest.raises(SystemExit, match='2') as error:
galaxy_cli.run()
@pytest.mark.parametrize(
"config,server",
[
(
# Options to create ini config
{
'url': 'https://galaxy.ansible.com',
'validate_certs': 'False',
},
# Expected server attributes
{
'validate_certs': False,
},
),
(
{
'url': 'https://galaxy.ansible.com',
'validate_certs': 'True',
},
{
'validate_certs': True,
},
),
],
)
def test_bool_type_server_config_options(config, server, monkeypatch):
cli_args = [
'ansible-galaxy',
'collection',
'install',
'namespace.collection:1.0.0',
]
config_lines = [
"[galaxy]",
"server_list=server1\n",
"[galaxy_server.server1]",
"url=%s" % config['url'],
"validate_certs=%s\n" % config['validate_certs'],
]
with tempfile.NamedTemporaryFile(suffix='.cfg') as tmp_file:
tmp_file.write(
to_bytes('\n'.join(config_lines))
)
tmp_file.flush()
with patch.object(C, 'GALAXY_SERVER_LIST', ['server1']):
with patch.object(C.config, '_config_file', tmp_file.name):
C.config._parse_config_file()
galaxy_cli = GalaxyCLI(args=cli_args)
mock_execute_install = MagicMock()
monkeypatch.setattr(galaxy_cli, '_execute_install_collection', mock_execute_install)
galaxy_cli.run()
assert galaxy_cli.api_servers[0].name == 'server1'
assert galaxy_cli.api_servers[0].validate_certs == server['validate_certs']
@pytest.mark.parametrize('global_ignore_certs', [True, False])
def test_validate_certs(global_ignore_certs, monkeypatch):
cli_args = [
'ansible-galaxy',
'collection',
'install',
'namespace.collection:1.0.0',
]
if global_ignore_certs:
cli_args.append('--ignore-certs')
galaxy_cli = GalaxyCLI(args=cli_args)
mock_execute_install = MagicMock()
monkeypatch.setattr(galaxy_cli, '_execute_install_collection', mock_execute_install)
galaxy_cli.run()
assert len(galaxy_cli.api_servers) == 1
assert galaxy_cli.api_servers[0].validate_certs is not global_ignore_certs
@pytest.mark.parametrize(
["ignore_certs_cli", "ignore_certs_cfg", "expected_validate_certs"],
[
(None, None, True),
(None, True, False),
(None, False, True),
(True, None, False),
(True, True, False),
(True, False, False),
]
)
def test_validate_certs_with_server_url(ignore_certs_cli, ignore_certs_cfg, expected_validate_certs, monkeypatch):
cli_args = [
'ansible-galaxy',
'collection',
'install',
'namespace.collection:1.0.0',
'-s',
'https://galaxy.ansible.com'
]
if ignore_certs_cli:
cli_args.append('--ignore-certs')
if ignore_certs_cfg is not None:
monkeypatch.setattr(C, 'GALAXY_IGNORE_CERTS', ignore_certs_cfg)
galaxy_cli = GalaxyCLI(args=cli_args)
mock_execute_install = MagicMock()
monkeypatch.setattr(galaxy_cli, '_execute_install_collection', mock_execute_install)
galaxy_cli.run()
assert len(galaxy_cli.api_servers) == 1
assert galaxy_cli.api_servers[0].validate_certs == expected_validate_certs
@pytest.mark.parametrize(
["ignore_certs_cli", "ignore_certs_cfg", "expected_server2_validate_certs", "expected_server3_validate_certs"],
[
(None, None, True, True),
(None, True, True, False),
(None, False, True, True),
(True, None, False, False),
(True, True, False, False),
(True, False, False, False),
]
)
def test_validate_certs_server_config(ignore_certs_cfg, ignore_certs_cli, expected_server2_validate_certs, expected_server3_validate_certs, monkeypatch):
server_names = ['server1', 'server2', 'server3']
cfg_lines = [
"[galaxy]",
"server_list=server1,server2,server3",
"[galaxy_server.server1]",
"url=https://galaxy.ansible.com/api/",
"validate_certs=False",
"[galaxy_server.server2]",
"url=https://galaxy.ansible.com/api/",
"validate_certs=True",
"[galaxy_server.server3]",
"url=https://galaxy.ansible.com/api/",
]
cli_args = [
'ansible-galaxy',
'collection',
'install',
'namespace.collection:1.0.0',
]
if ignore_certs_cli:
cli_args.append('--ignore-certs')
if ignore_certs_cfg is not None:
monkeypatch.setattr(C, 'GALAXY_IGNORE_CERTS', ignore_certs_cfg)
monkeypatch.setattr(C, 'GALAXY_SERVER_LIST', server_names)
with tempfile.NamedTemporaryFile(suffix='.cfg') as tmp_file:
tmp_file.write(to_bytes('\n'.join(cfg_lines), errors='surrogate_or_strict'))
tmp_file.flush()
monkeypatch.setattr(C.config, '_config_file', tmp_file.name)
C.config._parse_config_file()
galaxy_cli = GalaxyCLI(args=cli_args)
mock_execute_install = MagicMock()
monkeypatch.setattr(galaxy_cli, '_execute_install_collection', mock_execute_install)
galaxy_cli.run()
# (not) --ignore-certs > server's validate_certs > (not) GALAXY_IGNORE_CERTS > True
assert galaxy_cli.api_servers[0].validate_certs is False
assert galaxy_cli.api_servers[1].validate_certs is expected_server2_validate_certs
assert galaxy_cli.api_servers[2].validate_certs is expected_server3_validate_certs
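# The precedence exercised above can be summarised by this small sketch. It is
# an illustration of the expected behaviour, not ansible's actual resolver:
# an explicit --ignore-certs wins, then the server's own validate_certs
# setting, then the negated GALAXY_IGNORE_CERTS config, defaulting to True.
def _expected_validate_certs_sketch(ignore_certs_cli, server_validate_certs, ignore_certs_cfg):
    if ignore_certs_cli:
        return False
    if server_validate_certs is not None:
        return server_validate_certs
    if ignore_certs_cfg is not None:
        return not ignore_certs_cfg
    return True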
@pytest.mark.parametrize(
["timeout_cli", "timeout_cfg", "timeout_fallback", "expected_timeout"],
[
(None, None, None, 60),
(None, None, 10, 10),
(None, 20, 10, 20),
(30, 20, 10, 30),
]
)
def test_timeout_server_config(timeout_cli, timeout_cfg, timeout_fallback, expected_timeout, monkeypatch):
cli_args = [
'ansible-galaxy',
'collection',
'install',
'namespace.collection:1.0.0',
]
if timeout_cli is not None:
cli_args.extend(["--timeout", f"{timeout_cli}"])
cfg_lines = ["[galaxy]", "server_list=server1"]
if timeout_fallback is not None:
cfg_lines.append(f"server_timeout={timeout_fallback}")
# fix default in server config since C.GALAXY_SERVER_TIMEOUT was already evaluated
server_additional = galaxy.SERVER_ADDITIONAL.copy()
server_additional['timeout']['default'] = timeout_fallback
monkeypatch.setattr(galaxy, 'SERVER_ADDITIONAL', server_additional)
cfg_lines.extend(["[galaxy_server.server1]", "url=https://galaxy.ansible.com/api/"])
if timeout_cfg is not None:
cfg_lines.append(f"timeout={timeout_cfg}")
monkeypatch.setattr(C, 'GALAXY_SERVER_LIST', ['server1'])
with tempfile.NamedTemporaryFile(suffix='.cfg') as tmp_file:
tmp_file.write(to_bytes('\n'.join(cfg_lines), errors='surrogate_or_strict'))
tmp_file.flush()
monkeypatch.setattr(C.config, '_config_file', tmp_file.name)
C.config._parse_config_file()
galaxy_cli = GalaxyCLI(args=cli_args)
mock_execute_install = MagicMock()
monkeypatch.setattr(galaxy_cli, '_execute_install_collection', mock_execute_install)
galaxy_cli.run()
assert galaxy_cli.api_servers[0].timeout == expected_timeout
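# Timeout resolution illustrated by the cases above (sketch only, not the real
# resolver): the CLI --timeout wins, then the server-specific timeout= option,
# then the [galaxy] server_timeout fallback, and finally a default of 60.
def _expected_timeout_sketch(timeout_cli, timeout_cfg, timeout_fallback):
    for candidate in (timeout_cli, timeout_cfg, timeout_fallback):
        if candidate is not None:
            return candidate
    return 60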
def test_build_collection_no_galaxy_yaml():
fake_path = u'/fake/Γ…ΓΓŽΞ²ΕΓŠ/path'
expected = to_native("The collection galaxy.yml path '%s/galaxy.yml' does not exist." % fake_path)
with pytest.raises(AnsibleError, match=expected):
collection.build_collection(fake_path, u'output', False)
def test_build_existing_output_file(collection_input):
input_dir, output_dir = collection_input
existing_output_dir = os.path.join(output_dir, 'ansible_namespace-collection-0.1.0.tar.gz')
os.makedirs(existing_output_dir)
expected = "The output collection artifact '%s' already exists, but is a directory - aborting" \
% to_native(existing_output_dir)
with pytest.raises(AnsibleError, match=expected):
collection.build_collection(to_text(input_dir, errors='surrogate_or_strict'), to_text(output_dir, errors='surrogate_or_strict'), False)
def test_build_existing_output_without_force(collection_input):
input_dir, output_dir = collection_input
existing_output = os.path.join(output_dir, 'ansible_namespace-collection-0.1.0.tar.gz')
with open(existing_output, 'w+') as out_file:
out_file.write("random garbage")
out_file.flush()
expected = "The file '%s' already exists. You can use --force to re-create the collection artifact." \
% to_native(existing_output)
with pytest.raises(AnsibleError, match=expected):
collection.build_collection(to_text(input_dir, errors='surrogate_or_strict'), to_text(output_dir, errors='surrogate_or_strict'), False)
def test_build_existing_output_with_force(collection_input):
input_dir, output_dir = collection_input
existing_output = os.path.join(output_dir, 'ansible_namespace-collection-0.1.0.tar.gz')
with open(existing_output, 'w+') as out_file:
out_file.write("random garbage")
out_file.flush()
collection.build_collection(to_text(input_dir, errors='surrogate_or_strict'), to_text(output_dir, errors='surrogate_or_strict'), True)
# Verify the file was replaced with an actual tar file
assert tarfile.is_tarfile(existing_output)
def test_build_with_existing_files_and_manifest(collection_input):
input_dir, output_dir = collection_input
with open(os.path.join(input_dir, 'MANIFEST.json'), "wb") as fd:
fd.write(b'{"collection_info": {"version": "6.6.6"}, "version": 1}')
with open(os.path.join(input_dir, 'FILES.json'), "wb") as fd:
fd.write(b'{"files": [], "format": 1}')
with open(os.path.join(input_dir, "plugins", "MANIFEST.json"), "wb") as fd:
fd.write(b"test data that should be in build")
collection.build_collection(to_text(input_dir, errors='surrogate_or_strict'), to_text(output_dir, errors='surrogate_or_strict'), False)
output_artifact = os.path.join(output_dir, 'ansible_namespace-collection-0.1.0.tar.gz')
assert tarfile.is_tarfile(output_artifact)
with tarfile.open(output_artifact, mode='r') as actual:
members = actual.getmembers()
manifest_file = [m for m in members if m.path == "MANIFEST.json"][0]
manifest_file_obj = actual.extractfile(manifest_file.name)
manifest_file_text = manifest_file_obj.read()
manifest_file_obj.close()
assert manifest_file_text != b'{"collection_info": {"version": "6.6.6"}, "version": 1}'
json_file = [m for m in members if m.path == "MANIFEST.json"][0]
json_file_obj = actual.extractfile(json_file.name)
json_file_text = json_file_obj.read()
json_file_obj.close()
assert json_file_text != b'{"files": [], "format": 1}'
sub_manifest_file = [m for m in members if m.path == "plugins/MANIFEST.json"][0]
sub_manifest_file_obj = actual.extractfile(sub_manifest_file.name)
sub_manifest_file_text = sub_manifest_file_obj.read()
sub_manifest_file_obj.close()
assert sub_manifest_file_text == b"test data that should be in build"
@pytest.mark.parametrize('galaxy_yml_dir', [b'namespace: value: broken'], indirect=True)
def test_invalid_yaml_galaxy_file(galaxy_yml_dir):
galaxy_file = os.path.join(galaxy_yml_dir, b'galaxy.yml')
expected = to_native(b"Failed to parse the galaxy.yml at '%s' with the following error:" % galaxy_file)
with pytest.raises(AnsibleError, match=expected):
collection.concrete_artifact_manager._get_meta_from_src_dir(galaxy_yml_dir)
@pytest.mark.parametrize('galaxy_yml_dir', [b'namespace: test_namespace'], indirect=True)
def test_missing_required_galaxy_key(galaxy_yml_dir):
galaxy_file = os.path.join(galaxy_yml_dir, b'galaxy.yml')
expected = "The collection galaxy.yml at '%s' is missing the following mandatory keys: authors, name, " \
"readme, version" % to_native(galaxy_file)
with pytest.raises(AnsibleError, match=expected):
collection.concrete_artifact_manager._get_meta_from_src_dir(galaxy_yml_dir)
@pytest.mark.parametrize('galaxy_yml_dir', [b'namespace: test_namespace'], indirect=True)
def test_galaxy_yaml_no_mandatory_keys(galaxy_yml_dir):
expected = "The collection galaxy.yml at '%s/galaxy.yml' is missing the " \
"following mandatory keys: authors, name, readme, version" % to_native(galaxy_yml_dir)
with pytest.raises(ValueError, match=expected):
assert collection.concrete_artifact_manager._get_meta_from_src_dir(galaxy_yml_dir, require_build_metadata=False) == expected
@pytest.mark.parametrize('galaxy_yml_dir', [b'My life story is so very interesting'], indirect=True)
def test_galaxy_yaml_no_mandatory_keys_bad_yaml(galaxy_yml_dir):
expected = "The collection galaxy.yml at '%s/galaxy.yml' is incorrectly formatted." % to_native(galaxy_yml_dir)
with pytest.raises(AnsibleError, match=expected):
collection.concrete_artifact_manager._get_meta_from_src_dir(galaxy_yml_dir)
@pytest.mark.parametrize('galaxy_yml_dir', [b"""
namespace: namespace
name: collection
authors: Jordan
version: 0.1.0
readme: README.md
invalid: value"""], indirect=True)
def test_warning_extra_keys(galaxy_yml_dir, monkeypatch):
display_mock = MagicMock()
monkeypatch.setattr(Display, 'warning', display_mock)
collection.concrete_artifact_manager._get_meta_from_src_dir(galaxy_yml_dir)
assert display_mock.call_count == 1
assert display_mock.call_args[0][0] == "Found unknown keys in collection galaxy.yml at '%s/galaxy.yml': invalid"\
% to_text(galaxy_yml_dir)
@pytest.mark.parametrize('galaxy_yml_dir', [b"""
namespace: namespace
name: collection
authors: Jordan
version: 0.1.0
readme: README.md"""], indirect=True)
def test_defaults_galaxy_yml(galaxy_yml_dir):
actual = collection.concrete_artifact_manager._get_meta_from_src_dir(galaxy_yml_dir)
assert actual['namespace'] == 'namespace'
assert actual['name'] == 'collection'
assert actual['authors'] == ['Jordan']
assert actual['version'] == '0.1.0'
assert actual['readme'] == 'README.md'
assert actual['description'] is None
assert actual['repository'] is None
assert actual['documentation'] is None
assert actual['homepage'] is None
assert actual['issues'] is None
assert actual['tags'] == []
assert actual['dependencies'] == {}
assert actual['license'] == []
@pytest.mark.parametrize('galaxy_yml_dir', [(b"""
namespace: namespace
name: collection
authors: Jordan
version: 0.1.0
readme: README.md
license: MIT"""), (b"""
namespace: namespace
name: collection
authors: Jordan
version: 0.1.0
readme: README.md
license:
- MIT""")], indirect=True)
def test_galaxy_yml_list_value(galaxy_yml_dir):
actual = collection.concrete_artifact_manager._get_meta_from_src_dir(galaxy_yml_dir)
assert actual['license'] == ['MIT']
def test_build_ignore_files_and_folders(collection_input, monkeypatch):
input_dir = collection_input[0]
mock_display = MagicMock()
monkeypatch.setattr(Display, 'vvv', mock_display)
git_folder = os.path.join(input_dir, '.git')
retry_file = os.path.join(input_dir, 'ansible.retry')
tests_folder = os.path.join(input_dir, 'tests', 'output')
tests_output_file = os.path.join(tests_folder, 'result.txt')
os.makedirs(git_folder)
os.makedirs(tests_folder)
with open(retry_file, 'w+') as ignore_file:
ignore_file.write('random')
ignore_file.flush()
with open(tests_output_file, 'w+') as tests_file:
tests_file.write('random')
tests_file.flush()
actual = collection._build_files_manifest(to_bytes(input_dir), 'namespace', 'collection', [], Sentinel, None)
assert actual['format'] == 1
for manifest_entry in actual['files']:
assert manifest_entry['name'] not in ['.git', 'ansible.retry', 'galaxy.yml', 'tests/output', 'tests/output/result.txt']
expected_msgs = [
"Skipping '%s/galaxy.yml' for collection build" % to_text(input_dir),
"Skipping '%s' for collection build" % to_text(retry_file),
"Skipping '%s' for collection build" % to_text(git_folder),
"Skipping '%s' for collection build" % to_text(tests_folder),
]
assert mock_display.call_count == 4
assert mock_display.mock_calls[0][1][0] in expected_msgs
assert mock_display.mock_calls[1][1][0] in expected_msgs
assert mock_display.mock_calls[2][1][0] in expected_msgs
assert mock_display.mock_calls[3][1][0] in expected_msgs
def test_build_ignore_older_release_in_root(collection_input, monkeypatch):
input_dir = collection_input[0]
mock_display = MagicMock()
monkeypatch.setattr(Display, 'vvv', mock_display)
# This is expected to be ignored because it is in the root collection dir.
release_file = os.path.join(input_dir, 'namespace-collection-0.0.0.tar.gz')
# This is not expected to be ignored because it is not in the root collection dir.
fake_release_file = os.path.join(input_dir, 'plugins', 'namespace-collection-0.0.0.tar.gz')
for filename in [release_file, fake_release_file]:
with open(filename, 'w+') as file_obj:
file_obj.write('random')
file_obj.flush()
actual = collection._build_files_manifest(to_bytes(input_dir), 'namespace', 'collection', [], Sentinel, None)
assert actual['format'] == 1
plugin_release_found = False
for manifest_entry in actual['files']:
assert manifest_entry['name'] != 'namespace-collection-0.0.0.tar.gz'
if manifest_entry['name'] == 'plugins/namespace-collection-0.0.0.tar.gz':
plugin_release_found = True
assert plugin_release_found
expected_msgs = [
"Skipping '%s/galaxy.yml' for collection build" % to_text(input_dir),
"Skipping '%s' for collection build" % to_text(release_file)
]
assert mock_display.call_count == 2
assert mock_display.mock_calls[0][1][0] in expected_msgs
assert mock_display.mock_calls[1][1][0] in expected_msgs
def test_build_ignore_patterns(collection_input, monkeypatch):
input_dir = collection_input[0]
mock_display = MagicMock()
monkeypatch.setattr(Display, 'vvv', mock_display)
actual = collection._build_files_manifest(to_bytes(input_dir), 'namespace', 'collection',
['*.md', 'plugins/action', 'playbooks/*.j2'],
Sentinel, None)
assert actual['format'] == 1
expected_missing = [
'README.md',
'docs/My Collection.md',
'plugins/action',
'playbooks/templates/test.conf.j2',
'playbooks/templates/subfolder/test.conf.j2',
]
# Files or dirs that are close to a match but are not, make sure they are present
expected_present = [
'docs',
'roles/common/templates/test.conf.j2',
'roles/common/templates/subfolder/test.conf.j2',
]
actual_files = [e['name'] for e in actual['files']]
for m in expected_missing:
assert m not in actual_files
for p in expected_present:
assert p in actual_files
expected_msgs = [
"Skipping '%s/galaxy.yml' for collection build" % to_text(input_dir),
"Skipping '%s/README.md' for collection build" % to_text(input_dir),
"Skipping '%s/docs/My Collection.md' for collection build" % to_text(input_dir),
"Skipping '%s/plugins/action' for collection build" % to_text(input_dir),
"Skipping '%s/playbooks/templates/test.conf.j2' for collection build" % to_text(input_dir),
"Skipping '%s/playbooks/templates/subfolder/test.conf.j2' for collection build" % to_text(input_dir),
]
assert mock_display.call_count == len(expected_msgs)
assert mock_display.mock_calls[0][1][0] in expected_msgs
assert mock_display.mock_calls[1][1][0] in expected_msgs
assert mock_display.mock_calls[2][1][0] in expected_msgs
assert mock_display.mock_calls[3][1][0] in expected_msgs
assert mock_display.mock_calls[4][1][0] in expected_msgs
assert mock_display.mock_calls[5][1][0] in expected_msgs
def test_build_ignore_symlink_target_outside_collection(collection_input, monkeypatch):
input_dir, outside_dir = collection_input
mock_display = MagicMock()
monkeypatch.setattr(Display, 'warning', mock_display)
link_path = os.path.join(input_dir, 'plugins', 'connection')
os.symlink(outside_dir, link_path)
actual = collection._build_files_manifest(to_bytes(input_dir), 'namespace', 'collection', [], Sentinel, None)
for manifest_entry in actual['files']:
assert manifest_entry['name'] != 'plugins/connection'
assert mock_display.call_count == 1
assert mock_display.mock_calls[0][1][0] == "Skipping '%s' as it is a symbolic link to a directory outside " \
"the collection" % to_text(link_path)
def test_build_copy_symlink_target_inside_collection(collection_input):
input_dir = collection_input[0]
os.makedirs(os.path.join(input_dir, 'playbooks', 'roles'))
roles_link = os.path.join(input_dir, 'playbooks', 'roles', 'linked')
roles_target = os.path.join(input_dir, 'roles', 'linked')
roles_target_tasks = os.path.join(roles_target, 'tasks')
os.makedirs(roles_target_tasks)
with open(os.path.join(roles_target_tasks, 'main.yml'), 'w+') as tasks_main:
tasks_main.write("---\n- hosts: localhost\n tasks:\n - ping:")
tasks_main.flush()
os.symlink(roles_target, roles_link)
actual = collection._build_files_manifest(to_bytes(input_dir), 'namespace', 'collection', [], Sentinel, None)
linked_entries = [e for e in actual['files'] if e['name'].startswith('playbooks/roles/linked')]
assert len(linked_entries) == 1
assert linked_entries[0]['name'] == 'playbooks/roles/linked'
assert linked_entries[0]['ftype'] == 'dir'
def test_build_with_symlink_inside_collection(collection_input):
input_dir, output_dir = collection_input
os.makedirs(os.path.join(input_dir, 'playbooks', 'roles'))
roles_link = os.path.join(input_dir, 'playbooks', 'roles', 'linked')
file_link = os.path.join(input_dir, 'docs', 'README.md')
roles_target = os.path.join(input_dir, 'roles', 'linked')
roles_target_tasks = os.path.join(roles_target, 'tasks')
os.makedirs(roles_target_tasks)
with open(os.path.join(roles_target_tasks, 'main.yml'), 'w+') as tasks_main:
tasks_main.write("---\n- hosts: localhost\n tasks:\n - ping:")
tasks_main.flush()
os.symlink(roles_target, roles_link)
os.symlink(os.path.join(input_dir, 'README.md'), file_link)
collection.build_collection(to_text(input_dir, errors='surrogate_or_strict'), to_text(output_dir, errors='surrogate_or_strict'), False)
output_artifact = os.path.join(output_dir, 'ansible_namespace-collection-0.1.0.tar.gz')
assert tarfile.is_tarfile(output_artifact)
with tarfile.open(output_artifact, mode='r') as actual:
members = actual.getmembers()
linked_folder = [m for m in members if m.path == 'playbooks/roles/linked'][0]
assert linked_folder.type == tarfile.SYMTYPE
assert linked_folder.linkname == '../../roles/linked'
linked_file = [m for m in members if m.path == 'docs/README.md'][0]
assert linked_file.type == tarfile.SYMTYPE
assert linked_file.linkname == '../README.md'
linked_file_obj = actual.extractfile(linked_file.name)
actual_file = secure_hash_s(linked_file_obj.read())
linked_file_obj.close()
assert actual_file == '08f24200b9fbe18903e7a50930c9d0df0b8d7da3' # shasum test/units/cli/test_data/collection_skeleton/README.md
def test_publish_no_wait(galaxy_server, collection_artifact, monkeypatch):
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
artifact_path, mock_open = collection_artifact
fake_import_uri = 'https://galaxy.server.com/api/v2/import/1234'
mock_publish = MagicMock()
mock_publish.return_value = fake_import_uri
monkeypatch.setattr(galaxy_server, 'publish_collection', mock_publish)
collection.publish_collection(artifact_path, galaxy_server, False, 0)
assert mock_publish.call_count == 1
assert mock_publish.mock_calls[0][1][0] == artifact_path
assert mock_display.call_count == 1
assert mock_display.mock_calls[0][1][0] == \
"Collection has been pushed to the Galaxy server %s %s, not waiting until import has completed due to " \
"--no-wait being set. Import task results can be found at %s" % (galaxy_server.name, galaxy_server.api_server,
fake_import_uri)
def test_publish_with_wait(galaxy_server, collection_artifact, monkeypatch):
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
artifact_path, mock_open = collection_artifact
fake_import_uri = 'https://galaxy.server.com/api/v2/import/1234'
mock_publish = MagicMock()
mock_publish.return_value = fake_import_uri
monkeypatch.setattr(galaxy_server, 'publish_collection', mock_publish)
mock_wait = MagicMock()
monkeypatch.setattr(galaxy_server, 'wait_import_task', mock_wait)
collection.publish_collection(artifact_path, galaxy_server, True, 0)
assert mock_publish.call_count == 1
assert mock_publish.mock_calls[0][1][0] == artifact_path
assert mock_wait.call_count == 1
assert mock_wait.mock_calls[0][1][0] == '1234'
assert mock_display.mock_calls[0][1][0] == "Collection has been published to the Galaxy server test_server %s" \
% galaxy_server.api_server
def test_download_file(tmp_path_factory, monkeypatch):
temp_dir = to_bytes(tmp_path_factory.mktemp('test-Γ…ΓΓŽΞ²ΕΓŠ Collections'))
data = b"\x00\x01\x02\x03"
sha256_hash = sha256()
sha256_hash.update(data)
mock_open = MagicMock()
mock_open.return_value = BytesIO(data)
monkeypatch.setattr(collection.concrete_artifact_manager, 'open_url', mock_open)
expected = temp_dir
actual = collection._download_file('http://google.com/file', temp_dir, sha256_hash.hexdigest(), True)
assert actual.startswith(expected)
assert os.path.isfile(actual)
with open(actual, 'rb') as file_obj:
assert file_obj.read() == data
assert mock_open.call_count == 1
assert mock_open.mock_calls[0][1][0] == 'http://google.com/file'
def test_download_file_hash_mismatch(tmp_path_factory, monkeypatch):
temp_dir = to_bytes(tmp_path_factory.mktemp('test-Γ…ΓΓŽΞ²ΕΓŠ Collections'))
data = b"\x00\x01\x02\x03"
mock_open = MagicMock()
mock_open.return_value = BytesIO(data)
monkeypatch.setattr(collection.concrete_artifact_manager, 'open_url', mock_open)
expected = "Mismatch artifact hash with downloaded file"
with pytest.raises(AnsibleError, match=expected):
collection._download_file('http://google.com/file', temp_dir, 'bad', True)
def test_extract_tar_file_invalid_hash(tmp_tarfile):
temp_dir, tfile, filename, dummy = tmp_tarfile
expected = "Checksum mismatch for '%s' inside collection at '%s'" % (to_native(filename), to_native(tfile.name))
with pytest.raises(AnsibleError, match=expected):
collection._extract_tar_file(tfile, filename, temp_dir, temp_dir, "fakehash")
def test_extract_tar_file_missing_member(tmp_tarfile):
temp_dir, tfile, dummy, dummy = tmp_tarfile
expected = "Collection tar at '%s' does not contain the expected file 'missing'." % to_native(tfile.name)
with pytest.raises(AnsibleError, match=expected):
collection._extract_tar_file(tfile, 'missing', temp_dir, temp_dir)
def test_extract_tar_file_missing_parent_dir(tmp_tarfile):
temp_dir, tfile, filename, checksum = tmp_tarfile
output_dir = os.path.join(temp_dir, b'output')
output_file = os.path.join(output_dir, to_bytes(filename))
collection._extract_tar_file(tfile, filename, output_dir, temp_dir, checksum)
assert os.path.isfile(output_file)
def test_extract_tar_file_outside_dir(tmp_path_factory):
filename = u'Γ…ΓΓŽΞ²ΕΓŠ'
temp_dir = to_bytes(tmp_path_factory.mktemp('test-%s Collections' % to_native(filename)))
tar_file = os.path.join(temp_dir, to_bytes('%s.tar.gz' % filename))
data = os.urandom(8)
tar_filename = '../%s.sh' % filename
with tarfile.open(tar_file, 'w:gz') as tfile:
b_io = BytesIO(data)
tar_info = tarfile.TarInfo(tar_filename)
tar_info.size = len(data)
tar_info.mode = 0o0644
tfile.addfile(tarinfo=tar_info, fileobj=b_io)
expected = re.escape("Cannot extract tar entry '%s' as it will be placed outside the collection directory"
% to_native(tar_filename))
with tarfile.open(tar_file, 'r') as tfile:
with pytest.raises(AnsibleError, match=expected):
collection._extract_tar_file(tfile, tar_filename, os.path.join(temp_dir, to_bytes(filename)), temp_dir)
def test_require_one_of_collections_requirements_with_both():
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'verify', 'namespace.collection', '-r', 'requirements.yml'])
with pytest.raises(AnsibleError) as req_err:
cli._require_one_of_collections_requirements(('namespace.collection',), 'requirements.yml')
with pytest.raises(AnsibleError) as cli_err:
cli.run()
assert req_err.value.message == cli_err.value.message == 'The positional collection_name arg and --requirements-file are mutually exclusive.'
def test_require_one_of_collections_requirements_with_neither():
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'verify'])
with pytest.raises(AnsibleError) as req_err:
cli._require_one_of_collections_requirements((), '')
with pytest.raises(AnsibleError) as cli_err:
cli.run()
assert req_err.value.message == cli_err.value.message == 'You must specify a collection name or a requirements file.'
def test_require_one_of_collections_requirements_with_collections():
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'verify', 'namespace1.collection1', 'namespace2.collection1:1.0.0'])
collections = ('namespace1.collection1', 'namespace2.collection1:1.0.0',)
requirements = cli._require_one_of_collections_requirements(collections, '')['collections']
req_tuples = [('%s.%s' % (req.namespace, req.name), req.ver, req.src, req.type,) for req in requirements]
assert req_tuples == [('namespace1.collection1', '*', None, 'galaxy'), ('namespace2.collection1', '1.0.0', None, 'galaxy')]
@patch('ansible.cli.galaxy.GalaxyCLI._parse_requirements_file')
def test_require_one_of_collections_requirements_with_requirements(mock_parse_requirements_file, galaxy_server):
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'verify', '-r', 'requirements.yml', 'namespace.collection'])
mock_parse_requirements_file.return_value = {'collections': [('namespace.collection', '1.0.5', galaxy_server)]}
requirements = cli._require_one_of_collections_requirements((), 'requirements.yml')['collections']
assert mock_parse_requirements_file.call_count == 1
assert requirements == [('namespace.collection', '1.0.5', galaxy_server)]
@patch('ansible.cli.galaxy.GalaxyCLI.execute_verify', spec=True)
def test_call_GalaxyCLI(execute_verify):
galaxy_args = ['ansible-galaxy', 'collection', 'verify', 'namespace.collection']
GalaxyCLI(args=galaxy_args).run()
assert execute_verify.call_count == 1
@patch('ansible.cli.galaxy.GalaxyCLI.execute_verify')
def test_call_GalaxyCLI_with_implicit_role(execute_verify):
galaxy_args = ['ansible-galaxy', 'verify', 'namespace.implicit_role']
with pytest.raises(SystemExit):
GalaxyCLI(args=galaxy_args).run()
assert not execute_verify.called
@patch('ansible.cli.galaxy.GalaxyCLI.execute_verify')
def test_call_GalaxyCLI_with_role(execute_verify):
galaxy_args = ['ansible-galaxy', 'role', 'verify', 'namespace.role']
with pytest.raises(SystemExit):
GalaxyCLI(args=galaxy_args).run()
assert not execute_verify.called
@patch('ansible.cli.galaxy.verify_collections', spec=True)
def test_execute_verify_with_defaults(mock_verify_collections):
galaxy_args = ['ansible-galaxy', 'collection', 'verify', 'namespace.collection:1.0.4']
GalaxyCLI(args=galaxy_args).run()
assert mock_verify_collections.call_count == 1
print("Call args {0}".format(mock_verify_collections.call_args[0]))
requirements, search_paths, galaxy_apis, ignore_errors = mock_verify_collections.call_args[0]
assert [('%s.%s' % (r.namespace, r.name), r.ver, r.src, r.type) for r in requirements] == [('namespace.collection', '1.0.4', None, 'galaxy')]
for install_path in search_paths:
assert install_path.endswith('ansible_collections')
assert galaxy_apis[0].api_server == 'https://galaxy.ansible.com'
assert ignore_errors is False
@patch('ansible.cli.galaxy.verify_collections', spec=True)
def test_execute_verify(mock_verify_collections):
GalaxyCLI(args=[
'ansible-galaxy', 'collection', 'verify', 'namespace.collection:1.0.4', '--ignore-certs',
'-p', '~/.ansible', '--ignore-errors', '--server', 'http://galaxy-dev.com',
]).run()
assert mock_verify_collections.call_count == 1
requirements, search_paths, galaxy_apis, ignore_errors = mock_verify_collections.call_args[0]
assert [('%s.%s' % (r.namespace, r.name), r.ver, r.src, r.type) for r in requirements] == [('namespace.collection', '1.0.4', None, 'galaxy')]
for install_path in search_paths:
assert install_path.endswith('ansible_collections')
assert galaxy_apis[0].api_server == 'http://galaxy-dev.com'
assert ignore_errors is True
def test_verify_file_hash_deleted_file(manifest_info):
data = to_bytes(json.dumps(manifest_info))
digest = sha256(data).hexdigest()
namespace = manifest_info['collection_info']['namespace']
name = manifest_info['collection_info']['name']
version = manifest_info['collection_info']['version']
server = 'http://galaxy.ansible.com'
error_queue = []
with patch.object(builtins, 'open', mock_open(read_data=data)) as m:
with patch.object(collection.os.path, 'isfile', MagicMock(return_value=False)) as mock_isfile:
collection._verify_file_hash(b'path/', 'file', digest, error_queue)
mock_isfile.assert_called_once()
assert len(error_queue) == 1
assert error_queue[0].installed is None
assert error_queue[0].expected == digest
def test_verify_file_hash_matching_hash(manifest_info):
data = to_bytes(json.dumps(manifest_info))
digest = sha256(data).hexdigest()
namespace = manifest_info['collection_info']['namespace']
name = manifest_info['collection_info']['name']
version = manifest_info['collection_info']['version']
server = 'http://galaxy.ansible.com'
error_queue = []
with patch.object(builtins, 'open', mock_open(read_data=data)) as m:
with patch.object(collection.os.path, 'isfile', MagicMock(return_value=True)) as mock_isfile:
collection._verify_file_hash(b'path/', 'file', digest, error_queue)
mock_isfile.assert_called_once()
assert error_queue == []
def test_verify_file_hash_mismatching_hash(manifest_info):
data = to_bytes(json.dumps(manifest_info))
digest = sha256(data).hexdigest()
different_digest = 'not_{0}'.format(digest)
namespace = manifest_info['collection_info']['namespace']
name = manifest_info['collection_info']['name']
version = manifest_info['collection_info']['version']
server = 'http://galaxy.ansible.com'
error_queue = []
with patch.object(builtins, 'open', mock_open(read_data=data)) as m:
with patch.object(collection.os.path, 'isfile', MagicMock(return_value=True)) as mock_isfile:
collection._verify_file_hash(b'path/', 'file', different_digest, error_queue)
mock_isfile.assert_called_once()
assert len(error_queue) == 1
assert error_queue[0].installed == digest
assert error_queue[0].expected == different_digest
def test_consume_file(manifest):
manifest_file, checksum = manifest
assert checksum == collection._consume_file(manifest_file)
def test_consume_file_and_write_contents(manifest, manifest_info):
manifest_file, checksum = manifest
write_to = BytesIO()
actual_hash = collection._consume_file(manifest_file, write_to)
write_to.seek(0)
assert to_bytes(json.dumps(manifest_info)) == write_to.read()
assert actual_hash == checksum
def test_get_tar_file_member(tmp_tarfile):
temp_dir, tfile, filename, checksum = tmp_tarfile
with collection._get_tar_file_member(tfile, filename) as (tar_file_member, tar_file_obj):
assert isinstance(tar_file_member, tarfile.TarInfo)
assert isinstance(tar_file_obj, tarfile.ExFileObject)
def test_get_nonexistent_tar_file_member(tmp_tarfile):
temp_dir, tfile, filename, checksum = tmp_tarfile
file_does_not_exist = filename + 'nonexistent'
with pytest.raises(AnsibleError) as err:
collection._get_tar_file_member(tfile, file_does_not_exist)
assert to_text(err.value.message) == "Collection tar at '%s' does not contain the expected file '%s'." % (to_text(tfile.name), file_does_not_exist)
def test_get_tar_file_hash(tmp_tarfile):
temp_dir, tfile, filename, checksum = tmp_tarfile
assert checksum == collection._get_tar_file_hash(tfile.name, filename)
def test_get_json_from_tar_file(tmp_tarfile):
temp_dir, tfile, filename, checksum = tmp_tarfile
assert 'MANIFEST.json' in tfile.getnames()
data = collection._get_json_from_tar_file(tfile.name, 'MANIFEST.json')
assert isinstance(data, dict)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,264 |
If python discovery fails due to a connection error, it defaults to /usr/bin/python and then continues to try to execute the intended command regardless
|
### Summary
When the auto discovery method is used, python discovery happens first, and then the task command is executed.
However, if there is a connection problem, depending on the timing of when the connection gets restored, the python discovery can fail with an AnsibleConnectionFailure, swallow it, and return /usr/bin/python. The task command is then executed with the wrong python interpreter.
If the python discovery fails (at least due to connection failures - you could decide there are other cases where this should apply), the command should fail immediately. Or at least provide an option to enable this behavior.
Here is the informal code change that we made to enable this behavior in /usr/local/lib/python3.8/site-packages/ansible/executor/interpreter_discovery.py:
```
@@ -14,6 +14,7 @@
from ansible.module_utils.distro import LinuxDistribution
from ansible.utils.display import Display
from ansible.utils.plugin_docs import get_versioned_doclink
+from ansible.errors import AnsibleConnectionFailure
from distutils.version import LooseVersion
from traceback import format_exc
@@ -146,6 +147,8 @@
return platform_interpreter
except NotImplementedError as ex:
display.vvv(msg=u'Python interpreter discovery fallback ({0})'.format(to_text(ex)), host=host)
+ except AnsibleConnectionFailure as ex:
+ raise ex
except Exception as ex:
if not is_silent:
display.warning(msg=u'Unhandled error in Python interpreter discovery for host {0}: {1}'.format(host, to_text(ex)))
```
Simply allow the exception to fall through and fail the task with a connection error, which either fails the task appropriately or lets the connection plugins handle it.
### Issue Type
Bug Report
### Component Name
interpreter_python.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.5]
ansible python module location = /usr/local/lib/python3.8/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.8.6 (default, Jan 22 2021, 11:41:28) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 3.0.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
RHEL8
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
ansible <host> -m ping -vvv
```
on a host that is down
### Expected Results
if the discovery fails due to a connection problem, the task fails
### Actual Results
```console
if the discovery fails due to a connection problem, it tries to execute the rest of the task anyway
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78264
|
https://github.com/ansible/ansible/pull/81745
|
62c10199d11428f014ed999a533487e69f4832b3
|
86fd7026a88988c224ae175a281e7e6e2f3c5bc3
| 2022-07-14T18:30:50Z |
python
| 2023-09-21T18:13:40Z |
changelogs/fragments/interpreter_discovery.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,264 |
If python discovery fails due to a connection error, it defaults to /usr/bin/python and then continues to try to execute the intended command regardless
|
### Summary
When the auto discovery method is used, python discovery happens first, and then the task command is executed.
However, if there is a connection problem, depending on the timing of when the connection gets restored, the python discovery can fail with an AnsibleConnectionFailure, swallow it, and return /usr/bin/python. The task command is then executed with the wrong python interpreter.
If the python discovery fails (at least due to connection failures - you could decide there are other cases where this should apply), the command should fail immediately. Or at least provide an option to enable this behavior.
Here is the informal code change that we made to enable this behavior in /usr/local/lib/python3.8/site-packages/ansible/executor/interpreter_discovery.py:
```
@@ -14,6 +14,7 @@
from ansible.module_utils.distro import LinuxDistribution
from ansible.utils.display import Display
from ansible.utils.plugin_docs import get_versioned_doclink
+from ansible.errors import AnsibleConnectionFailure
from distutils.version import LooseVersion
from traceback import format_exc
@@ -146,6 +147,8 @@
return platform_interpreter
except NotImplementedError as ex:
display.vvv(msg=u'Python interpreter discovery fallback ({0})'.format(to_text(ex)), host=host)
+ except AnsibleConnectionFailure as ex:
+ raise ex
except Exception as ex:
if not is_silent:
display.warning(msg=u'Unhandled error in Python interpreter discovery for host {0}: {1}'.format(host, to_text(ex)))
```
Simply allow the exception to fall through and fail the task with a connection error, which either fails the task appropriately or lets the connection plugins handle it.
### Issue Type
Bug Report
### Component Name
interpreter_python.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.5]
ansible python module location = /usr/local/lib/python3.8/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.8.6 (default, Jan 22 2021, 11:41:28) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 3.0.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
RHEL8
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
ansible <host> -m ping -vvv
```
on a host that is down
### Expected Results
if the discovery fails due to a connection problem, the task fails
### Actual Results
```console
if the discovery fails due to a connection problem, it tries to execute the rest of the task anyway
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78264
|
https://github.com/ansible/ansible/pull/81745
|
62c10199d11428f014ed999a533487e69f4832b3
|
86fd7026a88988c224ae175a281e7e6e2f3c5bc3
| 2022-07-14T18:30:50Z |
python
| 2023-09-21T18:13:40Z |
lib/ansible/executor/interpreter_discovery.py
|
# Copyright: (c) 2018 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import bisect
import json
import pkgutil
import re
from ansible import constants as C
from ansible.module_utils.common.text.converters import to_native, to_text
from ansible.module_utils.distro import LinuxDistribution
from ansible.utils.display import Display
from ansible.utils.plugin_docs import get_versioned_doclink
from ansible.module_utils.compat.version import LooseVersion
from ansible.module_utils.facts.system.distribution import Distribution
from traceback import format_exc
OS_FAMILY_LOWER = {k.lower(): v.lower() for k, v in Distribution.OS_FAMILY.items()}
display = Display()
foundre = re.compile(r'(?s)PLATFORM[\r\n]+(.*)FOUND(.*)ENDFOUND')
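# group(1) captures the platform line(s) emitted between PLATFORM and FOUND by
# the bootstrap's uname call; group(2) captures the newline-separated
# interpreter paths emitted by the per-interpreter 'command -v' calls between
# FOUND and ENDFOUND.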
class InterpreterDiscoveryRequiredError(Exception):
def __init__(self, message, interpreter_name, discovery_mode):
super(InterpreterDiscoveryRequiredError, self).__init__(message)
self.interpreter_name = interpreter_name
self.discovery_mode = discovery_mode
def __str__(self):
return self.message
def __repr__(self):
# TODO: proper repr impl
return self.message
def discover_interpreter(action, interpreter_name, discovery_mode, task_vars):
# interpreter discovery is a 2-step process with the target. First, we use a simple shell-agnostic bootstrap to
# get the system type from uname, and find any random Python that can get us the info we need. For supported
# target OS types, we'll dispatch a Python script that calls platform.dist() (for older platforms, where available)
# and brings back /etc/os-release (if present). The proper Python path is looked up in a table of known
# distros/versions with included Pythons; if nothing is found, depending on the discovery mode, either the
# default fallback of /usr/bin/python is used (if we know it's there), or discovery fails.
# FUTURE: add logical equivalence for "python3" in the case of py3-only modules?
if interpreter_name != 'python':
raise ValueError('Interpreter discovery not supported for {0}'.format(interpreter_name))
host = task_vars.get('inventory_hostname', 'unknown')
res = None
platform_type = 'unknown'
found_interpreters = [u'/usr/bin/python'] # fallback value
is_auto_legacy = discovery_mode.startswith('auto_legacy')
is_silent = discovery_mode.endswith('_silent')
try:
platform_python_map = C.config.get_config_value('_INTERPRETER_PYTHON_DISTRO_MAP', variables=task_vars)
bootstrap_python_list = C.config.get_config_value('INTERPRETER_PYTHON_FALLBACK', variables=task_vars)
display.vvv(msg=u"Attempting {0} interpreter discovery".format(interpreter_name), host=host)
# not all command -v impls accept a list of commands, so we have to call it once per python
command_list = ["command -v '%s'" % py for py in bootstrap_python_list]
shell_bootstrap = "echo PLATFORM; uname; echo FOUND; {0}; echo ENDFOUND".format('; '.join(command_list))
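# With a fallback list such as ['python3', 'python'] this expands to roughly
# (illustration only):
#   echo PLATFORM; uname; echo FOUND; command -v 'python3'; command -v 'python'; echo ENDFOUND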
# FUTURE: in most cases we probably don't want to use become, but maybe sometimes we do?
res = action._low_level_execute_command(shell_bootstrap, sudoable=False)
raw_stdout = res.get('stdout', u'')
match = foundre.match(raw_stdout)
if not match:
display.debug(u'raw interpreter discovery output: {0}'.format(raw_stdout), host=host)
raise ValueError('unexpected output from Python interpreter discovery')
platform_type = match.groups()[0].lower().strip()
found_interpreters = [interp.strip() for interp in match.groups()[1].splitlines() if interp.startswith('/')]
display.debug(u"found interpreters: {0}".format(found_interpreters), host=host)
if not found_interpreters:
if not is_silent:
action._discovery_warnings.append(u'No python interpreters found for '
u'host {0} (tried {1})'.format(host, bootstrap_python_list))
# this is lame, but returning None or throwing an exception is uglier
return u'/usr/bin/python'
if platform_type != 'linux':
raise NotImplementedError('unsupported platform for extended discovery: {0}'.format(to_native(platform_type)))
platform_script = pkgutil.get_data('ansible.executor.discovery', 'python_target.py')
# FUTURE: respect pipelining setting instead of just if the connection supports it?
if action._connection.has_pipelining:
res = action._low_level_execute_command(found_interpreters[0], sudoable=False, in_data=platform_script)
else:
# FUTURE: implement on-disk case (via script action or ?)
raise NotImplementedError('pipelining support required for extended interpreter discovery')
platform_info = json.loads(res.get('stdout'))
distro, version = _get_linux_distro(platform_info)
if not distro or not version:
raise NotImplementedError('unable to get Linux distribution/version info')
family = OS_FAMILY_LOWER.get(distro.lower().strip())
version_map = platform_python_map.get(distro.lower().strip()) or platform_python_map.get(family)
if not version_map:
raise NotImplementedError('unsupported Linux distribution: {0}'.format(distro))
platform_interpreter = to_text(_version_fuzzy_match(version, version_map), errors='surrogate_or_strict')
# provide a transition period for hosts that were using /usr/bin/python previously (but shouldn't have been)
if is_auto_legacy:
if platform_interpreter != u'/usr/bin/python' and u'/usr/bin/python' in found_interpreters:
if not is_silent:
action._discovery_warnings.append(
u"Distribution {0} {1} on host {2} should use {3}, but is using "
u"/usr/bin/python for backward compatibility with prior Ansible releases. "
u"See {4} for more information"
.format(distro, version, host, platform_interpreter,
get_versioned_doclink('reference_appendices/interpreter_discovery.html')))
return u'/usr/bin/python'
if platform_interpreter not in found_interpreters:
if platform_interpreter not in bootstrap_python_list:
# sanity check to make sure we looked for it
if not is_silent:
action._discovery_warnings \
.append(u"Platform interpreter {0} on host {1} is missing from bootstrap list"
.format(platform_interpreter, host))
if not is_silent:
action._discovery_warnings \
.append(u"Distribution {0} {1} on host {2} should use {3}, but is using {4}, since the "
u"discovered platform python interpreter was not present. See {5} "
u"for more information."
.format(distro, version, host, platform_interpreter, found_interpreters[0],
get_versioned_doclink('reference_appendices/interpreter_discovery.html')))
return found_interpreters[0]
return platform_interpreter
except NotImplementedError as ex:
display.vvv(msg=u'Python interpreter discovery fallback ({0})'.format(to_text(ex)), host=host)
except Exception as ex:
if not is_silent:
display.warning(msg=u'Unhandled error in Python interpreter discovery for host {0}: {1}'.format(host, to_text(ex)))
display.debug(msg=u'Interpreter discovery traceback:\n{0}'.format(to_text(format_exc())), host=host)
if res and res.get('stderr'):
display.vvv(msg=u'Interpreter discovery remote stderr:\n{0}'.format(to_text(res.get('stderr'))), host=host)
if not is_silent:
action._discovery_warnings \
.append(u"Platform {0} on host {1} is using the discovered Python interpreter at {2}, but future installation of "
u"another Python interpreter could change the meaning of that path. See {3} "
u"for more information."
.format(platform_type, host, found_interpreters[0],
get_versioned_doclink('reference_appendices/interpreter_discovery.html')))
return found_interpreters[0]
def _get_linux_distro(platform_info):
dist_result = platform_info.get('platform_dist_result', [])
if len(dist_result) == 3 and any(dist_result):
return dist_result[0], dist_result[1]
osrelease_content = platform_info.get('osrelease_content')
if not osrelease_content:
return u'', u''
osr = LinuxDistribution._parse_os_release_content(osrelease_content)
return osr.get('id', u''), osr.get('version_id', u'')
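# For example, the Ubuntu payload used in the unit tests below carries
# platform_dist_result ["Ubuntu", "16.04", "xenial"], so this returns
# ('Ubuntu', '16.04') from the dist_result branch before os-release is consulted.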
def _version_fuzzy_match(version, version_map):
# try exact match first
res = version_map.get(version)
if res:
return res
sorted_looseversions = sorted([LooseVersion(v) for v in version_map.keys()])
find_looseversion = LooseVersion(version)
# slot match; return nearest previous version we're newer than
kpos = bisect.bisect(sorted_looseversions, find_looseversion)
if kpos == 0:
# older than everything in the list, return the oldest version
# TODO: warning-worthy?
return version_map.get(sorted_looseversions[0].vstring)
# TODO: is "past the end of the list" warning-worthy too (at least if it's not a major version match)?
# return the next-oldest entry that we're newer than...
return version_map.get(sorted_looseversions[kpos - 1].vstring)
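# Hypothetical illustration (version keys and interpreter paths invented):
#   version_map = {'16': '/usr/bin/python3', '18.04': '/usr/bin/python3.6'}
#   _version_fuzzy_match('19.10', version_map)  ->  '/usr/bin/python3.6'
#   _version_fuzzy_match('14.04', version_map)  ->  '/usr/bin/python3'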
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,264 |
If python discovery fails due to a connection error, it defaults to /usr/bin/python and then continues to try to execute the intended command regardless
|
### Summary
When the auto discovery method is used, python discovery happens first, and then the task command is executed.
However, if there is a connection problem, depending on the timing of when the connection gets restored, the python discovery can fail with an AnsibleConnectionFailure, swallow it, and return /usr/bin/python. The task command is then executed with the wrong python interpreter.
If the python discovery fails (at least due to connection failures - you could decide there are other cases where this should apply), the command should fail immediately. Or at least provide an option to enable this behavior.
Here is the informal code change that we made to enable this behavior in /usr/local/lib/python3.8/site-packages/ansible/executor/interpreter_discovery.py:
```
@@ -14,6 +14,7 @@
from ansible.module_utils.distro import LinuxDistribution
from ansible.utils.display import Display
from ansible.utils.plugin_docs import get_versioned_doclink
+from ansible.errors import AnsibleConnectionFailure
from distutils.version import LooseVersion
from traceback import format_exc
@@ -146,6 +147,8 @@
return platform_interpreter
except NotImplementedError as ex:
display.vvv(msg=u'Python interpreter discovery fallback ({0})'.format(to_text(ex)), host=host)
+ except AnsibleConnectionFailure as ex:
+ raise ex
except Exception as ex:
if not is_silent:
display.warning(msg=u'Unhandled error in Python interpreter discovery for host {0}: {1}'.format(host, to_text(ex)))
```
Simply allow the exception to fall through and fail the task with a connection error, which either fails the task appropriately or lets the connection plugins handle it.
### Issue Type
Bug Report
### Component Name
interpreter_python.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.5]
ansible python module location = /usr/local/lib/python3.8/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.8.6 (default, Jan 22 2021, 11:41:28) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 3.0.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
RHEL8
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
ansible <host> -m ping -vvv
```
on a host that is down
### Expected Results
if the discovery fails due to a connection problem, the task fails
### Actual Results
```console
if the discovery fails due to a connection problem, it tries to execute the rest of the task anyway
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78264
|
https://github.com/ansible/ansible/pull/81745
|
62c10199d11428f014ed999a533487e69f4832b3
|
86fd7026a88988c224ae175a281e7e6e2f3c5bc3
| 2022-07-14T18:30:50Z |
python
| 2023-09-21T18:13:40Z |
test/units/executor/test_interpreter_discovery.py
|
# -*- coding: utf-8 -*-
# (c) 2019, Jordan Borean <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from unittest.mock import MagicMock
from ansible.executor.interpreter_discovery import discover_interpreter
from ansible.module_utils.common.text.converters import to_text
mock_ubuntu_platform_res = to_text(
r'{"osrelease_content": "NAME=\"Ubuntu\"\nVERSION=\"16.04.5 LTS (Xenial Xerus)\"\nID=ubuntu\nID_LIKE=debian\n'
r'PRETTY_NAME=\"Ubuntu 16.04.5 LTS\"\nVERSION_ID=\"16.04\"\nHOME_URL=\"http://www.ubuntu.com/\"\n'
r'SUPPORT_URL=\"http://help.ubuntu.com/\"\nBUG_REPORT_URL=\"http://bugs.launchpad.net/ubuntu/\"\n'
r'VERSION_CODENAME=xenial\nUBUNTU_CODENAME=xenial\n", "platform_dist_result": ["Ubuntu", "16.04", "xenial"]}'
)
def test_discovery_interpreter_linux_auto_legacy():
res1 = u'PLATFORM\nLinux\nFOUND\n/usr/bin/python\n/usr/bin/python3\nENDFOUND'
mock_action = MagicMock()
mock_action._low_level_execute_command.side_effect = [{'stdout': res1}, {'stdout': mock_ubuntu_platform_res}]
actual = discover_interpreter(mock_action, 'python', 'auto_legacy', {'inventory_hostname': u'host-fΓ³ΓΆbΓ€r'})
assert actual == u'/usr/bin/python'
assert len(mock_action.method_calls) == 3
assert mock_action.method_calls[2][0] == '_discovery_warnings.append'
assert u'Distribution Ubuntu 16.04 on host host-fΓ³ΓΆbΓ€r should use /usr/bin/python3, but is using /usr/bin/python' \
u' for backward compatibility' in mock_action.method_calls[2][1][0]
def test_discovery_interpreter_linux_auto_legacy_silent():
res1 = u'PLATFORM\nLinux\nFOUND\n/usr/bin/python\n/usr/bin/python3\nENDFOUND'
mock_action = MagicMock()
mock_action._low_level_execute_command.side_effect = [{'stdout': res1}, {'stdout': mock_ubuntu_platform_res}]
actual = discover_interpreter(mock_action, 'python', 'auto_legacy_silent', {'inventory_hostname': u'host-fΓ³ΓΆbΓ€r'})
assert actual == u'/usr/bin/python'
assert len(mock_action.method_calls) == 2
def test_discovery_interpreter_linux_auto():
res1 = u'PLATFORM\nLinux\nFOUND\n/usr/bin/python\n/usr/bin/python3\nENDFOUND'
mock_action = MagicMock()
mock_action._low_level_execute_command.side_effect = [{'stdout': res1}, {'stdout': mock_ubuntu_platform_res}]
actual = discover_interpreter(mock_action, 'python', 'auto', {'inventory_hostname': u'host-fΓ³ΓΆbΓ€r'})
assert actual == u'/usr/bin/python3'
assert len(mock_action.method_calls) == 2
def test_discovery_interpreter_non_linux():
mock_action = MagicMock()
mock_action._low_level_execute_command.return_value = \
{'stdout': u'PLATFORM\nDarwin\nFOUND\n/usr/bin/python\nENDFOUND'}
actual = discover_interpreter(mock_action, 'python', 'auto_legacy', {'inventory_hostname': u'host-fΓ³ΓΆbΓ€r'})
assert actual == u'/usr/bin/python'
assert len(mock_action.method_calls) == 2
assert mock_action.method_calls[1][0] == '_discovery_warnings.append'
assert u'Platform darwin on host host-fΓ³ΓΆbΓ€r is using the discovered Python interpreter at /usr/bin/python, ' \
u'but future installation of another Python interpreter could change the meaning of that path' \
in mock_action.method_calls[1][1][0]
def test_no_interpreters_found():
mock_action = MagicMock()
mock_action._low_level_execute_command.return_value = {'stdout': u'PLATFORM\nWindows\nFOUND\nENDFOUND'}
actual = discover_interpreter(mock_action, 'python', 'auto_legacy', {'inventory_hostname': u'host-fΓ³ΓΆbΓ€r'})
assert actual == u'/usr/bin/python'
assert len(mock_action.method_calls) == 2
assert mock_action.method_calls[1][0] == '_discovery_warnings.append'
assert u'No python interpreters found for host host-fΓ³ΓΆbΓ€r (tried' \
in mock_action.method_calls[1][1][0]
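# A sketch of the behaviour the linked fix is meant to introduce, assuming
# discover_interpreter re-raises AnsibleConnectionFailure rather than falling
# back to /usr/bin/python (names and message invented for illustration):
#   import pytest
#   from ansible.errors import AnsibleConnectionFailure
#
#   def test_discovery_connection_failure_propagates():
#       mock_action = MagicMock()
#       mock_action._low_level_execute_command.side_effect = AnsibleConnectionFailure('unreachable')
#       with pytest.raises(AnsibleConnectionFailure):
#           discover_interpreter(mock_action, 'python', 'auto', {'inventory_hostname': u'host'})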
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,722 |
include_tasks within handler called within include_role doesn't work
|
### Summary
If there's a role with an `include_tasks` handler, and it's dynamically included by `include_role`, Ansible cannot find the included file. However, it can find the included file perfectly well when the role with the handler is included in a play on its own.
### Issue Type
Bug Report
### Component Name
handlers
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.4]
config file = None
configured module search path = ['/Users/tensin/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible
ansible collection location = /Users/tensin/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/homebrew/bin/ansible
python version = 3.11.5 (main, Aug 24 2023, 15:09:45) [Clang 14.0.3 (clang-1403.0.22.14.1)] (/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
EDITOR(env: EDITOR) = vim
```
### OS / Environment
MacOS Ventura
### Steps to Reproduce
```
# tree
.
|-- playbook.yml
`-- roles
|-- bar
| |-- handlers
| | |-- item.yml
| | `-- main.yml
| `-- tasks
| `-- main.yml
`-- foo
`-- tasks
`-- main.yml
```
playbook.yml:
```
- name: Test playbook
hosts: localhost
roles:
- bar
- foo
```
foo/tasks/main.yml:
```
- include_role:
name: bar
```
bar/tasks/main.yml:
```
- command: echo 1
changed_when: true
notify: bar_handler
- meta: flush_handlers
```
bar/handlers/main.yml:
```
- listen: bar_handler
include_tasks: item.yml
loop: [1, 2, 3]
```
bar/handlers/item.yml:
```
- command: echo '{{ item }}'
changed_when: false
```
Run using: `ansible-playbook playbook.yml`
### Expected Results
The bar role is executed twice; its handlers are executed twice.
### Actual Results
```console
ansible-playbook [core 2.15.4]
config file = None
configured module search path = ['/Users/tensin/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible
ansible collection location = /Users/tensin/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/homebrew/bin/ansible-playbook
python version = 3.11.5 (main, Aug 24 2023, 15:09:45) [Clang 14.0.3 (clang-1403.0.22.14.1)] (/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python)
jinja version = 3.1.2
libyaml = True
No config file found; using defaults
setting up inventory plugins
Loading collection ansible.builtin from
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
yaml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
ini declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
toml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Loading callback plugin default of type stdout, v2.0 from /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: playbook.yml *********************************************************
Positional arguments: playbook.yml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('/etc/ansible/hosts',)
forks: 5
1 plays in playbook.yml
PLAY [Test playbook] ***********************************************************
TASK [Gathering Facts] *********************************************************
task path: /Volumes/workplace/personal/test/playbook.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572 `" && echo ansible-tmp-1695112052.4727402-80815-156931745653572="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/setup.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmpktjgn0yt TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572/AnsiballZ_setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
TASK [bar : command] ***********************************************************
task path: /Volumes/workplace/personal/test/roles/bar/tasks/main.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206 `" && echo ansible-tmp-1695112054.1128068-80849-39195856654206="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmpm2eyr1c4 TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206/ > /dev/null 2>&1 && sleep 0'
Notification for handler bar_handler has been saved.
changed: [localhost] => {
"changed": true,
"cmd": [
"echo",
"1"
],
"delta": "0:00:00.005418",
"end": "2023-09-19 10:27:34.337640",
"invocation": {
"module_args": {
"_raw_params": "echo 1",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:34.332222",
"stderr": "",
"stderr_lines": [],
"stdout": "1",
"stdout_lines": [
"1"
]
}
TASK [bar : meta] **************************************************************
task path: /Volumes/workplace/personal/test/roles/bar/tasks/main.yml:5
NOTIFIED HANDLER bar : include_tasks for localhost
META: triggered running handlers for localhost
RUNNING HANDLER [bar : include_tasks] ******************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/main.yml:1
included: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml for localhost => (item=1)
included: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml for localhost => (item=2)
included: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml for localhost => (item=3)
RUNNING HANDLER [bar : command] ************************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147 `" && echo ansible-tmp-1695112054.42143-80873-18381882637147="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmp2xu8xoya TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"echo",
"1"
],
"delta": "0:00:00.004997",
"end": "2023-09-19 10:27:34.580493",
"invocation": {
"module_args": {
"_raw_params": "echo '1'",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:34.575496",
"stderr": "",
"stderr_lines": [],
"stdout": "1",
"stdout_lines": [
"1"
]
}
RUNNING HANDLER [bar : command] ************************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946 `" && echo ansible-tmp-1695112054.6377301-80894-91754434326946="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmp5go6z4yo TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"echo",
"2"
],
"delta": "0:00:00.005343",
"end": "2023-09-19 10:27:34.789715",
"invocation": {
"module_args": {
"_raw_params": "echo '2'",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:34.784372",
"stderr": "",
"stderr_lines": [],
"stdout": "2",
"stdout_lines": [
"2"
]
}
RUNNING HANDLER [bar : command] ************************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551 `" && echo ansible-tmp-1695112054.84939-80915-139816169826551="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmpcuoqfdyi TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"echo",
"3"
],
"delta": "0:00:01.006513",
"end": "2023-09-19 10:27:36.018385",
"invocation": {
"module_args": {
"_raw_params": "echo '3'",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:35.011872",
"stderr": "",
"stderr_lines": [],
"stdout": "3",
"stdout_lines": [
"3"
]
}
TASK [include_role : bar] ******************************************************
task path: /Volumes/workplace/personal/test/roles/foo/tasks/main.yml:1
TASK [bar : command] ***********************************************************
task path: /Volumes/workplace/personal/test/roles/bar/tasks/main.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766 `" && echo ansible-tmp-1695112056.146764-80937-219796758919766="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmpesvqaeoc TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766/ > /dev/null 2>&1 && sleep 0'
Notification for handler bar_handler has been saved.
changed: [localhost] => {
"changed": true,
"cmd": [
"echo",
"1"
],
"delta": "0:00:00.005010",
"end": "2023-09-19 10:27:36.319682",
"invocation": {
"module_args": {
"_raw_params": "echo 1",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:36.314672",
"stderr": "",
"stderr_lines": [],
"stdout": "1",
"stdout_lines": [
"1"
]
}
TASK [bar : meta] **************************************************************
task path: /Volumes/workplace/personal/test/roles/bar/tasks/main.yml:5
NOTIFIED HANDLER bar : include_tasks for localhost
NOTIFIED HANDLER bar : include_tasks for localhost
META: triggered running handlers for localhost
RUNNING HANDLER [bar : include_tasks] ******************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/main.yml:1
included: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml for localhost => (item=1)
included: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml for localhost => (item=2)
included: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml for localhost => (item=3)
RUNNING HANDLER [bar : command] ************************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050 `" && echo ansible-tmp-1695112056.3968189-80959-67605206314050="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmpsv65_5tb TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"echo",
"1"
],
"delta": "0:00:00.005281",
"end": "2023-09-19 10:27:36.562253",
"invocation": {
"module_args": {
"_raw_params": "echo '1'",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:36.556972",
"stderr": "",
"stderr_lines": [],
"stdout": "1",
"stdout_lines": [
"1"
]
}
RUNNING HANDLER [bar : command] ************************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117 `" && echo ansible-tmp-1695112056.621751-80980-107541433073117="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmp1258e27y TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"echo",
"2"
],
"delta": "0:00:00.005252",
"end": "2023-09-19 10:27:36.772082",
"invocation": {
"module_args": {
"_raw_params": "echo '2'",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:36.766830",
"stderr": "",
"stderr_lines": [],
"stdout": "2",
"stdout_lines": [
"2"
]
}
RUNNING HANDLER [bar : command] ************************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699 `" && echo ansible-tmp-1695112056.828794-81001-161624896246699="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmpwmfv9yp_ TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"echo",
"3"
],
"delta": "0:00:00.004990",
"end": "2023-09-19 10:27:36.998890",
"invocation": {
"module_args": {
"_raw_params": "echo '3'",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:36.993900",
"stderr": "",
"stderr_lines": [],
"stdout": "3",
"stdout_lines": [
"3"
]
}
RUNNING HANDLER [bar : include_tasks] ******************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/main.yml:1
fatal: [localhost]: FAILED! => {
"reason": "Could not find or access '/Volumes/workplace/personal/test/item.yml' on the Ansible Controller."
}
fatal: [localhost]: FAILED! => {
"reason": "Could not find or access '/Volumes/workplace/personal/test/item.yml' on the Ansible Controller."
}
fatal: [localhost]: FAILED! => {
"reason": "Could not find or access '/Volumes/workplace/personal/test/item.yml' on the Ansible Controller."
}
PLAY RECAP *********************************************************************
localhost : ok=15 changed=2 unreachable=0 failed=3 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81722
|
https://github.com/ansible/ansible/pull/81733
|
86fd7026a88988c224ae175a281e7e6e2f3c5bc3
|
1e7f7875c617a12e5b16bcf290d489a6446febdb
| 2023-09-19T08:28:30Z |
python
| 2023-09-21T19:12:04Z |
changelogs/fragments/81722-handler-subdir-include_tasks.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,722 |
include_tasks within handler called within include_role doesn't work
|
### Summary
If a role has an `include_tasks` handler and the role is dynamically included via `include_role`, Ansible cannot find the included file. It finds the included file perfectly well, however, when the role with the handler is listed directly in a play.
### Issue Type
Bug Report
### Component Name
handlers
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.4]
config file = None
configured module search path = ['/Users/tensin/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible
ansible collection location = /Users/tensin/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/homebrew/bin/ansible
python version = 3.11.5 (main, Aug 24 2023, 15:09:45) [Clang 14.0.3 (clang-1403.0.22.14.1)] (/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
EDITOR(env: EDITOR) = vim
```
### OS / Environment
macOS Ventura
### Steps to Reproduce
```
# tree
.
|-- playbook.yml
`-- roles
|-- bar
| |-- handlers
| | |-- item.yml
| | `-- main.yml
| `-- tasks
| `-- main.yml
`-- foo
`-- tasks
`-- main.yml
```
playbook.yml:
```
- name: Test playbook
hosts: localhost
roles:
- bar
- foo
```
foo/tasks/main.yml:
```
- include_role:
name: bar
```
bar/tasks/main.yml:
```
- command: echo 1
changed_when: true
notify: bar_handler
- meta: flush_handlers
```
bar/handlers/main.yml:
```
- listen: bar_handler
include_tasks: item.yml
loop: [1, 2, 3]
```
bar/handlers/item.yml:
```
- command: echo '{{ item }}'
changed_when: false
```
Run using: `ansible-playbook playbook.yml`
### Expected Results
The `bar` role is executed twice; its handlers run successfully both times.
### Actual Results
```console
ansible-playbook [core 2.15.4]
config file = None
configured module search path = ['/Users/tensin/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible
ansible collection location = /Users/tensin/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/homebrew/bin/ansible-playbook
python version = 3.11.5 (main, Aug 24 2023, 15:09:45) [Clang 14.0.3 (clang-1403.0.22.14.1)] (/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python)
jinja version = 3.1.2
libyaml = True
No config file found; using defaults
setting up inventory plugins
Loading collection ansible.builtin from
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
yaml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
ini declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
toml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Loading callback plugin default of type stdout, v2.0 from /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: playbook.yml *********************************************************
Positional arguments: playbook.yml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('/etc/ansible/hosts',)
forks: 5
1 plays in playbook.yml
PLAY [Test playbook] ***********************************************************
TASK [Gathering Facts] *********************************************************
task path: /Volumes/workplace/personal/test/playbook.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572 `" && echo ansible-tmp-1695112052.4727402-80815-156931745653572="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/setup.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmpktjgn0yt TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572/AnsiballZ_setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
TASK [bar : command] ***********************************************************
task path: /Volumes/workplace/personal/test/roles/bar/tasks/main.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206 `" && echo ansible-tmp-1695112054.1128068-80849-39195856654206="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmpm2eyr1c4 TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206/ > /dev/null 2>&1 && sleep 0'
Notification for handler bar_handler has been saved.
changed: [localhost] => {
"changed": true,
"cmd": [
"echo",
"1"
],
"delta": "0:00:00.005418",
"end": "2023-09-19 10:27:34.337640",
"invocation": {
"module_args": {
"_raw_params": "echo 1",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:34.332222",
"stderr": "",
"stderr_lines": [],
"stdout": "1",
"stdout_lines": [
"1"
]
}
TASK [bar : meta] **************************************************************
task path: /Volumes/workplace/personal/test/roles/bar/tasks/main.yml:5
NOTIFIED HANDLER bar : include_tasks for localhost
META: triggered running handlers for localhost
RUNNING HANDLER [bar : include_tasks] ******************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/main.yml:1
included: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml for localhost => (item=1)
included: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml for localhost => (item=2)
included: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml for localhost => (item=3)
RUNNING HANDLER [bar : command] ************************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147 `" && echo ansible-tmp-1695112054.42143-80873-18381882637147="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmp2xu8xoya TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"echo",
"1"
],
"delta": "0:00:00.004997",
"end": "2023-09-19 10:27:34.580493",
"invocation": {
"module_args": {
"_raw_params": "echo '1'",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:34.575496",
"stderr": "",
"stderr_lines": [],
"stdout": "1",
"stdout_lines": [
"1"
]
}
RUNNING HANDLER [bar : command] ************************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946 `" && echo ansible-tmp-1695112054.6377301-80894-91754434326946="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmp5go6z4yo TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"echo",
"2"
],
"delta": "0:00:00.005343",
"end": "2023-09-19 10:27:34.789715",
"invocation": {
"module_args": {
"_raw_params": "echo '2'",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:34.784372",
"stderr": "",
"stderr_lines": [],
"stdout": "2",
"stdout_lines": [
"2"
]
}
RUNNING HANDLER [bar : command] ************************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551 `" && echo ansible-tmp-1695112054.84939-80915-139816169826551="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmpcuoqfdyi TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"echo",
"3"
],
"delta": "0:00:01.006513",
"end": "2023-09-19 10:27:36.018385",
"invocation": {
"module_args": {
"_raw_params": "echo '3'",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:35.011872",
"stderr": "",
"stderr_lines": [],
"stdout": "3",
"stdout_lines": [
"3"
]
}
TASK [include_role : bar] ******************************************************
task path: /Volumes/workplace/personal/test/roles/foo/tasks/main.yml:1
TASK [bar : command] ***********************************************************
task path: /Volumes/workplace/personal/test/roles/bar/tasks/main.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766 `" && echo ansible-tmp-1695112056.146764-80937-219796758919766="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmpesvqaeoc TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766/ > /dev/null 2>&1 && sleep 0'
Notification for handler bar_handler has been saved.
changed: [localhost] => {
"changed": true,
"cmd": [
"echo",
"1"
],
"delta": "0:00:00.005010",
"end": "2023-09-19 10:27:36.319682",
"invocation": {
"module_args": {
"_raw_params": "echo 1",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:36.314672",
"stderr": "",
"stderr_lines": [],
"stdout": "1",
"stdout_lines": [
"1"
]
}
TASK [bar : meta] **************************************************************
task path: /Volumes/workplace/personal/test/roles/bar/tasks/main.yml:5
NOTIFIED HANDLER bar : include_tasks for localhost
NOTIFIED HANDLER bar : include_tasks for localhost
META: triggered running handlers for localhost
RUNNING HANDLER [bar : include_tasks] ******************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/main.yml:1
included: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml for localhost => (item=1)
included: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml for localhost => (item=2)
included: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml for localhost => (item=3)
RUNNING HANDLER [bar : command] ************************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050 `" && echo ansible-tmp-1695112056.3968189-80959-67605206314050="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmpsv65_5tb TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"echo",
"1"
],
"delta": "0:00:00.005281",
"end": "2023-09-19 10:27:36.562253",
"invocation": {
"module_args": {
"_raw_params": "echo '1'",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:36.556972",
"stderr": "",
"stderr_lines": [],
"stdout": "1",
"stdout_lines": [
"1"
]
}
RUNNING HANDLER [bar : command] ************************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117 `" && echo ansible-tmp-1695112056.621751-80980-107541433073117="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmp1258e27y TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"echo",
"2"
],
"delta": "0:00:00.005252",
"end": "2023-09-19 10:27:36.772082",
"invocation": {
"module_args": {
"_raw_params": "echo '2'",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:36.766830",
"stderr": "",
"stderr_lines": [],
"stdout": "2",
"stdout_lines": [
"2"
]
}
RUNNING HANDLER [bar : command] ************************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699 `" && echo ansible-tmp-1695112056.828794-81001-161624896246699="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmpwmfv9yp_ TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"echo",
"3"
],
"delta": "0:00:00.004990",
"end": "2023-09-19 10:27:36.998890",
"invocation": {
"module_args": {
"_raw_params": "echo '3'",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:36.993900",
"stderr": "",
"stderr_lines": [],
"stdout": "3",
"stdout_lines": [
"3"
]
}
RUNNING HANDLER [bar : include_tasks] ******************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/main.yml:1
fatal: [localhost]: FAILED! => {
"reason": "Could not find or access '/Volumes/workplace/personal/test/item.yml' on the Ansible Controller."
}
fatal: [localhost]: FAILED! => {
"reason": "Could not find or access '/Volumes/workplace/personal/test/item.yml' on the Ansible Controller."
}
fatal: [localhost]: FAILED! => {
"reason": "Could not find or access '/Volumes/workplace/personal/test/item.yml' on the Ansible Controller."
}
PLAY RECAP *********************************************************************
localhost : ok=15 changed=2 unreachable=0 failed=3 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81722
|
https://github.com/ansible/ansible/pull/81733
|
86fd7026a88988c224ae175a281e7e6e2f3c5bc3
|
1e7f7875c617a12e5b16bcf290d489a6446febdb
| 2023-09-19T08:28:30Z |
python
| 2023-09-21T19:12:04Z |
lib/ansible/playbook/included_file.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os

from ansible import constants as C
from ansible.errors import AnsibleError
from ansible.executor.task_executor import remove_omit
from ansible.module_utils.common.text.converters import to_text
from ansible.playbook.handler import Handler
from ansible.playbook.task_include import TaskInclude
from ansible.playbook.role_include import IncludeRole
from ansible.template import Templar
from ansible.utils.display import Display

display = Display()


class IncludedFile:

    def __init__(self, filename, args, vars, task, is_role=False):
        self._filename = filename
        self._args = args
        self._vars = vars
        self._task = task
        self._hosts = []
        self._is_role = is_role
        self._results = []

    def add_host(self, host):
        if host not in self._hosts:
            self._hosts.append(host)
            return
        raise ValueError()

    def __eq__(self, other):
        return (other._filename == self._filename and
                other._args == self._args and
                other._vars == self._vars and
                other._task._uuid == self._task._uuid and
                other._task._parent._uuid == self._task._parent._uuid)

    def __repr__(self):
        return "%s (args=%s vars=%s): %s" % (self._filename, self._args, self._vars, self._hosts)

    @staticmethod
    def process_include_results(results, iterator, loader, variable_manager):
        included_files = []
        task_vars_cache = {}

        for res in results:

            original_host = res._host
            original_task = res._task

            if original_task.action in C._ACTION_ALL_INCLUDES:

                if original_task.loop:
                    if 'results' not in res._result:
                        continue
                    include_results = res._result['results']
                else:
                    include_results = [res._result]

                for include_result in include_results:
                    # if the task result was skipped or failed, continue
                    if 'skipped' in include_result and include_result['skipped'] or 'failed' in include_result and include_result['failed']:
                        continue

                    cache_key = (iterator._play, original_host, original_task)
                    try:
                        task_vars = task_vars_cache[cache_key]
                    except KeyError:
                        task_vars = task_vars_cache[cache_key] = variable_manager.get_vars(play=iterator._play, host=original_host, task=original_task)

                    include_args = include_result.get('include_args', dict())
                    special_vars = {}
                    loop_var = include_result.get('ansible_loop_var', 'item')
                    index_var = include_result.get('ansible_index_var')
                    if loop_var in include_result:
                        task_vars[loop_var] = special_vars[loop_var] = include_result[loop_var]
                    if index_var and index_var in include_result:
                        task_vars[index_var] = special_vars[index_var] = include_result[index_var]
                    if '_ansible_item_label' in include_result:
                        task_vars['_ansible_item_label'] = special_vars['_ansible_item_label'] = include_result['_ansible_item_label']
                    if 'ansible_loop' in include_result:
                        task_vars['ansible_loop'] = special_vars['ansible_loop'] = include_result['ansible_loop']
                    if original_task.no_log and '_ansible_no_log' not in include_args:
                        task_vars['_ansible_no_log'] = special_vars['_ansible_no_log'] = original_task.no_log

                    # get search path for this task to pass to lookup plugins that may be used in pathing to
                    # the included file
                    task_vars['ansible_search_path'] = original_task.get_search_path()

                    # ensure basedir is always in (dwim already searches here but we need to display it)
                    if loader.get_basedir() not in task_vars['ansible_search_path']:
                        task_vars['ansible_search_path'].append(loader.get_basedir())

                    templar = Templar(loader=loader, variables=task_vars)

                    if original_task.action in C._ACTION_INCLUDE_TASKS:
                        include_file = None

                        if original_task._parent:
                            # handle relative includes by walking up the list of parent include
                            # tasks and checking the relative result to see if it exists
                            parent_include = original_task._parent
                            cumulative_path = None
                            while parent_include is not None:
                                if not isinstance(parent_include, TaskInclude):
                                    parent_include = parent_include._parent
                                    continue
                                if isinstance(parent_include, IncludeRole):
                                    parent_include_dir = parent_include._role_path
                                else:
                                    try:
                                        parent_include_dir = os.path.dirname(templar.template(parent_include.args.get('_raw_params')))
                                    except AnsibleError as e:
                                        parent_include_dir = ''
                                        display.warning(
                                            'Templating the path of the parent %s failed. The path to the '
                                            'included file may not be found. '
                                            'The error was: %s.' % (original_task.action, to_text(e))
                                        )
                                if cumulative_path is not None and not os.path.isabs(cumulative_path):
                                    cumulative_path = os.path.join(parent_include_dir, cumulative_path)
                                else:
                                    cumulative_path = parent_include_dir
                                include_target = templar.template(include_result['include'])
                                if original_task._role:
                                    new_basedir = os.path.join(original_task._role._role_path, 'tasks', cumulative_path)
                                    candidates = [loader.path_dwim_relative(original_task._role._role_path, 'tasks', include_target),
                                                  loader.path_dwim_relative(new_basedir, 'tasks', include_target)]
                                    for include_file in candidates:
                                        try:
                                            # may throw OSError
                                            os.stat(include_file)
                                            # or select the task file if it exists
                                            break
                                        except OSError:
                                            pass
                                else:
                                    include_file = loader.path_dwim_relative(loader.get_basedir(), cumulative_path, include_target)

                                if os.path.exists(include_file):
                                    break
                                else:
                                    parent_include = parent_include._parent

                        if include_file is None:
                            if original_task._role:
                                include_target = templar.template(include_result['include'])
                                include_file = loader.path_dwim_relative(
                                    original_task._role._role_path,
                                    'handlers' if isinstance(original_task, Handler) else 'tasks',
                                    include_target,
                                    is_role=True)
                            else:
                                include_file = loader.path_dwim(include_result['include'])

                        include_file = templar.template(include_file)
                        inc_file = IncludedFile(include_file, include_args, special_vars, original_task)
                    else:
                        # template the included role's name here
                        role_name = include_args.pop('name', include_args.pop('role', None))
                        if role_name is not None:
                            role_name = templar.template(role_name)

                        new_task = original_task.copy()
                        new_task.post_validate(templar=templar)
                        new_task._role_name = role_name
                        for from_arg in new_task.FROM_ARGS:
                            if from_arg in include_args:
                                from_key = from_arg.removesuffix('_from')
                                new_task._from_files[from_key] = templar.template(include_args.pop(from_arg))

                        omit_token = task_vars.get('omit')
                        if omit_token:
                            new_task._from_files = remove_omit(new_task._from_files, omit_token)

                        inc_file = IncludedFile(role_name, include_args, special_vars, new_task, is_role=True)
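
                    # Fold this result into the list built so far: two entries
                    # compare equal (per __eq__ above) when they reference the same
                    # file/role with the same args, vars, and originating task, and
                    # add_host() accepts a given host at most once per entry. When
                    # this host is already on the matching entry, the search resumes
                    # past it (idx + pos + 1), so a repeated notification for the
                    # same host yields a separate entry instead of being dropped.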
                    idx = 0
                    orig_inc_file = inc_file
                    while 1:
                        try:
                            pos = included_files[idx:].index(orig_inc_file)
                            # pos is relative to idx since we are slicing
                            # use idx + pos due to relative indexing
                            inc_file = included_files[idx + pos]
                        except ValueError:
                            included_files.append(orig_inc_file)
                            inc_file = orig_inc_file

                        try:
                            inc_file.add_host(original_host)
                            inc_file._results.append(res)
                        except ValueError:
                            # The host already exists for this include, advance forward, this is a new include
                            idx += pos + 1
                        else:
                            break

        return included_files
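
Reading the `include_tasks` branch above against this report: for a handler notified in a standalone role there is no `TaskInclude` in the parent chain, so the relative-include walk never runs, and the `include_file is None` fallback correctly selects the `handlers` subdirectory for `Handler` tasks. When the role is reached via `include_role`, the chain does contain the `IncludeRole` task, the walk runs instead, and both of its candidates are built under `tasks/` only. A minimal sketch of that asymmetry — `fallback_subdir` and `walk_subdir` are illustrative names, not part of Ansible's API:

```python
import os

def fallback_subdir(is_handler):
    # The include_file-is-None fallback selects the subdir by task type,
    # mirroring: 'handlers' if isinstance(original_task, Handler) else 'tasks'
    return 'handlers' if is_handler else 'tasks'

def walk_subdir(is_handler):
    # The parent-include walk hard-codes 'tasks' in both of its candidates,
    # even when the including task is a handler.
    return 'tasks'

role_path = '/test/roles/bar'
for resolver in (fallback_subdir, walk_subdir):
    print(resolver.__name__, os.path.join(role_path, resolver(True), 'item.yml'))
# fallback_subdir /test/roles/bar/handlers/item.yml  (standalone run: found)
# walk_subdir /test/roles/bar/tasks/item.yml         (via include_role: missing)
```

That would explain why only the second, `include_role`-driven flush fails; the changelog fragment in the fix metadata (`81722-handler-subdir-include_tasks.yml`) points at the same subdirectory selection.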
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,722 |
include_tasks within handler called within include_role doesn't work
|
### Summary
If a role has an `include_tasks` handler and the role is dynamically included via `include_role`, Ansible cannot find the included file. It finds the included file perfectly well, however, when the role with the handler is listed directly in a play.
### Issue Type
Bug Report
### Component Name
handlers
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.4]
config file = None
configured module search path = ['/Users/tensin/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible
ansible collection location = /Users/tensin/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/homebrew/bin/ansible
python version = 3.11.5 (main, Aug 24 2023, 15:09:45) [Clang 14.0.3 (clang-1403.0.22.14.1)] (/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
EDITOR(env: EDITOR) = vim
```
### OS / Environment
macOS Ventura
### Steps to Reproduce
```
# tree
.
|-- playbook.yml
`-- roles
|-- bar
| |-- handlers
| | |-- item.yml
| | `-- main.yml
| `-- tasks
| `-- main.yml
`-- foo
`-- tasks
`-- main.yml
```
playbook.yml:
```
- name: Test playbook
hosts: localhost
roles:
- bar
- foo
```
foo/tasks/main.yml:
```
- include_role:
name: bar
```
bar/tasks/main.yml:
```
- command: echo 1
changed_when: true
notify: bar_handler
- meta: flush_handlers
```
bar/handlers/main.yml:
```
- listen: bar_handler
include_tasks: item.yml
loop: [1, 2, 3]
```
bar/handlers/item.yml:
```
- command: echo '{{ item }}'
changed_when: false
```
Run using: `ansible-playbook playbook.yml`
### Expected Results
The `bar` role is executed twice; its handlers run successfully both times.
### Actual Results
```console
ansible-playbook [core 2.15.4]
config file = None
configured module search path = ['/Users/tensin/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible
ansible collection location = /Users/tensin/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/homebrew/bin/ansible-playbook
python version = 3.11.5 (main, Aug 24 2023, 15:09:45) [Clang 14.0.3 (clang-1403.0.22.14.1)] (/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python)
jinja version = 3.1.2
libyaml = True
No config file found; using defaults
setting up inventory plugins
Loading collection ansible.builtin from
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
yaml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
ini declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
toml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Loading callback plugin default of type stdout, v2.0 from /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: playbook.yml *********************************************************
Positional arguments: playbook.yml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('/etc/ansible/hosts',)
forks: 5
1 plays in playbook.yml
PLAY [Test playbook] ***********************************************************
TASK [Gathering Facts] *********************************************************
task path: /Volumes/workplace/personal/test/playbook.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572 `" && echo ansible-tmp-1695112052.4727402-80815-156931745653572="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/setup.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmpktjgn0yt TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572/AnsiballZ_setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
TASK [bar : command] ***********************************************************
task path: /Volumes/workplace/personal/test/roles/bar/tasks/main.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206 `" && echo ansible-tmp-1695112054.1128068-80849-39195856654206="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmpm2eyr1c4 TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206/ > /dev/null 2>&1 && sleep 0'
Notification for handler bar_handler has been saved.
changed: [localhost] => {
"changed": true,
"cmd": [
"echo",
"1"
],
"delta": "0:00:00.005418",
"end": "2023-09-19 10:27:34.337640",
"invocation": {
"module_args": {
"_raw_params": "echo 1",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:34.332222",
"stderr": "",
"stderr_lines": [],
"stdout": "1",
"stdout_lines": [
"1"
]
}
TASK [bar : meta] **************************************************************
task path: /Volumes/workplace/personal/test/roles/bar/tasks/main.yml:5
NOTIFIED HANDLER bar : include_tasks for localhost
META: triggered running handlers for localhost
RUNNING HANDLER [bar : include_tasks] ******************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/main.yml:1
included: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml for localhost => (item=1)
included: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml for localhost => (item=2)
included: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml for localhost => (item=3)
RUNNING HANDLER [bar : command] ************************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147 `" && echo ansible-tmp-1695112054.42143-80873-18381882637147="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmp2xu8xoya TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"echo",
"1"
],
"delta": "0:00:00.004997",
"end": "2023-09-19 10:27:34.580493",
"invocation": {
"module_args": {
"_raw_params": "echo '1'",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:34.575496",
"stderr": "",
"stderr_lines": [],
"stdout": "1",
"stdout_lines": [
"1"
]
}
RUNNING HANDLER [bar : command] ************************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946 `" && echo ansible-tmp-1695112054.6377301-80894-91754434326946="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmp5go6z4yo TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"echo",
"2"
],
"delta": "0:00:00.005343",
"end": "2023-09-19 10:27:34.789715",
"invocation": {
"module_args": {
"_raw_params": "echo '2'",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:34.784372",
"stderr": "",
"stderr_lines": [],
"stdout": "2",
"stdout_lines": [
"2"
]
}
RUNNING HANDLER [bar : command] ************************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551 `" && echo ansible-tmp-1695112054.84939-80915-139816169826551="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmpcuoqfdyi TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"echo",
"3"
],
"delta": "0:00:01.006513",
"end": "2023-09-19 10:27:36.018385",
"invocation": {
"module_args": {
"_raw_params": "echo '3'",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:35.011872",
"stderr": "",
"stderr_lines": [],
"stdout": "3",
"stdout_lines": [
"3"
]
}
TASK [include_role : bar] ******************************************************
task path: /Volumes/workplace/personal/test/roles/foo/tasks/main.yml:1
TASK [bar : command] ***********************************************************
task path: /Volumes/workplace/personal/test/roles/bar/tasks/main.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766 `" && echo ansible-tmp-1695112056.146764-80937-219796758919766="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmpesvqaeoc TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766/ > /dev/null 2>&1 && sleep 0'
Notification for handler bar_handler has been saved.
changed: [localhost] => {
"changed": true,
"cmd": [
"echo",
"1"
],
"delta": "0:00:00.005010",
"end": "2023-09-19 10:27:36.319682",
"invocation": {
"module_args": {
"_raw_params": "echo 1",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:36.314672",
"stderr": "",
"stderr_lines": [],
"stdout": "1",
"stdout_lines": [
"1"
]
}
TASK [bar : meta] **************************************************************
task path: /Volumes/workplace/personal/test/roles/bar/tasks/main.yml:5
NOTIFIED HANDLER bar : include_tasks for localhost
NOTIFIED HANDLER bar : include_tasks for localhost
META: triggered running handlers for localhost
RUNNING HANDLER [bar : include_tasks] ******************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/main.yml:1
included: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml for localhost => (item=1)
included: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml for localhost => (item=2)
included: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml for localhost => (item=3)
RUNNING HANDLER [bar : command] ************************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050 `" && echo ansible-tmp-1695112056.3968189-80959-67605206314050="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmpsv65_5tb TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"echo",
"1"
],
"delta": "0:00:00.005281",
"end": "2023-09-19 10:27:36.562253",
"invocation": {
"module_args": {
"_raw_params": "echo '1'",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:36.556972",
"stderr": "",
"stderr_lines": [],
"stdout": "1",
"stdout_lines": [
"1"
]
}
RUNNING HANDLER [bar : command] ************************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117 `" && echo ansible-tmp-1695112056.621751-80980-107541433073117="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmp1258e27y TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"echo",
"2"
],
"delta": "0:00:00.005252",
"end": "2023-09-19 10:27:36.772082",
"invocation": {
"module_args": {
"_raw_params": "echo '2'",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:36.766830",
"stderr": "",
"stderr_lines": [],
"stdout": "2",
"stdout_lines": [
"2"
]
}
RUNNING HANDLER [bar : command] ************************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699 `" && echo ansible-tmp-1695112056.828794-81001-161624896246699="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmpwmfv9yp_ TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"echo",
"3"
],
"delta": "0:00:00.004990",
"end": "2023-09-19 10:27:36.998890",
"invocation": {
"module_args": {
"_raw_params": "echo '3'",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:36.993900",
"stderr": "",
"stderr_lines": [],
"stdout": "3",
"stdout_lines": [
"3"
]
}
RUNNING HANDLER [bar : include_tasks] ******************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/main.yml:1
fatal: [localhost]: FAILED! => {
"reason": "Could not find or access '/Volumes/workplace/personal/test/item.yml' on the Ansible Controller."
}
fatal: [localhost]: FAILED! => {
"reason": "Could not find or access '/Volumes/workplace/personal/test/item.yml' on the Ansible Controller."
}
fatal: [localhost]: FAILED! => {
"reason": "Could not find or access '/Volumes/workplace/personal/test/item.yml' on the Ansible Controller."
}
PLAY RECAP *********************************************************************
localhost : ok=15 changed=2 unreachable=0 failed=3 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81722
|
https://github.com/ansible/ansible/pull/81733
|
86fd7026a88988c224ae175a281e7e6e2f3c5bc3
|
1e7f7875c617a12e5b16bcf290d489a6446febdb
| 2023-09-19T08:28:30Z |
python
| 2023-09-21T19:12:04Z |
test/integration/targets/handlers/roles/include_role_include_tasks_handler/handlers/include_handlers.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 81,722 |
include_tasks within handler called within include_role doesn't work
|
### Summary
If a role has an `include_tasks` handler and that role is dynamically included via `include_role`, Ansible cannot find the included file. However, it finds the file perfectly well when the role with the handler is listed in a play directly.
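A condensed sketch of the failing pattern (file names taken from the full reproduction below; `bar`'s handler includes a task file by a role-relative path, and `foo` pulls `bar` in dynamically):
```
# roles/bar/handlers/main.yml
- listen: bar_handler
  include_tasks: item.yml   # resolves fine when bar is listed in the play directly

# roles/foo/tasks/main.yml
- include_role:
    name: bar               # the same handler, notified from this inclusion, fails to find item.yml
```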
### Issue Type
Bug Report
### Component Name
handlers
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.4]
config file = None
configured module search path = ['/Users/tensin/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible
ansible collection location = /Users/tensin/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/homebrew/bin/ansible
python version = 3.11.5 (main, Aug 24 2023, 15:09:45) [Clang 14.0.3 (clang-1403.0.22.14.1)] (/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
EDITOR(env: EDITOR) = vim
```
### OS / Environment
macOS Ventura
### Steps to Reproduce
```
# tree
.
|-- playbook.yml
`-- roles
|-- bar
| |-- handlers
| | |-- item.yml
| | `-- main.yml
| `-- tasks
| `-- main.yml
`-- foo
`-- tasks
`-- main.yml
```
playbook.yml:
```
- name: Test playbook
hosts: localhost
roles:
- bar
- foo
```
foo/tasks/main.yml:
```
- include_role:
name: bar
```
bar/tasks/main.yml:
```
- command: echo 1
changed_when: true
notify: bar_handler
- meta: flush_handlers
```
bar/handlers/main.yml:
```
- listen: bar_handler
include_tasks: item.yml
loop: [1, 2, 3]
```
bar/handlers/item.yml:
```
- command: echo '{{ item }}'
changed_when: false
```
Run using: `ansible-playbook playbook.yml`
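One possible workaround, shown here only as an untested sketch and not something verified against this reproduction: anchor the include on the `role_path` magic variable so the file lookup no longer depends on which context notified the handler.
```
# bar/handlers/main.yml -- hypothetical variant, not part of the original reproduction
- listen: bar_handler
  include_tasks: "{{ role_path }}/handlers/item.yml"
  loop: [1, 2, 3]
```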
### Expected Results
The `bar` role is executed twice; its handlers are executed twice.
### Actual Results
```console
ansible-playbook [core 2.15.4]
config file = None
configured module search path = ['/Users/tensin/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible
ansible collection location = /Users/tensin/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/homebrew/bin/ansible-playbook
python version = 3.11.5 (main, Aug 24 2023, 15:09:45) [Clang 14.0.3 (clang-1403.0.22.14.1)] (/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python)
jinja version = 3.1.2
libyaml = True
No config file found; using defaults
setting up inventory plugins
Loading collection ansible.builtin from
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
yaml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
ini declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
toml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Loading callback plugin default of type stdout, v2.0 from /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: playbook.yml *********************************************************
Positional arguments: playbook.yml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('/etc/ansible/hosts',)
forks: 5
1 plays in playbook.yml
PLAY [Test playbook] ***********************************************************
TASK [Gathering Facts] *********************************************************
task path: /Volumes/workplace/personal/test/playbook.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572 `" && echo ansible-tmp-1695112052.4727402-80815-156931745653572="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/setup.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmpktjgn0yt TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572/AnsiballZ_setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112052.4727402-80815-156931745653572/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
TASK [bar : command] ***********************************************************
task path: /Volumes/workplace/personal/test/roles/bar/tasks/main.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206 `" && echo ansible-tmp-1695112054.1128068-80849-39195856654206="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmpm2eyr1c4 TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.1128068-80849-39195856654206/ > /dev/null 2>&1 && sleep 0'
Notification for handler bar_handler has been saved.
changed: [localhost] => {
"changed": true,
"cmd": [
"echo",
"1"
],
"delta": "0:00:00.005418",
"end": "2023-09-19 10:27:34.337640",
"invocation": {
"module_args": {
"_raw_params": "echo 1",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:34.332222",
"stderr": "",
"stderr_lines": [],
"stdout": "1",
"stdout_lines": [
"1"
]
}
TASK [bar : meta] **************************************************************
task path: /Volumes/workplace/personal/test/roles/bar/tasks/main.yml:5
NOTIFIED HANDLER bar : include_tasks for localhost
META: triggered running handlers for localhost
RUNNING HANDLER [bar : include_tasks] ******************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/main.yml:1
included: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml for localhost => (item=1)
included: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml for localhost => (item=2)
included: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml for localhost => (item=3)
RUNNING HANDLER [bar : command] ************************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147 `" && echo ansible-tmp-1695112054.42143-80873-18381882637147="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmp2xu8xoya TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.42143-80873-18381882637147/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"echo",
"1"
],
"delta": "0:00:00.004997",
"end": "2023-09-19 10:27:34.580493",
"invocation": {
"module_args": {
"_raw_params": "echo '1'",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:34.575496",
"stderr": "",
"stderr_lines": [],
"stdout": "1",
"stdout_lines": [
"1"
]
}
RUNNING HANDLER [bar : command] ************************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946 `" && echo ansible-tmp-1695112054.6377301-80894-91754434326946="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmp5go6z4yo TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.6377301-80894-91754434326946/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"echo",
"2"
],
"delta": "0:00:00.005343",
"end": "2023-09-19 10:27:34.789715",
"invocation": {
"module_args": {
"_raw_params": "echo '2'",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:34.784372",
"stderr": "",
"stderr_lines": [],
"stdout": "2",
"stdout_lines": [
"2"
]
}
RUNNING HANDLER [bar : command] ************************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551 `" && echo ansible-tmp-1695112054.84939-80915-139816169826551="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmpcuoqfdyi TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112054.84939-80915-139816169826551/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"echo",
"3"
],
"delta": "0:00:01.006513",
"end": "2023-09-19 10:27:36.018385",
"invocation": {
"module_args": {
"_raw_params": "echo '3'",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:35.011872",
"stderr": "",
"stderr_lines": [],
"stdout": "3",
"stdout_lines": [
"3"
]
}
TASK [include_role : bar] ******************************************************
task path: /Volumes/workplace/personal/test/roles/foo/tasks/main.yml:1
TASK [bar : command] ***********************************************************
task path: /Volumes/workplace/personal/test/roles/bar/tasks/main.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766 `" && echo ansible-tmp-1695112056.146764-80937-219796758919766="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmpesvqaeoc TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.146764-80937-219796758919766/ > /dev/null 2>&1 && sleep 0'
Notification for handler bar_handler has been saved.
changed: [localhost] => {
"changed": true,
"cmd": [
"echo",
"1"
],
"delta": "0:00:00.005010",
"end": "2023-09-19 10:27:36.319682",
"invocation": {
"module_args": {
"_raw_params": "echo 1",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:36.314672",
"stderr": "",
"stderr_lines": [],
"stdout": "1",
"stdout_lines": [
"1"
]
}
TASK [bar : meta] **************************************************************
task path: /Volumes/workplace/personal/test/roles/bar/tasks/main.yml:5
NOTIFIED HANDLER bar : include_tasks for localhost
NOTIFIED HANDLER bar : include_tasks for localhost
META: triggered running handlers for localhost
RUNNING HANDLER [bar : include_tasks] ******************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/main.yml:1
included: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml for localhost => (item=1)
included: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml for localhost => (item=2)
included: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml for localhost => (item=3)
RUNNING HANDLER [bar : command] ************************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050 `" && echo ansible-tmp-1695112056.3968189-80959-67605206314050="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmpsv65_5tb TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.3968189-80959-67605206314050/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"echo",
"1"
],
"delta": "0:00:00.005281",
"end": "2023-09-19 10:27:36.562253",
"invocation": {
"module_args": {
"_raw_params": "echo '1'",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:36.556972",
"stderr": "",
"stderr_lines": [],
"stdout": "1",
"stdout_lines": [
"1"
]
}
RUNNING HANDLER [bar : command] ************************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117 `" && echo ansible-tmp-1695112056.621751-80980-107541433073117="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmp1258e27y TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.621751-80980-107541433073117/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"echo",
"2"
],
"delta": "0:00:00.005252",
"end": "2023-09-19 10:27:36.772082",
"invocation": {
"module_args": {
"_raw_params": "echo '2'",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:36.766830",
"stderr": "",
"stderr_lines": [],
"stdout": "2",
"stdout_lines": [
"2"
]
}
RUNNING HANDLER [bar : command] ************************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/item.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tensin
<127.0.0.1> EXEC /bin/sh -c 'echo ~tensin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/tensin/.ansible/tmp `"&& mkdir "` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699 `" && echo ansible-tmp-1695112056.828794-81001-161624896246699="` echo /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/8.4.0/libexec/lib/python3.11/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/tensin/.ansible/tmp/ansible-local-80811l6lp_m_7/tmpwmfv9yp_ TO /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699/ /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/8.4.0/libexec/bin/python /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/tensin/.ansible/tmp/ansible-tmp-1695112056.828794-81001-161624896246699/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"echo",
"3"
],
"delta": "0:00:00.004990",
"end": "2023-09-19 10:27:36.998890",
"invocation": {
"module_args": {
"_raw_params": "echo '3'",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2023-09-19 10:27:36.993900",
"stderr": "",
"stderr_lines": [],
"stdout": "3",
"stdout_lines": [
"3"
]
}
RUNNING HANDLER [bar : include_tasks] ******************************************
task path: /Volumes/workplace/personal/test/roles/bar/handlers/main.yml:1
fatal: [localhost]: FAILED! => {
"reason": "Could not find or access '/Volumes/workplace/personal/test/item.yml' on the Ansible Controller."
}
fatal: [localhost]: FAILED! => {
"reason": "Could not find or access '/Volumes/workplace/personal/test/item.yml' on the Ansible Controller."
}
fatal: [localhost]: FAILED! => {
"reason": "Could not find or access '/Volumes/workplace/personal/test/item.yml' on the Ansible Controller."
}
PLAY RECAP *********************************************************************
localhost : ok=15 changed=2 unreachable=0 failed=3 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/81722
|
https://github.com/ansible/ansible/pull/81733
|
86fd7026a88988c224ae175a281e7e6e2f3c5bc3
|
1e7f7875c617a12e5b16bcf290d489a6446febdb
| 2023-09-19T08:28:30Z |
python
| 2023-09-21T19:12:04Z |
test/integration/targets/handlers/roles/include_role_include_tasks_handler/handlers/main.yml
|